Jan 30 13:55:10.017529 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:55:10.017567 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:55:10.017585 kernel: BIOS-provided physical RAM map: Jan 30 13:55:10.017595 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 13:55:10.017604 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 13:55:10.017615 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 13:55:10.017626 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 30 13:55:10.017636 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 30 13:55:10.017647 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 13:55:10.017662 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 13:55:10.017672 kernel: NX (Execute Disable) protection: active Jan 30 13:55:10.017682 kernel: APIC: Static calls initialized Jan 30 13:55:10.017701 kernel: SMBIOS 2.8 present. Jan 30 13:55:10.017713 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 30 13:55:10.017726 kernel: Hypervisor detected: KVM Jan 30 13:55:10.017744 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:55:10.017763 kernel: kvm-clock: using sched offset of 3718561931 cycles Jan 30 13:55:10.017779 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:55:10.017794 kernel: tsc: Detected 2494.140 MHz processor Jan 30 13:55:10.017808 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:55:10.017823 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:55:10.017838 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 30 13:55:10.017852 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 13:55:10.017867 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:55:10.017885 kernel: ACPI: Early table checksum verification disabled Jan 30 13:55:10.017898 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 30 13:55:10.017909 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.017917 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.017929 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.017941 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 30 13:55:10.017954 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.017966 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.017977 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.017993 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:55:10.018004 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 30 13:55:10.018017 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 30 13:55:10.018028 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 30 13:55:10.018040 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 30 13:55:10.018052 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 30 13:55:10.018064 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 30 13:55:10.018086 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 30 13:55:10.018098 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:55:10.018111 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:55:10.018123 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 30 13:55:10.018134 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 30 13:55:10.018153 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jan 30 13:55:10.018185 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jan 30 13:55:10.018203 kernel: Zone ranges: Jan 30 13:55:10.018215 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:55:10.018226 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 30 13:55:10.018238 kernel: Normal empty Jan 30 13:55:10.018250 kernel: Movable zone start for each node Jan 30 13:55:10.018262 kernel: Early memory node ranges Jan 30 13:55:10.018274 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 13:55:10.018286 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 30 13:55:10.018299 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 30 13:55:10.018316 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:55:10.018331 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 13:55:10.018347 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 30 13:55:10.018360 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 13:55:10.018373 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:55:10.018386 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:55:10.018398 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:55:10.018411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:55:10.018423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:55:10.018441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:55:10.018453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:55:10.018466 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:55:10.018479 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:55:10.018493 kernel: TSC deadline timer available Jan 30 13:55:10.018506 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:55:10.018519 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 13:55:10.018532 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 30 13:55:10.018551 kernel: Booting paravirtualized kernel on KVM Jan 30 13:55:10.018566 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:55:10.018586 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:55:10.018600 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 30 13:55:10.018612 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:55:10.018624 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:55:10.018637 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 13:55:10.018653 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:55:10.018666 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:55:10.018679 kernel: random: crng init done Jan 30 13:55:10.018699 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:55:10.018712 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:55:10.018727 kernel: Fallback order for Node 0: 0 Jan 30 13:55:10.018743 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jan 30 13:55:10.018758 kernel: Policy zone: DMA32 Jan 30 13:55:10.018771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:55:10.018786 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Jan 30 13:55:10.018801 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:55:10.018820 kernel: Kernel/User page tables isolation: enabled Jan 30 13:55:10.018834 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:55:10.018847 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:55:10.018858 kernel: Dynamic Preempt: voluntary Jan 30 13:55:10.018870 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:55:10.018884 kernel: rcu: RCU event tracing is enabled. Jan 30 13:55:10.018897 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:55:10.018910 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:55:10.018922 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:55:10.018933 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:55:10.018952 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:55:10.018964 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:55:10.018976 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 13:55:10.018990 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 30 13:55:10.019010 kernel: Console: colour VGA+ 80x25 Jan 30 13:55:10.019023 kernel: printk: console [tty0] enabled Jan 30 13:55:10.019036 kernel: printk: console [ttyS0] enabled Jan 30 13:55:10.019048 kernel: ACPI: Core revision 20230628 Jan 30 13:55:10.019061 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 13:55:10.019079 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:55:10.019091 kernel: x2apic enabled Jan 30 13:55:10.019102 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:55:10.019115 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 13:55:10.019127 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jan 30 13:55:10.019139 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Jan 30 13:55:10.019151 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 13:55:10.021237 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 13:55:10.021291 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:55:10.021307 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:55:10.021320 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:55:10.021337 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:55:10.021350 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 30 13:55:10.021363 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:55:10.021376 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:55:10.021389 kernel: MDS: Mitigation: Clear CPU buffers Jan 30 13:55:10.021403 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:55:10.021431 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:55:10.021444 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:55:10.021457 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:55:10.021470 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:55:10.021484 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 30 13:55:10.021498 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:55:10.021510 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:55:10.021524 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:55:10.021542 kernel: landlock: Up and running. Jan 30 13:55:10.021555 kernel: SELinux: Initializing. Jan 30 13:55:10.021569 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:55:10.021583 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:55:10.021596 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 30 13:55:10.021623 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:55:10.021638 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:55:10.021652 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:55:10.021666 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 30 13:55:10.021686 kernel: signal: max sigframe size: 1776 Jan 30 13:55:10.021699 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:55:10.021713 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:55:10.021727 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:55:10.021740 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:55:10.021753 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:55:10.021766 kernel: .... node #0, CPUs: #1 Jan 30 13:55:10.021780 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:55:10.021802 kernel: smpboot: Max logical packages: 1 Jan 30 13:55:10.021820 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Jan 30 13:55:10.021834 kernel: devtmpfs: initialized Jan 30 13:55:10.021848 kernel: x86/mm: Memory block size: 128MB Jan 30 13:55:10.021861 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:55:10.021874 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:55:10.021888 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:55:10.021902 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:55:10.021918 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:55:10.021931 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:55:10.021950 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:55:10.021965 kernel: audit: type=2000 audit(1738245308.476:1): state=initialized audit_enabled=0 res=1 Jan 30 13:55:10.021980 kernel: cpuidle: using governor menu Jan 30 13:55:10.021995 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:55:10.022010 kernel: dca service started, version 1.12.1 Jan 30 13:55:10.022023 kernel: PCI: Using configuration type 1 for base access Jan 30 13:55:10.022036 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:55:10.022049 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:55:10.022063 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:55:10.022083 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:55:10.022097 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:55:10.022110 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:55:10.022124 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:55:10.022138 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:55:10.022151 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:55:10.022187 kernel: ACPI: Interpreter enabled Jan 30 13:55:10.022201 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:55:10.022215 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:55:10.022237 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:55:10.022253 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:55:10.022267 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 30 13:55:10.022281 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:55:10.024286 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:55:10.024524 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 13:55:10.024710 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 13:55:10.024746 kernel: acpiphp: Slot [3] registered Jan 30 13:55:10.024763 kernel: acpiphp: Slot [4] registered Jan 30 13:55:10.024777 kernel: acpiphp: Slot [5] registered Jan 30 13:55:10.024792 kernel: acpiphp: Slot [6] registered Jan 30 13:55:10.024806 kernel: acpiphp: Slot [7] registered Jan 30 13:55:10.024821 kernel: acpiphp: Slot [8] registered Jan 30 13:55:10.024835 kernel: acpiphp: Slot [9] registered Jan 30 13:55:10.024953 kernel: acpiphp: Slot [10] registered Jan 30 13:55:10.024971 kernel: acpiphp: Slot [11] registered Jan 30 13:55:10.024984 kernel: acpiphp: Slot [12] registered Jan 30 13:55:10.025032 kernel: acpiphp: Slot [13] registered Jan 30 13:55:10.025046 kernel: acpiphp: Slot [14] registered Jan 30 13:55:10.025059 kernel: acpiphp: Slot [15] registered Jan 30 13:55:10.025072 kernel: acpiphp: Slot [16] registered Jan 30 13:55:10.025086 kernel: acpiphp: Slot [17] registered Jan 30 13:55:10.025100 kernel: acpiphp: Slot [18] registered Jan 30 13:55:10.025114 kernel: acpiphp: Slot [19] registered Jan 30 13:55:10.025127 kernel: acpiphp: Slot [20] registered Jan 30 13:55:10.025140 kernel: acpiphp: Slot [21] registered Jan 30 13:55:10.026229 kernel: acpiphp: Slot [22] registered Jan 30 13:55:10.026257 kernel: acpiphp: Slot [23] registered Jan 30 13:55:10.026268 kernel: acpiphp: Slot [24] registered Jan 30 13:55:10.026278 kernel: acpiphp: Slot [25] registered Jan 30 13:55:10.026288 kernel: acpiphp: Slot [26] registered Jan 30 13:55:10.026297 kernel: acpiphp: Slot [27] registered Jan 30 13:55:10.026306 kernel: acpiphp: Slot [28] registered Jan 30 13:55:10.026316 kernel: acpiphp: Slot [29] registered Jan 30 13:55:10.026325 kernel: acpiphp: Slot [30] registered Jan 30 13:55:10.026334 kernel: acpiphp: Slot [31] registered Jan 30 13:55:10.026350 kernel: PCI host bridge to bus 0000:00 Jan 30 13:55:10.026518 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:55:10.026658 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 30 13:55:10.026807 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:55:10.026948 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 30 13:55:10.027094 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 30 13:55:10.027256 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:55:10.027466 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 13:55:10.027601 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 13:55:10.027732 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 30 13:55:10.027833 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 30 13:55:10.027932 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 13:55:10.028040 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 13:55:10.028143 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 13:55:10.030437 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 13:55:10.030674 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 30 13:55:10.030893 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 30 13:55:10.031084 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 13:55:10.031320 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 30 13:55:10.031513 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 30 13:55:10.031706 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 30 13:55:10.031891 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 30 13:55:10.032100 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 30 13:55:10.034362 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 30 13:55:10.034520 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 30 13:55:10.034665 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:55:10.034851 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:55:10.035002 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 30 13:55:10.035154 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 30 13:55:10.037468 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 30 13:55:10.037663 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:55:10.037825 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 30 13:55:10.037994 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 30 13:55:10.038168 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 30 13:55:10.038428 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 30 13:55:10.038592 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 30 13:55:10.038756 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 30 13:55:10.038946 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 30 13:55:10.039174 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 30 13:55:10.040499 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 13:55:10.040686 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 30 13:55:10.040868 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 30 13:55:10.041071 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 30 13:55:10.045275 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 30 13:55:10.045443 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 30 13:55:10.045593 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 30 13:55:10.045789 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 30 13:55:10.045912 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 30 13:55:10.046054 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 30 13:55:10.046067 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:55:10.046078 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:55:10.046088 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:55:10.046097 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:55:10.046107 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 13:55:10.046120 kernel: iommu: Default domain type: Translated Jan 30 13:55:10.046130 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:55:10.046139 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:55:10.046148 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:55:10.046158 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 13:55:10.046181 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 30 13:55:10.046285 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 30 13:55:10.046383 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 30 13:55:10.046480 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:55:10.046497 kernel: vgaarb: loaded Jan 30 13:55:10.046507 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 13:55:10.046517 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 13:55:10.046526 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:55:10.046535 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:55:10.046545 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:55:10.046554 kernel: pnp: PnP ACPI init Jan 30 13:55:10.046564 kernel: pnp: PnP ACPI: found 4 devices Jan 30 13:55:10.046573 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:55:10.046585 kernel: NET: Registered PF_INET protocol family Jan 30 13:55:10.046628 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:55:10.046638 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 13:55:10.046648 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:55:10.046658 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:55:10.046667 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:55:10.046677 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 13:55:10.046686 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:55:10.046699 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:55:10.046723 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:55:10.046733 kernel: NET: Registered PF_XDP protocol family Jan 30 13:55:10.046846 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:55:10.046935 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 
13:55:10.047024 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:55:10.047113 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 30 13:55:10.047253 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 30 13:55:10.047360 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 30 13:55:10.047470 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 13:55:10.047485 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 13:55:10.047584 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 37620 usecs Jan 30 13:55:10.047597 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:55:10.047608 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:55:10.047617 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Jan 30 13:55:10.047627 kernel: Initialise system trusted keyrings Jan 30 13:55:10.047636 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 13:55:10.047649 kernel: Key type asymmetric registered Jan 30 13:55:10.047659 kernel: Asymmetric key parser 'x509' registered Jan 30 13:55:10.047668 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:55:10.047678 kernel: io scheduler mq-deadline registered Jan 30 13:55:10.047688 kernel: io scheduler kyber registered Jan 30 13:55:10.047697 kernel: io scheduler bfq registered Jan 30 13:55:10.047707 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:55:10.047718 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 30 13:55:10.047732 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 13:55:10.047750 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 13:55:10.047763 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:55:10.047776 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:55:10.047789 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:55:10.047802 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:55:10.047814 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:55:10.048016 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 13:55:10.048041 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 30 13:55:10.048158 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 13:55:10.048347 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T13:55:09 UTC (1738245309) Jan 30 13:55:10.048452 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 30 13:55:10.048465 kernel: intel_pstate: CPU model not supported Jan 30 13:55:10.048475 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:55:10.048485 kernel: Segment Routing with IPv6 Jan 30 13:55:10.048494 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:55:10.048504 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:55:10.048513 kernel: Key type dns_resolver registered Jan 30 13:55:10.048529 kernel: IPI shorthand broadcast: enabled Jan 30 13:55:10.048539 kernel: sched_clock: Marking stable (1117004971, 101601984)->(1247012048, -28405093) Jan 30 13:55:10.048549 kernel: registered taskstats version 1 Jan 30 13:55:10.048559 kernel: Loading compiled-in X.509 certificates Jan 30 13:55:10.048568 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:55:10.048577 kernel: Key type .fscrypt registered 
Jan 30 13:55:10.048587 kernel: Key type fscrypt-provisioning registered Jan 30 13:55:10.048597 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:55:10.048609 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:55:10.048620 kernel: ima: No architecture policies found Jan 30 13:55:10.048637 kernel: clk: Disabling unused clocks Jan 30 13:55:10.048651 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:55:10.048664 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:55:10.048704 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:55:10.048724 kernel: Run /init as init process Jan 30 13:55:10.048739 kernel: with arguments: Jan 30 13:55:10.048753 kernel: /init Jan 30 13:55:10.048771 kernel: with environment: Jan 30 13:55:10.048785 kernel: HOME=/ Jan 30 13:55:10.048800 kernel: TERM=linux Jan 30 13:55:10.048815 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:55:10.048834 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:55:10.048868 systemd[1]: Detected virtualization kvm. Jan 30 13:55:10.048883 systemd[1]: Detected architecture x86-64. Jan 30 13:55:10.048898 systemd[1]: Running in initrd. Jan 30 13:55:10.048919 systemd[1]: No hostname configured, using default hostname. Jan 30 13:55:10.048936 systemd[1]: Hostname set to . Jan 30 13:55:10.048953 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:55:10.048971 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:55:10.048986 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:55:10.049001 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:55:10.049017 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:55:10.049049 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:55:10.049070 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:55:10.049085 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:55:10.049108 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:55:10.049126 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:55:10.049144 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:55:10.049162 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:55:10.051598 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:55:10.051650 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:55:10.051672 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:55:10.051692 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:55:10.051709 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:55:10.051722 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 30 13:55:10.051739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:55:10.051754 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:55:10.051769 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:55:10.051786 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:55:10.051803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:55:10.051814 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:55:10.051824 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:55:10.051835 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:55:10.051845 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:55:10.051859 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:55:10.051869 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:55:10.051880 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:55:10.051890 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:10.051901 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:55:10.051969 systemd-journald[184]: Collecting audit messages is disabled. Jan 30 13:55:10.052013 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:55:10.052024 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:55:10.052036 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:55:10.052050 systemd-journald[184]: Journal started Jan 30 13:55:10.052074 systemd-journald[184]: Runtime Journal (/run/log/journal/6d68e7278b8749658367ef5d4e74c710) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:55:10.055331 systemd-modules-load[185]: Inserted module 'overlay' Jan 30 13:55:10.078026 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:55:10.081563 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:10.095080 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:55:10.095323 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:55:10.101811 kernel: Bridge firewalling registered Jan 30 13:55:10.098106 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 30 13:55:10.100925 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:55:10.104101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:55:10.111527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:55:10.114887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:55:10.124424 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:55:10.139386 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:55:10.140767 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:55:10.150463 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 30 13:55:10.152324 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:55:10.165451 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:55:10.168450 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:55:10.191751 dracut-cmdline[219]: dracut-dracut-053 Jan 30 13:55:10.194991 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:55:10.204502 systemd-resolved[211]: Positive Trust Anchors: Jan 30 13:55:10.205358 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:55:10.206050 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:55:10.211853 systemd-resolved[211]: Defaulting to hostname 'linux'. Jan 30 13:55:10.213715 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:55:10.214291 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:55:10.296206 kernel: SCSI subsystem initialized Jan 30 13:55:10.308226 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:55:10.322266 kernel: iscsi: registered transport (tcp) Jan 30 13:55:10.347356 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:55:10.347446 kernel: QLogic iSCSI HBA Driver Jan 30 13:55:10.411264 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:55:10.417437 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:55:10.462479 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:55:10.462580 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:55:10.462622 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:55:10.517259 kernel: raid6: avx2x4 gen() 12572 MB/s Jan 30 13:55:10.534276 kernel: raid6: avx2x2 gen() 13272 MB/s Jan 30 13:55:10.551486 kernel: raid6: avx2x1 gen() 10750 MB/s Jan 30 13:55:10.551585 kernel: raid6: using algorithm avx2x2 gen() 13272 MB/s Jan 30 13:55:10.569342 kernel: raid6: .... xor() 16432 MB/s, rmw enabled Jan 30 13:55:10.569457 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:55:10.595229 kernel: xor: automatically using best checksumming function avx Jan 30 13:55:10.788206 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:55:10.805483 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:55:10.811520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 30 13:55:10.842355 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 30 13:55:10.848378 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:55:10.856509 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:55:10.879485 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jan 30 13:55:10.926153 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:55:10.933526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:55:11.005351 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:55:11.016696 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:55:11.043743 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:55:11.047237 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:55:11.049304 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:55:11.050375 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:55:11.058837 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:55:11.101467 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:55:11.108532 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 30 13:55:11.191670 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 30 13:55:11.191861 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:55:11.191882 kernel: GPT:9289727 != 125829119 Jan 30 13:55:11.191901 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:55:11.191918 kernel: GPT:9289727 != 125829119 Jan 30 13:55:11.191930 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:55:11.191943 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:55:11.191955 kernel: scsi host0: Virtio SCSI HBA Jan 30 13:55:11.192132 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:55:11.192146 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 30 13:55:11.213883 kernel: ACPI: bus type USB registered Jan 30 13:55:11.213922 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Jan 30 13:55:11.214139 kernel: usbcore: registered new interface driver usbfs Jan 30 13:55:11.228190 kernel: usbcore: registered new interface driver hub Jan 30 13:55:11.228279 kernel: usbcore: registered new device driver usb Jan 30 13:55:11.232237 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:55:11.232445 kernel: AES CTR mode by8 optimization enabled Jan 30 13:55:11.237205 kernel: libata version 3.00 loaded. Jan 30 13:55:11.241224 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 13:55:11.273490 kernel: scsi host1: ata_piix Jan 30 13:55:11.273706 kernel: scsi host2: ata_piix Jan 30 13:55:11.273904 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 30 13:55:11.273930 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 30 13:55:11.260040 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:55:11.260252 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:55:11.261245 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 30 13:55:11.261994 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:55:11.262260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:11.263398 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:11.272610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:11.307468 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (453) Jan 30 13:55:11.332199 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (447) Jan 30 13:55:11.339639 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:55:11.363123 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:55:11.386791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:11.392998 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:55:11.393766 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:55:11.403673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:55:11.409422 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:55:11.412408 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:55:11.423670 disk-uuid[528]: Primary Header is updated. Jan 30 13:55:11.423670 disk-uuid[528]: Secondary Entries is updated. Jan 30 13:55:11.423670 disk-uuid[528]: Secondary Header is updated. Jan 30 13:55:11.444196 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:55:11.451797 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:55:11.462863 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 30 13:55:11.468030 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 30 13:55:11.468264 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 30 13:55:11.468390 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 30 13:55:11.468508 kernel: hub 1-0:1.0: USB hub found Jan 30 13:55:11.468655 kernel: hub 1-0:1.0: 2 ports detected Jan 30 13:55:11.468795 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:55:12.460314 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:55:12.462243 disk-uuid[533]: The operation has completed successfully. Jan 30 13:55:12.525540 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:55:12.526700 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:55:12.558699 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:55:12.563434 sh[557]: Success Jan 30 13:55:12.584218 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:55:12.662449 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:55:12.682433 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:55:12.685933 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 13:55:12.721403 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:55:12.721530 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:55:12.721554 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:55:12.721576 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:55:12.721597 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:55:12.731083 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:55:12.732405 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:55:12.738447 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:55:12.742410 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:55:12.757239 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:12.759720 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:55:12.759814 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:55:12.767200 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:55:12.780554 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:55:12.781726 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:12.788824 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:55:12.799359 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:55:12.869099 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:55:12.893731 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:55:12.917481 systemd-networkd[742]: lo: Link UP Jan 30 13:55:12.917495 systemd-networkd[742]: lo: Gained carrier Jan 30 13:55:12.920769 systemd-networkd[742]: Enumeration completed Jan 30 13:55:12.920956 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:55:12.921941 systemd-networkd[742]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 13:55:12.921947 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 30 13:55:12.923104 systemd-networkd[742]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:55:12.923111 systemd-networkd[742]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:55:12.925210 systemd-networkd[742]: eth0: Link UP Jan 30 13:55:12.925216 systemd-networkd[742]: eth0: Gained carrier Jan 30 13:55:12.925228 systemd-networkd[742]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 13:55:12.930826 systemd[1]: Reached target network.target - Network. Jan 30 13:55:12.934312 systemd-networkd[742]: eth1: Link UP Jan 30 13:55:12.934317 systemd-networkd[742]: eth1: Gained carrier Jan 30 13:55:12.934331 systemd-networkd[742]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 13:55:12.954274 systemd-networkd[742]: eth0: DHCPv4 address 146.190.136.39/20, gateway 146.190.128.1 acquired from 169.254.169.253 Jan 30 13:55:12.961294 systemd-networkd[742]: eth1: DHCPv4 address 10.124.0.6/20 acquired from 169.254.169.253 Jan 30 13:55:12.965978 ignition[649]: Ignition 2.19.0 Jan 30 13:55:12.965994 ignition[649]: Stage: fetch-offline Jan 30 13:55:12.968481 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:55:12.966079 ignition[649]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:12.966097 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:12.966351 ignition[649]: parsed url from cmdline: "" Jan 30 13:55:12.966359 ignition[649]: no config URL provided Jan 30 13:55:12.966371 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:55:12.966387 ignition[649]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:55:12.966397 ignition[649]: failed to fetch config: resource requires networking Jan 30 13:55:12.966643 ignition[649]: Ignition finished successfully Jan 30 13:55:12.983101 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 13:55:13.009522 ignition[751]: Ignition 2.19.0 Jan 30 13:55:13.009542 ignition[751]: Stage: fetch Jan 30 13:55:13.009859 ignition[751]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:13.009892 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:13.010116 ignition[751]: parsed url from cmdline: "" Jan 30 13:55:13.010125 ignition[751]: no config URL provided Jan 30 13:55:13.010139 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:55:13.010188 ignition[751]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:55:13.010226 ignition[751]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 30 13:55:13.026374 ignition[751]: GET result: OK Jan 30 13:55:13.027790 ignition[751]: parsing config with SHA512: 5b52044f49a1c986889cb55662796e930d3c24d12787ca8c2532de0f5cc633f28b039982a0750250ba0a2b8e2f20a4b7cdde26aa699ce4ac230d700943a70284 Jan 30 13:55:13.036013 unknown[751]: fetched base config from "system" Jan 30 13:55:13.036031 unknown[751]: fetched base config from "system" Jan 30 13:55:13.036772 ignition[751]: fetch: fetch complete Jan 30 13:55:13.036043 unknown[751]: fetched user config from "digitalocean" Jan 30 13:55:13.036783 ignition[751]: fetch: fetch passed Jan 30 13:55:13.036870 ignition[751]: Ignition finished successfully Jan 30 13:55:13.041385 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:55:13.045462 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:55:13.087930 ignition[758]: Ignition 2.19.0 Jan 30 13:55:13.087950 ignition[758]: Stage: kargs Jan 30 13:55:13.088253 ignition[758]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:13.088291 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:13.090028 ignition[758]: kargs: kargs passed Jan 30 13:55:13.092334 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:55:13.090120 ignition[758]: Ignition finished successfully Jan 30 13:55:13.099486 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 30 13:55:13.127293 ignition[764]: Ignition 2.19.0 Jan 30 13:55:13.127308 ignition[764]: Stage: disks Jan 30 13:55:13.127549 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:13.127563 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:13.128673 ignition[764]: disks: disks passed Jan 30 13:55:13.128762 ignition[764]: Ignition finished successfully Jan 30 13:55:13.130253 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:55:13.134289 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:55:13.135009 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:55:13.135846 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:55:13.136701 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:55:13.137640 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:55:13.146444 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:55:13.166630 systemd-fsck[772]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:55:13.170737 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:55:13.177411 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:55:13.315430 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:55:13.316365 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:55:13.317760 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:55:13.334465 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:55:13.337486 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:55:13.341051 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 30 13:55:13.351316 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (780) Jan 30 13:55:13.351463 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:55:13.354003 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:55:13.361755 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:13.361811 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:55:13.361827 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:55:13.354081 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:55:13.367929 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:55:13.374232 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:55:13.379407 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:55:13.382780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:55:13.459981 coreos-metadata[782]: Jan 30 13:55:13.459 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:13.478191 coreos-metadata[782]: Jan 30 13:55:13.476 INFO Fetch successful Jan 30 13:55:13.482654 coreos-metadata[783]: Jan 30 13:55:13.482 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:13.484550 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 30 13:55:13.485372 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 30 13:55:13.487703 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:55:13.497042 coreos-metadata[783]: Jan 30 13:55:13.496 INFO Fetch successful Jan 30 13:55:13.503066 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:55:13.504151 coreos-metadata[783]: Jan 30 13:55:13.503 INFO wrote hostname ci-4081.3.0-8-baee985ae6 to /sysroot/etc/hostname Jan 30 13:55:13.505066 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:55:13.512479 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:55:13.518220 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:55:13.654740 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:55:13.662606 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:55:13.666988 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:55:13.690218 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:13.717263 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:55:13.722506 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:55:13.738182 ignition[902]: INFO : Ignition 2.19.0 Jan 30 13:55:13.740277 ignition[902]: INFO : Stage: mount Jan 30 13:55:13.740277 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:13.740277 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:13.742382 ignition[902]: INFO : mount: mount passed Jan 30 13:55:13.742910 ignition[902]: INFO : Ignition finished successfully Jan 30 13:55:13.744846 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:55:13.757587 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:55:13.771608 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:55:13.796191 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (913) Jan 30 13:55:13.798433 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:13.798534 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:55:13.800193 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:55:13.804280 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:55:13.808723 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
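The hostname agent above fetches the droplet metadata document and writes the reported name (ci-4081.3.0-8-baee985ae6) into /sysroot/etc/hostname. A rough sketch of the same lookup, assuming the JSON exposes a hostname field as the log implies (droplet-only; illustrative, not part of the boot flow):

    # Illustrative sketch: read the droplet metadata JSON fetched by the agent
    # above and print the hostname it would write to /etc/hostname.
    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # from the log

    with urllib.request.urlopen(METADATA_URL, timeout=5) as resp:
        metadata = json.load(resp)

    print("hostname:", metadata.get("hostname"))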
Jan 30 13:55:13.841612 ignition[929]: INFO : Ignition 2.19.0 Jan 30 13:55:13.842534 ignition[929]: INFO : Stage: files Jan 30 13:55:13.842963 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:13.842963 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:13.844231 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:55:13.845932 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:55:13.845932 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:55:13.851570 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:55:13.852581 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:55:13.854051 unknown[929]: wrote ssh authorized keys file for user: core Jan 30 13:55:13.854955 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:55:13.856023 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:55:13.856691 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:55:13.856691 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:55:13.856691 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:55:14.001142 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:55:14.070327 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:55:14.071327 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:55:14.071327 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:55:14.071327 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:55:14.071327 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:55:14.074080 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:55:14.074080 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:55:14.074080 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:55:14.074080 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:55:14.077215 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:55:14.077215 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:55:14.077215 
ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:55:14.077215 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:55:14.077215 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:55:14.077215 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:55:14.433533 systemd-networkd[742]: eth1: Gained IPv6LL Jan 30 13:55:14.566569 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:55:14.753378 systemd-networkd[742]: eth0: Gained IPv6LL Jan 30 13:55:14.800120 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:55:14.800120 ignition[929]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 30 13:55:14.801543 ignition[929]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:55:14.803572 ignition[929]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:55:14.803572 ignition[929]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 30 13:55:14.803572 ignition[929]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 30 13:55:14.803572 ignition[929]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:55:14.803572 ignition[929]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:55:14.803572 ignition[929]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 30 13:55:14.803572 ignition[929]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:55:14.803572 ignition[929]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:55:14.803572 ignition[929]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:55:14.803572 ignition[929]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:55:14.803572 ignition[929]: INFO : files: files passed Jan 30 13:55:14.803572 ignition[929]: INFO : Ignition finished successfully Jan 30 13:55:14.804647 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:55:14.811371 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:55:14.814933 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:55:14.820370 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 30 13:55:14.821028 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:55:14.840139 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:55:14.840139 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:55:14.842571 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:55:14.845109 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:55:14.845813 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:55:14.856568 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:55:14.898522 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:55:14.898648 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:55:14.899847 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:55:14.901012 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:55:14.902068 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:55:14.903651 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:55:14.939315 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:55:14.945507 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:55:14.959772 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:55:14.960334 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:55:14.961518 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:55:14.962380 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:55:14.962551 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:55:14.963573 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:55:14.964647 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:55:14.965545 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:55:14.966396 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:55:14.967417 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:55:14.968362 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:55:14.969376 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:55:14.970475 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:55:14.971140 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:55:14.971848 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:55:14.972569 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:55:14.972701 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:55:14.973567 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:55:14.974312 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:55:14.975128 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 30 13:55:14.975596 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:55:14.976428 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:55:14.976621 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:55:14.977956 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:55:14.978244 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:55:14.978933 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:55:14.979118 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:55:14.979934 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:55:14.980034 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:55:14.986581 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:55:14.989417 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:55:14.989798 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:55:14.989923 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:55:14.993460 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:55:14.993611 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:55:15.003387 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:55:15.004095 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:55:15.011547 ignition[982]: INFO : Ignition 2.19.0 Jan 30 13:55:15.017989 ignition[982]: INFO : Stage: umount Jan 30 13:55:15.017989 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:15.017989 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:15.017989 ignition[982]: INFO : umount: umount passed Jan 30 13:55:15.017989 ignition[982]: INFO : Ignition finished successfully Jan 30 13:55:15.021057 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:55:15.021873 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:55:15.022782 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:55:15.022867 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:55:15.024312 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:55:15.024384 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:55:15.024941 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:55:15.024992 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:55:15.026440 systemd[1]: Stopped target network.target - Network. Jan 30 13:55:15.026996 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:55:15.027077 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:55:15.028373 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:55:15.029038 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:55:15.030196 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:55:15.030774 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:55:15.032375 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 30 13:55:15.033066 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:55:15.033161 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:55:15.033706 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:55:15.033768 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:55:15.034139 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:55:15.035362 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:55:15.036126 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:55:15.036239 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:55:15.037208 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:55:15.038250 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:55:15.040228 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:55:15.040955 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:55:15.041079 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:55:15.042139 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:55:15.042306 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:55:15.042359 systemd-networkd[742]: eth1: DHCPv6 lease lost Jan 30 13:55:15.045439 systemd-networkd[742]: eth0: DHCPv6 lease lost Jan 30 13:55:15.047353 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:55:15.047497 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:55:15.050217 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:55:15.050396 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:55:15.055218 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:55:15.055311 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:55:15.062387 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:55:15.062991 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:55:15.063084 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:55:15.063681 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:55:15.063759 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:55:15.064350 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:55:15.064422 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:55:15.065545 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:55:15.065617 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:55:15.067012 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:55:15.084434 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:55:15.084613 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:55:15.085988 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:55:15.086089 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:55:15.086913 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:55:15.086952 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 13:55:15.087603 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:55:15.087655 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:55:15.088745 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:55:15.088799 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:55:15.089786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:55:15.089842 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:55:15.095591 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:55:15.096178 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:55:15.096281 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:55:15.097180 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:55:15.097258 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:55:15.098478 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:55:15.098539 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:55:15.099000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:55:15.099051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:15.099838 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:55:15.103316 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:55:15.109580 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:55:15.109730 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:55:15.110574 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:55:15.120618 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:55:15.131391 systemd[1]: Switching root. Jan 30 13:55:15.177814 systemd-journald[184]: Journal stopped Jan 30 13:55:16.534658 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 30 13:55:16.534743 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:55:16.534760 kernel: SELinux: policy capability open_perms=1 Jan 30 13:55:16.534776 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:55:16.534794 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:55:16.534810 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:55:16.534828 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:55:16.534854 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:55:16.534866 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:55:16.534878 kernel: audit: type=1403 audit(1738245315.403:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:55:16.534898 systemd[1]: Successfully loaded SELinux policy in 39.520ms. Jan 30 13:55:16.534923 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.566ms. 
Jan 30 13:55:16.534937 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:55:16.534950 systemd[1]: Detected virtualization kvm. Jan 30 13:55:16.534967 systemd[1]: Detected architecture x86-64. Jan 30 13:55:16.534982 systemd[1]: Detected first boot. Jan 30 13:55:16.534995 systemd[1]: Hostname set to <ci-4081.3.0-8-baee985ae6>. Jan 30 13:55:16.535007 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:55:16.535025 zram_generator::config[1045]: No configuration found. Jan 30 13:55:16.535039 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:55:16.535069 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:55:16.535082 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:55:16.535096 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:55:16.535116 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:55:16.535128 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:55:16.535140 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:55:16.535153 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:55:16.535179 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:55:16.535192 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:55:16.535204 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:55:16.535216 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:55:16.535228 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:55:16.535244 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:55:16.535257 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:55:16.535274 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:55:16.535286 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:55:16.535303 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:55:16.535321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:55:16.535337 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:55:16.535349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:55:16.535366 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:55:16.535385 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:55:16.535398 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:55:16.535410 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:55:16.535422 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:55:16.535434 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 30 13:55:16.535447 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:55:16.535459 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:55:16.535474 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:55:16.535487 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:55:16.535498 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:55:16.535511 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:55:16.535524 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:55:16.535536 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:55:16.535548 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:16.535560 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:55:16.535573 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:55:16.535588 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:55:16.535600 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:55:16.535612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:16.535624 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:55:16.535636 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:55:16.535654 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:55:16.535669 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:55:16.535683 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:55:16.535699 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:55:16.535711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:55:16.535724 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:55:16.535737 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 13:55:16.535750 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 13:55:16.535762 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:55:16.535774 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:55:16.535787 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:55:16.535802 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:55:16.535815 kernel: fuse: init (API version 7.39) Jan 30 13:55:16.535827 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:55:16.535840 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:16.535853 kernel: loop: module loaded Jan 30 13:55:16.535864 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 30 13:55:16.535876 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:55:16.535940 systemd-journald[1139]: Collecting audit messages is disabled. Jan 30 13:55:16.535986 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:55:16.536001 kernel: ACPI: bus type drm_connector registered Jan 30 13:55:16.536022 systemd-journald[1139]: Journal started Jan 30 13:55:16.536048 systemd-journald[1139]: Runtime Journal (/run/log/journal/6d68e7278b8749658367ef5d4e74c710) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:55:16.539300 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:55:16.543339 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:55:16.546018 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:55:16.547480 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:55:16.550255 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:55:16.551460 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:55:16.552371 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:55:16.552667 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:55:16.553895 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:55:16.554117 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:55:16.555143 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:55:16.555630 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:55:16.556463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:55:16.556753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:55:16.557775 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:55:16.558014 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:55:16.558764 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:55:16.559117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:55:16.560535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:55:16.562007 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:55:16.563513 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:55:16.580455 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:55:16.588343 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:55:16.590576 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:55:16.593095 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:55:16.607428 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:55:16.621559 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:55:16.624382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:55:16.630464 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 30 13:55:16.632342 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:55:16.645410 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:55:16.665486 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:55:16.675999 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:55:16.681334 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:55:16.688928 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:55:16.692631 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:55:16.704217 systemd-journald[1139]: Time spent on flushing to /var/log/journal/6d68e7278b8749658367ef5d4e74c710 is 40.288ms for 978 entries. Jan 30 13:55:16.704217 systemd-journald[1139]: System Journal (/var/log/journal/6d68e7278b8749658367ef5d4e74c710) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:55:16.758850 systemd-journald[1139]: Received client request to flush runtime journal. Jan 30 13:55:16.751641 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:55:16.765545 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:55:16.766671 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:55:16.773514 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:55:16.783186 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Jan 30 13:55:16.783215 systemd-tmpfiles[1185]: ACLs are not supported, ignoring. Jan 30 13:55:16.798151 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:55:16.808797 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:55:16.810097 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:55:16.854726 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:55:16.862444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:55:16.893868 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Jan 30 13:55:16.894299 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Jan 30 13:55:16.901343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:55:17.595833 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:55:17.605510 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:55:17.638395 systemd-udevd[1213]: Using default interface naming scheme 'v255'. Jan 30 13:55:17.671128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:55:17.678681 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:55:17.709430 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:55:17.773828 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 30 13:55:17.808647 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
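For scale, the journal flush reported above moved 978 entries in 40.288 ms, i.e. roughly 41 µs per entry; a quick check (numbers taken from the log, the arithmetic is only illustrative):

    # Back-of-the-envelope check of the journald flush timing logged above.
    flush_ms = 40.288   # "Time spent on flushing ... is 40.288ms"
    entries = 978       # "... for 978 entries"
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~41.2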
Jan 30 13:55:17.809986 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:17.810206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:17.821409 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:55:17.835502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:55:17.844297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:55:17.844722 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:55:17.844776 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:55:17.844828 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:17.845968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:55:17.846157 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:55:17.857078 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:55:17.860361 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1223) Jan 30 13:55:17.862725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:55:17.864370 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:55:17.895566 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:55:17.897680 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:55:17.902846 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:55:18.022196 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 13:55:18.031152 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:55:18.034198 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:55:18.040205 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 13:55:18.044937 systemd-networkd[1218]: lo: Link UP Jan 30 13:55:18.044949 systemd-networkd[1218]: lo: Gained carrier Jan 30 13:55:18.050607 systemd-networkd[1218]: Enumeration completed Jan 30 13:55:18.050833 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:55:18.053099 systemd-networkd[1218]: eth0: Configuring with /run/systemd/network/10-be:aa:c4:c9:be:0a.network. Jan 30 13:55:18.054915 systemd-networkd[1218]: eth1: Configuring with /run/systemd/network/10-5e:44:f9:72:b7:da.network. Jan 30 13:55:18.056516 systemd-networkd[1218]: eth0: Link UP Jan 30 13:55:18.057129 systemd-networkd[1218]: eth0: Gained carrier Jan 30 13:55:18.060198 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 30 13:55:18.061107 systemd-networkd[1218]: eth1: Link UP Jan 30 13:55:18.061884 systemd-networkd[1218]: eth1: Gained carrier Jan 30 13:55:18.088414 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 13:55:18.149204 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 13:55:18.151194 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 13:55:18.151482 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:55:18.155553 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:55:18.159217 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 13:55:18.159310 kernel: [drm] features: -context_init Jan 30 13:55:18.159369 kernel: [drm] number of scanouts: 1 Jan 30 13:55:18.159391 kernel: [drm] number of cap sets: 0 Jan 30 13:55:18.168201 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 13:55:18.181615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:18.187556 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 13:55:18.187635 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:55:18.207214 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 13:55:18.206297 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:55:18.206556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:18.223841 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:18.232533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:55:18.232911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:18.263638 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:18.378787 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:18.407241 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:55:18.439129 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:55:18.459685 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:55:18.478186 lvm[1275]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:55:18.505655 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:55:18.507775 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:55:18.513411 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:55:18.521253 lvm[1278]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:55:18.548297 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:55:18.550644 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:55:18.565388 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 30 13:55:18.565595 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:55:18.565645 systemd[1]: Reached target machines.target - Containers. Jan 30 13:55:18.568459 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 30 13:55:18.583466 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 13:55:18.586782 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 13:55:18.587965 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:55:18.591845 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:55:18.602471 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:55:18.615754 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:55:18.616764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:18.627493 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:55:18.642412 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:55:18.645913 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:55:18.657766 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:55:18.685615 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 13:55:18.699310 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:55:18.703017 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:55:18.726848 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:55:18.754574 kernel: loop1: detected capacity change from 0 to 142488 Jan 30 13:55:18.815369 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 13:55:18.880622 kernel: loop3: detected capacity change from 0 to 8 Jan 30 13:55:18.904413 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 13:55:18.930000 kernel: loop5: detected capacity change from 0 to 142488 Jan 30 13:55:18.949483 kernel: loop6: detected capacity change from 0 to 210664 Jan 30 13:55:18.979331 kernel: loop7: detected capacity change from 0 to 8 Jan 30 13:55:18.978295 (sd-merge)[1304]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 30 13:55:18.980221 (sd-merge)[1304]: Merged extensions into '/usr'. Jan 30 13:55:18.986689 systemd[1]: Reloading requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:55:18.986718 systemd[1]: Reloading... Jan 30 13:55:19.109882 systemd-networkd[1218]: eth1: Gained IPv6LL Jan 30 13:55:19.169332 zram_generator::config[1339]: No configuration found. Jan 30 13:55:19.406314 ldconfig[1288]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:55:19.406099 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:19.489262 systemd[1]: Reloading finished in 501 ms. Jan 30 13:55:19.515595 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:55:19.519196 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:55:19.522447 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:55:19.553926 systemd[1]: Starting ensure-sysext.service... 
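systemd-sysext above merges four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean') into /usr; the kubernetes image is the .raw written and symlinked during the files stage earlier. A small sketch that lists images under the two directories named in the log (the paths come from the log; the listing itself is illustrative):

    # Illustrative sketch: enumerate sysext images under the directories the log
    # names, e.g. /etc/extensions/kubernetes.raw ->
    # /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw.
    import os

    for root in ("/etc/extensions", "/opt/extensions"):
        if not os.path.isdir(root):
            continue
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                if os.path.islink(path):
                    print(f"{path} -> {os.path.realpath(path)}")
                else:
                    print(path)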
Jan 30 13:55:19.562502 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:55:19.573390 systemd[1]: Reloading requested from client PID 1384 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:55:19.573431 systemd[1]: Reloading... Jan 30 13:55:19.614426 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:55:19.615370 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:55:19.616644 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:55:19.617280 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 30 13:55:19.617476 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 30 13:55:19.621781 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:55:19.621796 systemd-tmpfiles[1385]: Skipping /boot Jan 30 13:55:19.641128 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:55:19.641425 systemd-tmpfiles[1385]: Skipping /boot Jan 30 13:55:19.699582 zram_generator::config[1409]: No configuration found. Jan 30 13:55:19.873494 systemd-networkd[1218]: eth0: Gained IPv6LL Jan 30 13:55:19.900401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:19.995221 systemd[1]: Reloading finished in 421 ms. Jan 30 13:55:20.025596 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:55:20.042546 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:55:20.052930 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:55:20.062479 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:55:20.077472 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:55:20.096655 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:55:20.111849 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:20.112105 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:20.124303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:55:20.139247 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:55:20.157141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:55:20.157940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:20.158114 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:20.186648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:55:20.194778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:55:20.204615 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 30 13:55:20.209733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:55:20.210052 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:55:20.231224 augenrules[1491]: No rules Jan 30 13:55:20.220448 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:55:20.231870 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:55:20.239996 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:55:20.243763 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:55:20.282772 systemd[1]: Finished ensure-sysext.service. Jan 30 13:55:20.283922 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:20.284150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:20.303096 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:55:20.314529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:55:20.329744 systemd-resolved[1468]: Positive Trust Anchors: Jan 30 13:55:20.330474 systemd-resolved[1468]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:55:20.330519 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:55:20.336025 systemd-resolved[1468]: Using system hostname 'ci-4081.3.0-8-baee985ae6'. Jan 30 13:55:20.340582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:55:20.352632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:55:20.353857 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:20.371590 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:55:20.386678 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:55:20.389918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:20.390705 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:55:20.399106 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:55:20.400443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:55:20.401816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:55:20.403847 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:55:20.404031 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:55:20.408065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 30 13:55:20.408291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:55:20.410308 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:55:20.410756 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:55:20.419005 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:55:20.440325 systemd[1]: Reached target network.target - Network. Jan 30 13:55:20.442954 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:55:20.443653 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:55:20.444201 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:55:20.444327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:55:20.444382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:55:20.503701 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:55:20.505185 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:55:20.505879 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:55:20.507065 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:55:20.507828 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:55:20.508709 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:55:20.508756 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:55:20.509696 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:55:20.510709 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:55:20.511497 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:55:20.512198 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:55:20.514656 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:55:20.519793 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:55:20.524815 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:55:20.528808 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:55:20.531420 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:55:20.532781 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:55:20.533664 systemd[1]: System is tainted: cgroupsv1 Jan 30 13:55:20.533718 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:55:20.533746 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:55:20.539372 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:55:20.552677 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:55:20.561258 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 30 13:55:20.574448 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:55:20.587463 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:55:20.590692 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:55:20.597401 jq[1535]: false Jan 30 13:55:20.609477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:20.621090 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:55:20.631288 coreos-metadata[1530]: Jan 30 13:55:20.629 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:20.637365 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:55:20.655466 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:55:20.659278 dbus-daemon[1532]: [system] SELinux support is enabled Jan 30 13:55:20.668453 coreos-metadata[1530]: Jan 30 13:55:20.664 INFO Fetch successful Jan 30 13:55:20.674643 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:55:20.690096 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:55:20.701866 extend-filesystems[1536]: Found loop4 Jan 30 13:55:20.718210 extend-filesystems[1536]: Found loop5 Jan 30 13:55:20.718210 extend-filesystems[1536]: Found loop6 Jan 30 13:55:20.718210 extend-filesystems[1536]: Found loop7 Jan 30 13:55:20.718210 extend-filesystems[1536]: Found vda Jan 30 13:55:20.718210 extend-filesystems[1536]: Found vda1 Jan 30 13:55:20.718210 extend-filesystems[1536]: Found vda2 Jan 30 13:55:20.718210 extend-filesystems[1536]: Found vda3 Jan 30 13:55:20.718210 extend-filesystems[1536]: Found usr Jan 30 13:55:20.718210 extend-filesystems[1536]: Found vda4 Jan 30 13:55:20.718210 extend-filesystems[1536]: Found vda6 Jan 30 13:55:20.718210 extend-filesystems[1536]: Found vda7 Jan 30 13:55:20.718210 extend-filesystems[1536]: Found vda9 Jan 30 13:55:20.718210 extend-filesystems[1536]: Checking size of /dev/vda9 Jan 30 13:55:20.708681 systemd-timesyncd[1515]: Contacted time server 198.60.22.240:123 (0.flatcar.pool.ntp.org). Jan 30 13:55:20.708783 systemd-timesyncd[1515]: Initial clock synchronization to Thu 2025-01-30 13:55:20.802706 UTC. Jan 30 13:55:20.716417 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:55:20.746130 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:55:20.760778 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:55:20.772016 extend-filesystems[1536]: Resized partition /dev/vda9 Jan 30 13:55:20.775579 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:55:20.781359 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:55:20.792727 extend-filesystems[1572]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:55:20.805218 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:55:20.805543 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:55:20.818371 jq[1570]: true Jan 30 13:55:20.822878 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:55:20.825283 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 13:55:20.831208 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 13:55:20.833862 update_engine[1568]: I20250130 13:55:20.833716 1568 main.cc:92] Flatcar Update Engine starting Jan 30 13:55:20.837859 update_engine[1568]: I20250130 13:55:20.837788 1568 update_check_scheduler.cc:74] Next update check in 9m29s Jan 30 13:55:20.842799 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:55:20.870332 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:55:20.870691 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:55:20.944199 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 13:55:20.957825 (ntainerd)[1581]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:55:20.972223 extend-filesystems[1572]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:55:20.972223 extend-filesystems[1572]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 13:55:20.972223 extend-filesystems[1572]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 13:55:20.993950 extend-filesystems[1536]: Resized filesystem in /dev/vda9 Jan 30 13:55:20.993950 extend-filesystems[1536]: Found vdb Jan 30 13:55:21.002212 jq[1579]: true Jan 30 13:55:20.998476 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:55:20.998949 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:55:21.015597 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:55:21.050951 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1589) Jan 30 13:55:21.059334 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:55:21.062500 tar[1577]: linux-amd64/helm Jan 30 13:55:21.061532 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:55:21.061763 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:55:21.061804 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:55:21.064530 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:55:21.064675 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 30 13:55:21.064702 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:55:21.068556 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:55:21.096964 sshd_keygen[1569]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:55:21.083435 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:55:21.140597 bash[1630]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:55:21.176866 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 30 13:55:21.184817 systemd-logind[1558]: New seat seat0. Jan 30 13:55:21.283678 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:55:21.323613 systemd-logind[1558]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:55:21.323766 systemd-logind[1558]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:55:21.324368 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:55:21.342927 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:55:21.357674 systemd[1]: Starting sshkeys.service... Jan 30 13:55:21.391834 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:55:21.392355 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:55:21.418120 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:55:21.506032 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:55:21.530921 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:55:21.590134 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:55:21.610623 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:55:21.628720 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:55:21.637523 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:55:21.671305 coreos-metadata[1653]: Jan 30 13:55:21.670 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:21.672640 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:55:21.694796 coreos-metadata[1653]: Jan 30 13:55:21.692 INFO Fetch successful Jan 30 13:55:21.708285 unknown[1653]: wrote ssh authorized keys file for user: core Jan 30 13:55:21.748226 containerd[1581]: time="2025-01-30T13:55:21.746301124Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:55:21.757199 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:55:21.756555 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:55:21.766974 systemd[1]: Finished sshkeys.service. Jan 30 13:55:21.800561 containerd[1581]: time="2025-01-30T13:55:21.800476350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:21.802970 containerd[1581]: time="2025-01-30T13:55:21.802903813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:21.803134 containerd[1581]: time="2025-01-30T13:55:21.803115926Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:55:21.803417 containerd[1581]: time="2025-01-30T13:55:21.803392030Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:55:21.803750 containerd[1581]: time="2025-01-30T13:55:21.803724969Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 30 13:55:21.803832 containerd[1581]: time="2025-01-30T13:55:21.803819976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:21.803974 containerd[1581]: time="2025-01-30T13:55:21.803956648Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:21.804029 containerd[1581]: time="2025-01-30T13:55:21.804019578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:21.804460 containerd[1581]: time="2025-01-30T13:55:21.804425198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:21.805202 containerd[1581]: time="2025-01-30T13:55:21.804548054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:21.805202 containerd[1581]: time="2025-01-30T13:55:21.804587142Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:21.805202 containerd[1581]: time="2025-01-30T13:55:21.804604632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:21.805202 containerd[1581]: time="2025-01-30T13:55:21.804722255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:21.805202 containerd[1581]: time="2025-01-30T13:55:21.805000022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:21.805420 containerd[1581]: time="2025-01-30T13:55:21.805167448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:21.805467 containerd[1581]: time="2025-01-30T13:55:21.805456780Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:55:21.805666 containerd[1581]: time="2025-01-30T13:55:21.805633189Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:55:21.805798 containerd[1581]: time="2025-01-30T13:55:21.805784107Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:55:21.812668 containerd[1581]: time="2025-01-30T13:55:21.812590097Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:55:21.813109 containerd[1581]: time="2025-01-30T13:55:21.813016987Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:55:21.813874 containerd[1581]: time="2025-01-30T13:55:21.813273008Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:55:21.813874 containerd[1581]: time="2025-01-30T13:55:21.813315003Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 30 13:55:21.813874 containerd[1581]: time="2025-01-30T13:55:21.813350788Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:55:21.813874 containerd[1581]: time="2025-01-30T13:55:21.813669928Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:55:21.814850 containerd[1581]: time="2025-01-30T13:55:21.814802559Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815320964Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815366332Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815394481Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815420171Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815466675Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815488495Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815514479Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815564118Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815588487Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815610119Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815630181Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815666976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815690703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817208 containerd[1581]: time="2025-01-30T13:55:21.815712407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.815734321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.815755325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.815778713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.815806465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.815862759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.815906302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.815934987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.815954932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.815976303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.816003213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.816032947Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.816070539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.816089434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.817948 containerd[1581]: time="2025-01-30T13:55:21.816107319Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:55:21.818549 containerd[1581]: time="2025-01-30T13:55:21.816225599Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:55:21.818549 containerd[1581]: time="2025-01-30T13:55:21.816314209Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:55:21.818549 containerd[1581]: time="2025-01-30T13:55:21.816333800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:55:21.818549 containerd[1581]: time="2025-01-30T13:55:21.816352258Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:55:21.818549 containerd[1581]: time="2025-01-30T13:55:21.816366225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.818549 containerd[1581]: time="2025-01-30T13:55:21.816388183Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:55:21.818549 containerd[1581]: time="2025-01-30T13:55:21.816421487Z" level=info msg="NRI interface is disabled by configuration." 
Jan 30 13:55:21.818549 containerd[1581]: time="2025-01-30T13:55:21.816438449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:55:21.818884 containerd[1581]: time="2025-01-30T13:55:21.817020118Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:55:21.818884 containerd[1581]: time="2025-01-30T13:55:21.817136646Z" level=info msg="Connect containerd service" Jan 30 13:55:21.821369 containerd[1581]: time="2025-01-30T13:55:21.819309912Z" level=info msg="using legacy CRI server" Jan 30 13:55:21.821369 containerd[1581]: time="2025-01-30T13:55:21.819345800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:55:21.821369 containerd[1581]: time="2025-01-30T13:55:21.819575422Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:55:21.821369 containerd[1581]: time="2025-01-30T13:55:21.820825060Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:55:21.821933 containerd[1581]: time="2025-01-30T13:55:21.821897942Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:55:21.822127 containerd[1581]: time="2025-01-30T13:55:21.822105303Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:55:21.822467 containerd[1581]: time="2025-01-30T13:55:21.822401526Z" level=info msg="Start subscribing containerd event" Jan 30 13:55:21.822608 containerd[1581]: time="2025-01-30T13:55:21.822588305Z" level=info msg="Start recovering state" Jan 30 13:55:21.822791 containerd[1581]: time="2025-01-30T13:55:21.822769456Z" level=info msg="Start event monitor" Jan 30 13:55:21.822887 containerd[1581]: time="2025-01-30T13:55:21.822868917Z" level=info msg="Start snapshots syncer" Jan 30 13:55:21.822967 containerd[1581]: time="2025-01-30T13:55:21.822949829Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:55:21.823042 containerd[1581]: time="2025-01-30T13:55:21.823027980Z" level=info msg="Start streaming server" Jan 30 13:55:21.823244 containerd[1581]: time="2025-01-30T13:55:21.823223589Z" level=info msg="containerd successfully booted in 0.078440s" Jan 30 13:55:21.823801 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:55:22.203434 tar[1577]: linux-amd64/LICENSE Jan 30 13:55:22.204297 tar[1577]: linux-amd64/README.md Jan 30 13:55:22.241732 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:55:22.733603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:22.751077 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:22.751634 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:55:22.757408 systemd[1]: Startup finished in 6.954s (kernel) + 7.392s (userspace) = 14.347s. Jan 30 13:55:23.653505 kubelet[1689]: E0130 13:55:23.653311 1689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:23.655894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:23.656224 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:55:30.455115 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:55:30.465679 systemd[1]: Started sshd@0-146.190.136.39:22-147.75.109.163:57924.service - OpenSSH per-connection server daemon (147.75.109.163:57924). Jan 30 13:55:30.527004 sshd[1702]: Accepted publickey for core from 147.75.109.163 port 57924 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:30.531269 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:30.544709 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:55:30.559666 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:55:30.565275 systemd-logind[1558]: New session 1 of user core. Jan 30 13:55:30.580091 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 30 13:55:30.590607 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:55:30.594867 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:55:30.738044 systemd[1708]: Queued start job for default target default.target. Jan 30 13:55:30.738633 systemd[1708]: Created slice app.slice - User Application Slice. Jan 30 13:55:30.738660 systemd[1708]: Reached target paths.target - Paths. Jan 30 13:55:30.738675 systemd[1708]: Reached target timers.target - Timers. Jan 30 13:55:30.745401 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:55:30.758001 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:55:30.758115 systemd[1708]: Reached target sockets.target - Sockets. Jan 30 13:55:30.758137 systemd[1708]: Reached target basic.target - Basic System. Jan 30 13:55:30.758232 systemd[1708]: Reached target default.target - Main User Target. Jan 30 13:55:30.758280 systemd[1708]: Startup finished in 152ms. Jan 30 13:55:30.758817 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:55:30.772831 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:55:30.841464 systemd[1]: Started sshd@1-146.190.136.39:22-147.75.109.163:57930.service - OpenSSH per-connection server daemon (147.75.109.163:57930). Jan 30 13:55:30.903731 sshd[1720]: Accepted publickey for core from 147.75.109.163 port 57930 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:30.906090 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:30.912771 systemd-logind[1558]: New session 2 of user core. Jan 30 13:55:30.923848 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:55:30.994469 sshd[1720]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:31.004750 systemd[1]: Started sshd@2-146.190.136.39:22-147.75.109.163:57944.service - OpenSSH per-connection server daemon (147.75.109.163:57944). Jan 30 13:55:31.005627 systemd[1]: sshd@1-146.190.136.39:22-147.75.109.163:57930.service: Deactivated successfully. Jan 30 13:55:31.018378 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:55:31.021936 systemd-logind[1558]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:55:31.023842 systemd-logind[1558]: Removed session 2. Jan 30 13:55:31.057154 sshd[1725]: Accepted publickey for core from 147.75.109.163 port 57944 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:31.060011 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:31.069629 systemd-logind[1558]: New session 3 of user core. Jan 30 13:55:31.080857 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:55:31.146403 sshd[1725]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:31.162842 systemd[1]: Started sshd@3-146.190.136.39:22-147.75.109.163:57956.service - OpenSSH per-connection server daemon (147.75.109.163:57956). Jan 30 13:55:31.163646 systemd[1]: sshd@2-146.190.136.39:22-147.75.109.163:57944.service: Deactivated successfully. Jan 30 13:55:31.174074 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:55:31.176630 systemd-logind[1558]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:55:31.180481 systemd-logind[1558]: Removed session 3. 
Jan 30 13:55:31.214959 sshd[1733]: Accepted publickey for core from 147.75.109.163 port 57956 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:31.217442 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:31.225295 systemd-logind[1558]: New session 4 of user core. Jan 30 13:55:31.236017 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:55:31.310535 sshd[1733]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:31.326046 systemd[1]: Started sshd@4-146.190.136.39:22-147.75.109.163:57962.service - OpenSSH per-connection server daemon (147.75.109.163:57962). Jan 30 13:55:31.327610 systemd[1]: sshd@3-146.190.136.39:22-147.75.109.163:57956.service: Deactivated successfully. Jan 30 13:55:31.332465 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:55:31.335789 systemd-logind[1558]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:55:31.338300 systemd-logind[1558]: Removed session 4. Jan 30 13:55:31.376398 sshd[1741]: Accepted publickey for core from 147.75.109.163 port 57962 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:31.378333 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:31.385269 systemd-logind[1558]: New session 5 of user core. Jan 30 13:55:31.392784 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:55:31.467787 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:55:31.468677 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:31.486185 sudo[1748]: pam_unix(sudo:session): session closed for user root Jan 30 13:55:31.490452 sshd[1741]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:31.504636 systemd[1]: Started sshd@5-146.190.136.39:22-147.75.109.163:57974.service - OpenSSH per-connection server daemon (147.75.109.163:57974). Jan 30 13:55:31.505737 systemd[1]: sshd@4-146.190.136.39:22-147.75.109.163:57962.service: Deactivated successfully. Jan 30 13:55:31.508841 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:55:31.511097 systemd-logind[1558]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:55:31.514745 systemd-logind[1558]: Removed session 5. Jan 30 13:55:31.563600 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 57974 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:31.566829 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:31.575105 systemd-logind[1558]: New session 6 of user core. Jan 30 13:55:31.585808 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:55:31.656203 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:55:31.656718 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:31.663450 sudo[1758]: pam_unix(sudo:session): session closed for user root Jan 30 13:55:31.673079 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:55:31.673997 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:31.694699 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 30 13:55:31.718427 auditctl[1761]: No rules Jan 30 13:55:31.719443 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:55:31.719869 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:55:31.732306 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:55:31.781297 augenrules[1780]: No rules Jan 30 13:55:31.783635 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:55:31.789176 sudo[1757]: pam_unix(sudo:session): session closed for user root Jan 30 13:55:31.794217 sshd[1751]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:31.804855 systemd[1]: Started sshd@6-146.190.136.39:22-147.75.109.163:57990.service - OpenSSH per-connection server daemon (147.75.109.163:57990). Jan 30 13:55:31.805864 systemd[1]: sshd@5-146.190.136.39:22-147.75.109.163:57974.service: Deactivated successfully. Jan 30 13:55:31.814146 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:55:31.817583 systemd-logind[1558]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:55:31.819882 systemd-logind[1558]: Removed session 6. Jan 30 13:55:31.867078 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 57990 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:31.869861 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:31.878735 systemd-logind[1558]: New session 7 of user core. Jan 30 13:55:31.885742 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:55:31.953574 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:55:31.954058 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:32.495160 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:55:32.511341 (dockerd)[1810]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:55:33.074907 dockerd[1810]: time="2025-01-30T13:55:33.074568727Z" level=info msg="Starting up" Jan 30 13:55:33.215788 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport343889744-merged.mount: Deactivated successfully. Jan 30 13:55:33.330062 dockerd[1810]: time="2025-01-30T13:55:33.329606085Z" level=info msg="Loading containers: start." Jan 30 13:55:33.661984 kernel: Initializing XFRM netlink socket Jan 30 13:55:33.689459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:55:33.707192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:33.905356 systemd-networkd[1218]: docker0: Link UP Jan 30 13:55:33.931953 dockerd[1810]: time="2025-01-30T13:55:33.931834790Z" level=info msg="Loading containers: done." 
Jan 30 13:55:33.957596 dockerd[1810]: time="2025-01-30T13:55:33.956940184Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:55:33.959188 dockerd[1810]: time="2025-01-30T13:55:33.958292986Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:55:33.959188 dockerd[1810]: time="2025-01-30T13:55:33.958553799Z" level=info msg="Daemon has completed initialization" Jan 30 13:55:34.029679 dockerd[1810]: time="2025-01-30T13:55:34.029577152Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:55:34.030663 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:55:34.059663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:34.076957 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:34.164229 kubelet[1952]: E0130 13:55:34.164041 1952 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:34.168920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:34.169229 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:55:35.189387 containerd[1581]: time="2025-01-30T13:55:35.189243761Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:55:35.795374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471551334.mount: Deactivated successfully. 
Jan 30 13:55:37.228047 containerd[1581]: time="2025-01-30T13:55:37.227958639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:37.229238 containerd[1581]: time="2025-01-30T13:55:37.229183801Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 13:55:37.230121 containerd[1581]: time="2025-01-30T13:55:37.230071643Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:37.234339 containerd[1581]: time="2025-01-30T13:55:37.234279419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:37.236348 containerd[1581]: time="2025-01-30T13:55:37.236258345Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.046934525s" Jan 30 13:55:37.237308 containerd[1581]: time="2025-01-30T13:55:37.236552289Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:55:37.269788 containerd[1581]: time="2025-01-30T13:55:37.269735219Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:55:38.914215 containerd[1581]: time="2025-01-30T13:55:38.913964943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:38.915750 containerd[1581]: time="2025-01-30T13:55:38.915358085Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 13:55:38.916476 containerd[1581]: time="2025-01-30T13:55:38.916440433Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:38.921335 containerd[1581]: time="2025-01-30T13:55:38.921253535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:38.922220 containerd[1581]: time="2025-01-30T13:55:38.922159988Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.65238129s" Jan 30 13:55:38.922220 containerd[1581]: time="2025-01-30T13:55:38.922217789Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:55:38.967787 
containerd[1581]: time="2025-01-30T13:55:38.967737261Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:55:40.164205 containerd[1581]: time="2025-01-30T13:55:40.162438574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:40.164875 containerd[1581]: time="2025-01-30T13:55:40.164426494Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 13:55:40.166378 containerd[1581]: time="2025-01-30T13:55:40.165581151Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:40.172196 containerd[1581]: time="2025-01-30T13:55:40.170715279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:40.172922 containerd[1581]: time="2025-01-30T13:55:40.172863937Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.205075053s" Jan 30 13:55:40.173107 containerd[1581]: time="2025-01-30T13:55:40.173077264Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:55:40.207275 containerd[1581]: time="2025-01-30T13:55:40.207234471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:55:41.310984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642206800.mount: Deactivated successfully. 
Jan 30 13:55:41.745389 containerd[1581]: time="2025-01-30T13:55:41.744300348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.745389 containerd[1581]: time="2025-01-30T13:55:41.745322171Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:55:41.746287 containerd[1581]: time="2025-01-30T13:55:41.746251130Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.750464 containerd[1581]: time="2025-01-30T13:55:41.750396943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.751702 containerd[1581]: time="2025-01-30T13:55:41.751621730Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.544000719s" Jan 30 13:55:41.751702 containerd[1581]: time="2025-01-30T13:55:41.751700788Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:55:41.783369 containerd[1581]: time="2025-01-30T13:55:41.783306081Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:55:41.785231 systemd-resolved[1468]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 30 13:55:42.246713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1042297160.mount: Deactivated successfully. 
Jan 30 13:55:43.200119 containerd[1581]: time="2025-01-30T13:55:43.198714824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.200808 containerd[1581]: time="2025-01-30T13:55:43.200746844Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:55:43.201111 containerd[1581]: time="2025-01-30T13:55:43.201071409Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.204768 containerd[1581]: time="2025-01-30T13:55:43.204711727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.207046 containerd[1581]: time="2025-01-30T13:55:43.206985726Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.423616105s" Jan 30 13:55:43.207320 containerd[1581]: time="2025-01-30T13:55:43.207264249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:55:43.260726 containerd[1581]: time="2025-01-30T13:55:43.260654653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:55:43.791967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount25852769.mount: Deactivated successfully. 
Jan 30 13:55:43.797943 containerd[1581]: time="2025-01-30T13:55:43.797880480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.798557 containerd[1581]: time="2025-01-30T13:55:43.798298286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:55:43.799065 containerd[1581]: time="2025-01-30T13:55:43.799037512Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.801365 containerd[1581]: time="2025-01-30T13:55:43.801331118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.803158 containerd[1581]: time="2025-01-30T13:55:43.802468825Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 541.752034ms" Jan 30 13:55:43.803158 containerd[1581]: time="2025-01-30T13:55:43.802509548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:55:43.834596 containerd[1581]: time="2025-01-30T13:55:43.834528900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:55:44.184626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:55:44.193500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:44.327249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4136536067.mount: Deactivated successfully. Jan 30 13:55:44.349804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:44.360441 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:44.429188 kubelet[2140]: E0130 13:55:44.426219 2140 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:44.430751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:44.432314 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:55:44.897399 systemd-resolved[1468]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jan 30 13:55:46.318249 containerd[1581]: time="2025-01-30T13:55:46.318171755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:46.319861 containerd[1581]: time="2025-01-30T13:55:46.319798731Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:55:46.322006 containerd[1581]: time="2025-01-30T13:55:46.320315450Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:46.324730 containerd[1581]: time="2025-01-30T13:55:46.324669732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:46.326417 containerd[1581]: time="2025-01-30T13:55:46.326362168Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.491751761s" Jan 30 13:55:46.326630 containerd[1581]: time="2025-01-30T13:55:46.326605938Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:55:49.916900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:49.927676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:49.968629 systemd[1]: Reloading requested from client PID 2253 ('systemctl') (unit session-7.scope)... Jan 30 13:55:49.968654 systemd[1]: Reloading... Jan 30 13:55:50.113308 zram_generator::config[2289]: No configuration found. Jan 30 13:55:50.290674 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:50.384293 systemd[1]: Reloading finished in 415 ms. Jan 30 13:55:50.451403 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:55:50.451595 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:55:50.452109 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:50.462750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:50.600445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:50.618753 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:55:50.680806 kubelet[2355]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:55:50.680806 kubelet[2355]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:55:50.680806 kubelet[2355]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:55:50.682944 kubelet[2355]: I0130 13:55:50.682839 2355 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:55:51.195069 kubelet[2355]: I0130 13:55:51.194972 2355 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:55:51.195069 kubelet[2355]: I0130 13:55:51.195035 2355 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:55:51.195561 kubelet[2355]: I0130 13:55:51.195482 2355 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:55:51.218350 kubelet[2355]: I0130 13:55:51.217876 2355 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:55:51.221539 kubelet[2355]: E0130 13:55:51.220568 2355 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://146.190.136.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:51.245564 kubelet[2355]: I0130 13:55:51.245505 2355 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:55:51.247179 kubelet[2355]: I0130 13:55:51.247050 2355 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:55:51.247551 kubelet[2355]: I0130 13:55:51.247180 2355 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-8-baee985ae6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:55:51.247726 kubelet[2355]: I0130 13:55:51.247590 2355 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 30 13:55:51.247726 kubelet[2355]: I0130 13:55:51.247611 2355 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:55:51.247885 kubelet[2355]: I0130 13:55:51.247855 2355 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:55:51.249371 kubelet[2355]: I0130 13:55:51.249330 2355 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:55:51.249490 kubelet[2355]: I0130 13:55:51.249384 2355 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:55:51.249490 kubelet[2355]: I0130 13:55:51.249457 2355 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:55:51.249562 kubelet[2355]: I0130 13:55:51.249495 2355 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:55:51.254195 kubelet[2355]: W0130 13:55:51.253905 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.136.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:51.255175 kubelet[2355]: E0130 13:55:51.254618 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.136.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:51.255313 kubelet[2355]: W0130 13:55:51.255194 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.136.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-8-baee985ae6&limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:51.255313 kubelet[2355]: E0130 13:55:51.255240 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.136.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-8-baee985ae6&limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:51.255481 kubelet[2355]: I0130 13:55:51.255429 2355 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:55:51.258556 kubelet[2355]: I0130 13:55:51.257643 2355 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:55:51.258556 kubelet[2355]: W0130 13:55:51.257781 2355 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:55:51.260689 kubelet[2355]: I0130 13:55:51.260299 2355 server.go:1264] "Started kubelet" Jan 30 13:55:51.264717 kubelet[2355]: I0130 13:55:51.264621 2355 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:55:51.266212 kubelet[2355]: I0130 13:55:51.266100 2355 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:55:51.270932 kubelet[2355]: I0130 13:55:51.270328 2355 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:55:51.270932 kubelet[2355]: I0130 13:55:51.270637 2355 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:55:51.270932 kubelet[2355]: I0130 13:55:51.270708 2355 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:55:51.272982 kubelet[2355]: E0130 13:55:51.271965 2355 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.136.39:6443/api/v1/namespaces/default/events\": dial tcp 146.190.136.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-8-baee985ae6.181f7ceff6575488 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-8-baee985ae6,UID:ci-4081.3.0-8-baee985ae6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-8-baee985ae6,},FirstTimestamp:2025-01-30 13:55:51.260247176 +0000 UTC m=+0.636452928,LastTimestamp:2025-01-30 13:55:51.260247176 +0000 UTC m=+0.636452928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-8-baee985ae6,}" Jan 30 13:55:51.283206 kubelet[2355]: E0130 13:55:51.282984 2355 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-8-baee985ae6\" not found" Jan 30 13:55:51.283206 kubelet[2355]: I0130 13:55:51.283061 2355 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:55:51.283376 kubelet[2355]: I0130 13:55:51.283271 2355 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:55:51.283376 kubelet[2355]: I0130 13:55:51.283366 2355 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:55:51.284887 kubelet[2355]: W0130 13:55:51.283761 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.136.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:51.284887 kubelet[2355]: E0130 13:55:51.283836 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.136.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:51.284887 kubelet[2355]: E0130 13:55:51.284104 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.136.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-8-baee985ae6?timeout=10s\": dial tcp 146.190.136.39:6443: connect: connection refused" interval="200ms" Jan 30 13:55:51.285433 kubelet[2355]: I0130 13:55:51.285406 2355 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:55:51.285538 kubelet[2355]: I0130 13:55:51.285515 
2355 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:55:51.288495 kubelet[2355]: E0130 13:55:51.288458 2355 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:55:51.289575 kubelet[2355]: I0130 13:55:51.289540 2355 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:55:51.303043 kubelet[2355]: I0130 13:55:51.302988 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:55:51.304926 kubelet[2355]: I0130 13:55:51.304888 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:55:51.305752 kubelet[2355]: I0130 13:55:51.305337 2355 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:55:51.305752 kubelet[2355]: I0130 13:55:51.305378 2355 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:55:51.305752 kubelet[2355]: E0130 13:55:51.305440 2355 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:55:51.324658 kubelet[2355]: W0130 13:55:51.324561 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.136.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:51.324658 kubelet[2355]: E0130 13:55:51.324675 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.136.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:51.342388 kubelet[2355]: I0130 13:55:51.342316 2355 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:55:51.342388 kubelet[2355]: I0130 13:55:51.342346 2355 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:55:51.342388 kubelet[2355]: I0130 13:55:51.342399 2355 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:55:51.344652 kubelet[2355]: I0130 13:55:51.344593 2355 policy_none.go:49] "None policy: Start" Jan 30 13:55:51.346302 kubelet[2355]: I0130 13:55:51.346246 2355 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:55:51.346302 kubelet[2355]: I0130 13:55:51.346300 2355 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:55:51.355525 kubelet[2355]: I0130 13:55:51.355444 2355 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:55:51.355906 kubelet[2355]: I0130 13:55:51.355800 2355 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:55:51.356086 kubelet[2355]: I0130 13:55:51.356054 2355 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:55:51.362398 kubelet[2355]: E0130 13:55:51.362329 2355 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-8-baee985ae6\" not found" Jan 30 13:55:51.385698 kubelet[2355]: I0130 13:55:51.385620 2355 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-baee985ae6" Jan 30 
13:55:51.386137 kubelet[2355]: E0130 13:55:51.386103 2355 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.136.39:6443/api/v1/nodes\": dial tcp 146.190.136.39:6443: connect: connection refused" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.406513 kubelet[2355]: I0130 13:55:51.406430 2355 topology_manager.go:215] "Topology Admit Handler" podUID="e1e6e8271a139f1620e8ac7ff263c5ae" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.408947 kubelet[2355]: I0130 13:55:51.408422 2355 topology_manager.go:215] "Topology Admit Handler" podUID="63d547c37ebdc0c24151d8fe2d9ccd16" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.409956 kubelet[2355]: I0130 13:55:51.409798 2355 topology_manager.go:215] "Topology Admit Handler" podUID="988887eda6360eefad177b3d1d200202" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.484751 kubelet[2355]: E0130 13:55:51.484594 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.136.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-8-baee985ae6?timeout=10s\": dial tcp 146.190.136.39:6443: connect: connection refused" interval="400ms" Jan 30 13:55:51.585486 kubelet[2355]: I0130 13:55:51.585395 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63d547c37ebdc0c24151d8fe2d9ccd16-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-8-baee985ae6\" (UID: \"63d547c37ebdc0c24151d8fe2d9ccd16\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.585486 kubelet[2355]: I0130 13:55:51.585486 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/63d547c37ebdc0c24151d8fe2d9ccd16-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-8-baee985ae6\" (UID: \"63d547c37ebdc0c24151d8fe2d9ccd16\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.585737 kubelet[2355]: I0130 13:55:51.585520 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63d547c37ebdc0c24151d8fe2d9ccd16-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-8-baee985ae6\" (UID: \"63d547c37ebdc0c24151d8fe2d9ccd16\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.585737 kubelet[2355]: I0130 13:55:51.585548 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63d547c37ebdc0c24151d8fe2d9ccd16-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-8-baee985ae6\" (UID: \"63d547c37ebdc0c24151d8fe2d9ccd16\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.585737 kubelet[2355]: I0130 13:55:51.585578 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63d547c37ebdc0c24151d8fe2d9ccd16-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-8-baee985ae6\" (UID: \"63d547c37ebdc0c24151d8fe2d9ccd16\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.585737 kubelet[2355]: 
I0130 13:55:51.585607 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/988887eda6360eefad177b3d1d200202-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-8-baee985ae6\" (UID: \"988887eda6360eefad177b3d1d200202\") " pod="kube-system/kube-scheduler-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.585737 kubelet[2355]: I0130 13:55:51.585632 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1e6e8271a139f1620e8ac7ff263c5ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-8-baee985ae6\" (UID: \"e1e6e8271a139f1620e8ac7ff263c5ae\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.585958 kubelet[2355]: I0130 13:55:51.585659 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1e6e8271a139f1620e8ac7ff263c5ae-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-8-baee985ae6\" (UID: \"e1e6e8271a139f1620e8ac7ff263c5ae\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.585958 kubelet[2355]: I0130 13:55:51.585685 2355 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1e6e8271a139f1620e8ac7ff263c5ae-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-8-baee985ae6\" (UID: \"e1e6e8271a139f1620e8ac7ff263c5ae\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.587943 kubelet[2355]: I0130 13:55:51.587456 2355 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.587943 kubelet[2355]: E0130 13:55:51.587893 2355 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.136.39:6443/api/v1/nodes\": dial tcp 146.190.136.39:6443: connect: connection refused" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.716956 kubelet[2355]: E0130 13:55:51.716899 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:51.718847 containerd[1581]: time="2025-01-30T13:55:51.718213567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-8-baee985ae6,Uid:e1e6e8271a139f1620e8ac7ff263c5ae,Namespace:kube-system,Attempt:0,}" Jan 30 13:55:51.720106 kubelet[2355]: E0130 13:55:51.720073 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:51.722609 systemd-resolved[1468]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
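Every reflector list, event post, lease request and node registration above fails the same way: "dial tcp 146.190.136.39:6443: connect: connection refused", because the kube-apiserver static pod whose sandbox is being created here is not listening yet. A minimal sketch of the same TCP probe, handy for checking the port by hand; it is not part of the kubelet, and the two-second timeout is an arbitrary choice of mine.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint the kubelet keeps retrying in the entries above.
        conn, err := net.DialTimeout("tcp", "146.190.136.39:6443", 2*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err) // "connection refused" until the apiserver pod is up
            return
        }
        conn.Close()
        fmt.Println("apiserver port is reachable")
    }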
Jan 30 13:55:51.724385 kubelet[2355]: E0130 13:55:51.724087 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:51.726977 containerd[1581]: time="2025-01-30T13:55:51.726515884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-8-baee985ae6,Uid:988887eda6360eefad177b3d1d200202,Namespace:kube-system,Attempt:0,}" Jan 30 13:55:51.726977 containerd[1581]: time="2025-01-30T13:55:51.726645637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-8-baee985ae6,Uid:63d547c37ebdc0c24151d8fe2d9ccd16,Namespace:kube-system,Attempt:0,}" Jan 30 13:55:51.886237 kubelet[2355]: E0130 13:55:51.886052 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.136.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-8-baee985ae6?timeout=10s\": dial tcp 146.190.136.39:6443: connect: connection refused" interval="800ms" Jan 30 13:55:51.992559 kubelet[2355]: I0130 13:55:51.992525 2355 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:51.993545 kubelet[2355]: E0130 13:55:51.993503 2355 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.136.39:6443/api/v1/nodes\": dial tcp 146.190.136.39:6443: connect: connection refused" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:52.245643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019441866.mount: Deactivated successfully. Jan 30 13:55:52.254506 containerd[1581]: time="2025-01-30T13:55:52.254435363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:55:52.255636 containerd[1581]: time="2025-01-30T13:55:52.255571216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:55:52.256740 containerd[1581]: time="2025-01-30T13:55:52.256582120Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:55:52.258925 containerd[1581]: time="2025-01-30T13:55:52.258861225Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:55:52.261189 containerd[1581]: time="2025-01-30T13:55:52.259548764Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:55:52.261189 containerd[1581]: time="2025-01-30T13:55:52.259905111Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:55:52.261189 containerd[1581]: time="2025-01-30T13:55:52.260431591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:55:52.262961 containerd[1581]: time="2025-01-30T13:55:52.262897669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:55:52.265076 containerd[1581]: time="2025-01-30T13:55:52.265023242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.271165ms" Jan 30 13:55:52.267637 containerd[1581]: time="2025-01-30T13:55:52.267587955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 540.953028ms" Jan 30 13:55:52.272522 containerd[1581]: time="2025-01-30T13:55:52.272464180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.140646ms" Jan 30 13:55:52.308564 kubelet[2355]: W0130 13:55:52.308504 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.136.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:52.308751 kubelet[2355]: E0130 13:55:52.308740 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://146.190.136.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:52.435801 containerd[1581]: time="2025-01-30T13:55:52.435504803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:52.437743 containerd[1581]: time="2025-01-30T13:55:52.437648921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:52.438071 containerd[1581]: time="2025-01-30T13:55:52.438028664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.438776 containerd[1581]: time="2025-01-30T13:55:52.438718589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.439604 containerd[1581]: time="2025-01-30T13:55:52.439214821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:52.440339 containerd[1581]: time="2025-01-30T13:55:52.440188997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:52.440339 containerd[1581]: time="2025-01-30T13:55:52.440217519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.440540 containerd[1581]: time="2025-01-30T13:55:52.440490349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.449235 containerd[1581]: time="2025-01-30T13:55:52.447535294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:55:52.449988 containerd[1581]: time="2025-01-30T13:55:52.449893990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:55:52.450180 containerd[1581]: time="2025-01-30T13:55:52.450078875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.450452 containerd[1581]: time="2025-01-30T13:55:52.450388507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:55:52.582813 containerd[1581]: time="2025-01-30T13:55:52.582568127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-8-baee985ae6,Uid:63d547c37ebdc0c24151d8fe2d9ccd16,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8b1e7c7e9800374bd67a06ec42cce123475870e96ce9382393064224073c1c5\"" Jan 30 13:55:52.587091 kubelet[2355]: W0130 13:55:52.586001 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.136.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:52.587091 kubelet[2355]: E0130 13:55:52.586129 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://146.190.136.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:52.587091 kubelet[2355]: E0130 13:55:52.586456 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:52.599721 containerd[1581]: time="2025-01-30T13:55:52.599651810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-8-baee985ae6,Uid:988887eda6360eefad177b3d1d200202,Namespace:kube-system,Attempt:0,} returns sandbox id \"62b9f9e728df0dbcce9598614e5a0958e1dd213a8049fce69271dbaae1ddf015\"" Jan 30 13:55:52.601046 kubelet[2355]: E0130 13:55:52.601016 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:52.602020 containerd[1581]: time="2025-01-30T13:55:52.601491636Z" level=info msg="CreateContainer within sandbox \"e8b1e7c7e9800374bd67a06ec42cce123475870e96ce9382393064224073c1c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:55:52.606500 containerd[1581]: time="2025-01-30T13:55:52.606244951Z" level=info msg="CreateContainer within sandbox \"62b9f9e728df0dbcce9598614e5a0958e1dd213a8049fce69271dbaae1ddf015\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:55:52.616942 containerd[1581]: time="2025-01-30T13:55:52.616884675Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-8-baee985ae6,Uid:e1e6e8271a139f1620e8ac7ff263c5ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dfa6d49bcf5c7a2621e60fbabb064586067cfbef3c20bcf90be2f865f9847b3\"" Jan 30 13:55:52.618842 kubelet[2355]: E0130 13:55:52.618511 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:52.626552 containerd[1581]: time="2025-01-30T13:55:52.626367011Z" level=info msg="CreateContainer within sandbox \"2dfa6d49bcf5c7a2621e60fbabb064586067cfbef3c20bcf90be2f865f9847b3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:55:52.629703 containerd[1581]: time="2025-01-30T13:55:52.629645221Z" level=info msg="CreateContainer within sandbox \"e8b1e7c7e9800374bd67a06ec42cce123475870e96ce9382393064224073c1c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"41e5312b1dcf8ccf3d3bc4230f897129a93d5d0e2ef3978d8a172f9349d6779d\"" Jan 30 13:55:52.631234 containerd[1581]: time="2025-01-30T13:55:52.630990913Z" level=info msg="StartContainer for \"41e5312b1dcf8ccf3d3bc4230f897129a93d5d0e2ef3978d8a172f9349d6779d\"" Jan 30 13:55:52.637224 containerd[1581]: time="2025-01-30T13:55:52.637143454Z" level=info msg="CreateContainer within sandbox \"62b9f9e728df0dbcce9598614e5a0958e1dd213a8049fce69271dbaae1ddf015\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa71bf03ceb819cb2e03c231dc6dc20df66e14d0417fc7ac326b4afad33c02f1\"" Jan 30 13:55:52.640883 containerd[1581]: time="2025-01-30T13:55:52.639345054Z" level=info msg="StartContainer for \"fa71bf03ceb819cb2e03c231dc6dc20df66e14d0417fc7ac326b4afad33c02f1\"" Jan 30 13:55:52.651037 containerd[1581]: time="2025-01-30T13:55:52.650975708Z" level=info msg="CreateContainer within sandbox \"2dfa6d49bcf5c7a2621e60fbabb064586067cfbef3c20bcf90be2f865f9847b3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2c55c5d83320a743588c828024a70e72cc1e3ea101224164f30bb434a48e839d\"" Jan 30 13:55:52.652017 containerd[1581]: time="2025-01-30T13:55:52.651883846Z" level=info msg="StartContainer for \"2c55c5d83320a743588c828024a70e72cc1e3ea101224164f30bb434a48e839d\"" Jan 30 13:55:52.689462 kubelet[2355]: E0130 13:55:52.689363 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.136.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-8-baee985ae6?timeout=10s\": dial tcp 146.190.136.39:6443: connect: connection refused" interval="1.6s" Jan 30 13:55:52.769421 kubelet[2355]: W0130 13:55:52.769298 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.136.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-8-baee985ae6&limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:52.770149 kubelet[2355]: E0130 13:55:52.770115 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://146.190.136.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-8-baee985ae6&limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:52.787419 kubelet[2355]: W0130 13:55:52.787311 2355 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://146.190.136.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:52.787867 kubelet[2355]: E0130 13:55:52.787841 2355 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://146.190.136.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.136.39:6443: connect: connection refused Jan 30 13:55:52.805203 kubelet[2355]: I0130 13:55:52.803696 2355 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:52.812100 kubelet[2355]: E0130 13:55:52.809025 2355 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://146.190.136.39:6443/api/v1/nodes\": dial tcp 146.190.136.39:6443: connect: connection refused" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:52.825795 containerd[1581]: time="2025-01-30T13:55:52.825732263Z" level=info msg="StartContainer for \"fa71bf03ceb819cb2e03c231dc6dc20df66e14d0417fc7ac326b4afad33c02f1\" returns successfully" Jan 30 13:55:52.852591 containerd[1581]: time="2025-01-30T13:55:52.852021272Z" level=info msg="StartContainer for \"41e5312b1dcf8ccf3d3bc4230f897129a93d5d0e2ef3978d8a172f9349d6779d\" returns successfully" Jan 30 13:55:52.894382 containerd[1581]: time="2025-01-30T13:55:52.894291511Z" level=info msg="StartContainer for \"2c55c5d83320a743588c828024a70e72cc1e3ea101224164f30bb434a48e839d\" returns successfully" Jan 30 13:55:53.352617 kubelet[2355]: E0130 13:55:53.349701 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:53.365267 kubelet[2355]: E0130 13:55:53.363732 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:53.372418 kubelet[2355]: E0130 13:55:53.372256 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:54.375080 kubelet[2355]: E0130 13:55:54.375029 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:54.392228 kubelet[2355]: E0130 13:55:54.390128 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:54.415220 kubelet[2355]: I0130 13:55:54.413477 2355 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:55.341656 kubelet[2355]: I0130 13:55:55.341461 2355 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:55.369263 kubelet[2355]: E0130 13:55:55.368949 2355 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-8-baee985ae6\" not found" Jan 30 13:55:55.399551 kubelet[2355]: E0130 13:55:55.399458 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 30 13:55:55.469220 kubelet[2355]: E0130 13:55:55.469138 
2355 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-8-baee985ae6\" not found" Jan 30 13:55:55.569708 kubelet[2355]: E0130 13:55:55.569638 2355 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-8-baee985ae6\" not found" Jan 30 13:55:55.670428 kubelet[2355]: E0130 13:55:55.670275 2355 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-8-baee985ae6\" not found" Jan 30 13:55:56.255660 kubelet[2355]: I0130 13:55:56.255292 2355 apiserver.go:52] "Watching apiserver" Jan 30 13:55:56.284366 kubelet[2355]: I0130 13:55:56.284313 2355 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:55:56.870343 kubelet[2355]: W0130 13:55:56.870097 2355 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:56.871015 kubelet[2355]: E0130 13:55:56.870980 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:57.384106 kubelet[2355]: E0130 13:55:57.384054 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:58.063420 systemd[1]: Reloading requested from client PID 2627 ('systemctl') (unit session-7.scope)... Jan 30 13:55:58.063998 systemd[1]: Reloading... Jan 30 13:55:58.186240 zram_generator::config[2669]: No configuration found. Jan 30 13:55:58.380139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:58.486567 systemd[1]: Reloading finished in 421 ms. Jan 30 13:55:58.528823 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:58.544188 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:55:58.544704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:58.554551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:58.712723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:58.723106 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:55:58.862183 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:55:58.862183 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:55:58.862183 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
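The controller.go:145 "Failed to ensure lease exists, will retry" entries above double their retry interval on every failure: 200ms, 400ms, 800ms, 1.6s and finally 3.2s, until the apiserver comes up and the node lease can be created. A minimal Go sketch of that doubling pattern as it appears in the log; the 7s upper cap is an assumption of mine, since the log never gets that far.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Reproduces the retry intervals printed above: 200ms, 400ms, 800ms, 1.6s, 3.2s.
        interval := 200 * time.Millisecond
        for i := 0; i < 5; i++ {
            fmt.Println(interval)
            interval *= 2
            if interval > 7*time.Second { // cap is assumed, not shown in the log
                interval = 7 * time.Second
            }
        }
    }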
Jan 30 13:55:58.862183 kubelet[2727]: I0130 13:55:58.861490 2727 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:55:58.871232 kubelet[2727]: I0130 13:55:58.870775 2727 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:55:58.871232 kubelet[2727]: I0130 13:55:58.870825 2727 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:55:58.871232 kubelet[2727]: I0130 13:55:58.871156 2727 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:55:58.873437 kubelet[2727]: I0130 13:55:58.873383 2727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:55:58.877127 kubelet[2727]: I0130 13:55:58.876896 2727 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:55:58.888125 kubelet[2727]: I0130 13:55:58.888062 2727 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:55:58.888851 kubelet[2727]: I0130 13:55:58.888734 2727 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:55:58.889056 kubelet[2727]: I0130 13:55:58.888825 2727 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-8-baee985ae6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:55:58.889211 kubelet[2727]: I0130 13:55:58.889074 2727 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:55:58.889211 kubelet[2727]: I0130 13:55:58.889090 2727 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:55:58.889211 kubelet[2727]: I0130 13:55:58.889155 2727 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:55:58.889387 kubelet[2727]: I0130 13:55:58.889351 2727 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:55:58.890220 kubelet[2727]: I0130 13:55:58.890125 2727 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Jan 30 13:55:58.890338 kubelet[2727]: I0130 13:55:58.890278 2727 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:55:58.891279 kubelet[2727]: I0130 13:55:58.891235 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:55:58.903553 kubelet[2727]: I0130 13:55:58.902758 2727 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:55:58.903553 kubelet[2727]: I0130 13:55:58.903108 2727 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:55:58.904256 kubelet[2727]: I0130 13:55:58.904235 2727 server.go:1264] "Started kubelet" Jan 30 13:55:58.909484 kubelet[2727]: I0130 13:55:58.908865 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:55:58.916397 kubelet[2727]: E0130 13:55:58.916322 2727 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:55:58.920645 kubelet[2727]: I0130 13:55:58.920436 2727 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:55:58.926204 kubelet[2727]: I0130 13:55:58.921652 2727 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:55:58.926204 kubelet[2727]: I0130 13:55:58.925893 2727 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:55:58.929612 kubelet[2727]: I0130 13:55:58.929363 2727 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:55:58.933470 kubelet[2727]: I0130 13:55:58.933287 2727 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:55:58.935781 kubelet[2727]: I0130 13:55:58.935416 2727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:55:58.935781 kubelet[2727]: I0130 13:55:58.935782 2727 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:55:58.942830 kubelet[2727]: I0130 13:55:58.940920 2727 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:55:58.942830 kubelet[2727]: I0130 13:55:58.941118 2727 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:55:58.950458 kubelet[2727]: I0130 13:55:58.948226 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:55:58.950458 kubelet[2727]: I0130 13:55:58.949740 2727 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:55:58.958361 kubelet[2727]: I0130 13:55:58.957541 2727 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:55:58.958361 kubelet[2727]: I0130 13:55:58.957608 2727 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:55:58.958361 kubelet[2727]: I0130 13:55:58.957645 2727 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:55:58.958361 kubelet[2727]: E0130 13:55:58.957720 2727 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:55:59.029020 kubelet[2727]: I0130 13:55:59.026765 2727 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.056854 kubelet[2727]: I0130 13:55:59.055075 2727 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.056854 kubelet[2727]: I0130 13:55:59.055246 2727 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.058977 kubelet[2727]: E0130 13:55:59.058291 2727 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:55:59.102305 kubelet[2727]: I0130 13:55:59.102268 2727 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:55:59.102305 kubelet[2727]: I0130 13:55:59.102306 2727 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:55:59.102498 kubelet[2727]: I0130 13:55:59.102334 2727 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:55:59.102744 kubelet[2727]: I0130 13:55:59.102698 2727 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:55:59.102826 kubelet[2727]: I0130 13:55:59.102735 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:55:59.102826 kubelet[2727]: I0130 13:55:59.102761 2727 policy_none.go:49] "None policy: Start" Jan 30 13:55:59.104808 kubelet[2727]: I0130 13:55:59.104479 2727 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:55:59.104808 kubelet[2727]: I0130 13:55:59.104517 2727 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:55:59.104952 kubelet[2727]: I0130 13:55:59.104825 2727 state_mem.go:75] "Updated machine memory state" Jan 30 13:55:59.106822 kubelet[2727]: I0130 13:55:59.106785 2727 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:55:59.108323 kubelet[2727]: I0130 13:55:59.107422 2727 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:55:59.112974 kubelet[2727]: I0130 13:55:59.112595 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:55:59.259354 kubelet[2727]: I0130 13:55:59.259281 2727 topology_manager.go:215] "Topology Admit Handler" podUID="988887eda6360eefad177b3d1d200202" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.259500 kubelet[2727]: I0130 13:55:59.259442 2727 topology_manager.go:215] "Topology Admit Handler" podUID="e1e6e8271a139f1620e8ac7ff263c5ae" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.259970 kubelet[2727]: I0130 13:55:59.259580 2727 topology_manager.go:215] "Topology Admit Handler" podUID="63d547c37ebdc0c24151d8fe2d9ccd16" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.270859 kubelet[2727]: W0130 13:55:59.270481 2727 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:59.277056 kubelet[2727]: W0130 13:55:59.275779 2727 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:59.277056 kubelet[2727]: W0130 13:55:59.276005 2727 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:55:59.277056 kubelet[2727]: E0130 13:55:59.276069 2727 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-8-baee985ae6\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.428854 kubelet[2727]: I0130 13:55:59.428740 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1e6e8271a139f1620e8ac7ff263c5ae-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-8-baee985ae6\" (UID: \"e1e6e8271a139f1620e8ac7ff263c5ae\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.429364 kubelet[2727]: I0130 13:55:59.429157 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1e6e8271a139f1620e8ac7ff263c5ae-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-8-baee985ae6\" (UID: \"e1e6e8271a139f1620e8ac7ff263c5ae\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.429364 kubelet[2727]: I0130 13:55:59.429250 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1e6e8271a139f1620e8ac7ff263c5ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-8-baee985ae6\" (UID: \"e1e6e8271a139f1620e8ac7ff263c5ae\") " pod="kube-system/kube-apiserver-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.429364 kubelet[2727]: I0130 13:55:59.429296 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/63d547c37ebdc0c24151d8fe2d9ccd16-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-8-baee985ae6\" (UID: \"63d547c37ebdc0c24151d8fe2d9ccd16\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.429364 kubelet[2727]: I0130 13:55:59.429314 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63d547c37ebdc0c24151d8fe2d9ccd16-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-8-baee985ae6\" (UID: \"63d547c37ebdc0c24151d8fe2d9ccd16\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.429364 kubelet[2727]: I0130 13:55:59.429333 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63d547c37ebdc0c24151d8fe2d9ccd16-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-8-baee985ae6\" (UID: \"63d547c37ebdc0c24151d8fe2d9ccd16\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.429755 kubelet[2727]: I0130 13:55:59.429468 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/988887eda6360eefad177b3d1d200202-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-8-baee985ae6\" (UID: \"988887eda6360eefad177b3d1d200202\") " pod="kube-system/kube-scheduler-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.429755 kubelet[2727]: I0130 13:55:59.429485 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63d547c37ebdc0c24151d8fe2d9ccd16-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-8-baee985ae6\" (UID: \"63d547c37ebdc0c24151d8fe2d9ccd16\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.429755 kubelet[2727]: I0130 13:55:59.429619 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63d547c37ebdc0c24151d8fe2d9ccd16-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-8-baee985ae6\" (UID: \"63d547c37ebdc0c24151d8fe2d9ccd16\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-8-baee985ae6" Jan 30 13:55:59.573081 kubelet[2727]: E0130 13:55:59.573016 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:59.577310 kubelet[2727]: E0130 13:55:59.577263 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:59.578966 kubelet[2727]: E0130 13:55:59.578905 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:55:59.892299 kubelet[2727]: I0130 13:55:59.892002 2727 apiserver.go:52] "Watching apiserver" Jan 30 13:55:59.926504 kubelet[2727]: I0130 13:55:59.926324 2727 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:56:00.006205 kubelet[2727]: E0130 13:56:00.005319 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:00.007629 kubelet[2727]: E0130 13:56:00.007589 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:00.008726 kubelet[2727]: E0130 13:56:00.008665 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:00.200320 kubelet[2727]: I0130 13:56:00.200007 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-8-baee985ae6" podStartSLOduration=1.199977561 podStartE2EDuration="1.199977561s" podCreationTimestamp="2025-01-30 13:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:00.14319294 +0000 UTC m=+1.407181340" watchObservedRunningTime="2025-01-30 13:56:00.199977561 +0000 UTC m=+1.463965953" Jan 30 13:56:00.235672 kubelet[2727]: I0130 13:56:00.235597 2727 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-scheduler-ci-4081.3.0-8-baee985ae6" podStartSLOduration=1.235577601 podStartE2EDuration="1.235577601s" podCreationTimestamp="2025-01-30 13:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:00.20263644 +0000 UTC m=+1.466624827" watchObservedRunningTime="2025-01-30 13:56:00.235577601 +0000 UTC m=+1.499565997" Jan 30 13:56:00.268504 kubelet[2727]: I0130 13:56:00.268382 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-8-baee985ae6" podStartSLOduration=4.26835744 podStartE2EDuration="4.26835744s" podCreationTimestamp="2025-01-30 13:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:00.241567135 +0000 UTC m=+1.505555534" watchObservedRunningTime="2025-01-30 13:56:00.26835744 +0000 UTC m=+1.532345828" Jan 30 13:56:01.010447 kubelet[2727]: E0130 13:56:01.010338 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:03.372877 sudo[1793]: pam_unix(sudo:session): session closed for user root Jan 30 13:56:03.379471 sshd[1786]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:03.386939 systemd[1]: sshd@6-146.190.136.39:22-147.75.109.163:57990.service: Deactivated successfully. Jan 30 13:56:03.390359 systemd-logind[1558]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:56:03.391636 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:56:03.394141 systemd-logind[1558]: Removed session 7. Jan 30 13:56:03.812765 kubelet[2727]: E0130 13:56:03.812709 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:04.021147 kubelet[2727]: E0130 13:56:04.021094 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:05.809529 kubelet[2727]: E0130 13:56:05.809398 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:06.024534 kubelet[2727]: E0130 13:56:06.024495 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:06.414240 update_engine[1568]: I20250130 13:56:06.413370 1568 update_attempter.cc:509] Updating boot flags... 
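The recurring dns.go:153 error above ("Nameserver limits exceeded ... the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2") indicates the node's resolv.conf carried more nameserver entries than the kubelet will pass into pod DNS config (three is the usual limit), so the list was truncated to the three shown. A minimal sketch of that truncation; the helper name is mine, and the four-entry input is a made-up example rather than the node's actual resolv.conf.

    package main

    import "fmt"

    // capNameservers keeps at most max entries; hypothetical helper that mirrors
    // the truncation the dns.go entries above complain about.
    func capNameservers(servers []string, max int) []string {
        if len(servers) <= max {
            return servers
        }
        return servers[:max]
    }

    func main() {
        // Made-up example input; the log only shows the three entries that survived.
        resolvConf := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "8.8.8.8"}
        fmt.Println(capNameservers(resolvConf, 3)) // [67.207.67.2 67.207.67.3 67.207.67.2]
    }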
Jan 30 13:56:06.464283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2809) Jan 30 13:56:06.540481 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2808) Jan 30 13:56:07.026006 kubelet[2727]: E0130 13:56:07.025931 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:08.592876 kubelet[2727]: E0130 13:56:08.592762 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:09.029402 kubelet[2727]: E0130 13:56:09.029116 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:12.625201 kubelet[2727]: I0130 13:56:12.625054 2727 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:56:12.627493 containerd[1581]: time="2025-01-30T13:56:12.627433336Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:56:12.631480 kubelet[2727]: I0130 13:56:12.627869 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:56:13.537203 kubelet[2727]: I0130 13:56:13.534311 2727 topology_manager.go:215] "Topology Admit Handler" podUID="77097200-b9b9-4c96-9845-52900a8facfb" podNamespace="kube-system" podName="kube-proxy-nznjf" Jan 30 13:56:13.631871 kubelet[2727]: I0130 13:56:13.631821 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77097200-b9b9-4c96-9845-52900a8facfb-xtables-lock\") pod \"kube-proxy-nznjf\" (UID: \"77097200-b9b9-4c96-9845-52900a8facfb\") " pod="kube-system/kube-proxy-nznjf" Jan 30 13:56:13.632646 kubelet[2727]: I0130 13:56:13.632621 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77097200-b9b9-4c96-9845-52900a8facfb-lib-modules\") pod \"kube-proxy-nznjf\" (UID: \"77097200-b9b9-4c96-9845-52900a8facfb\") " pod="kube-system/kube-proxy-nznjf" Jan 30 13:56:13.632769 kubelet[2727]: I0130 13:56:13.632741 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t9lv\" (UniqueName: \"kubernetes.io/projected/77097200-b9b9-4c96-9845-52900a8facfb-kube-api-access-7t9lv\") pod \"kube-proxy-nznjf\" (UID: \"77097200-b9b9-4c96-9845-52900a8facfb\") " pod="kube-system/kube-proxy-nznjf" Jan 30 13:56:13.632885 kubelet[2727]: I0130 13:56:13.632860 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/77097200-b9b9-4c96-9845-52900a8facfb-kube-proxy\") pod \"kube-proxy-nznjf\" (UID: \"77097200-b9b9-4c96-9845-52900a8facfb\") " pod="kube-system/kube-proxy-nznjf" Jan 30 13:56:13.704576 kubelet[2727]: I0130 13:56:13.703916 2727 topology_manager.go:215] "Topology Admit Handler" podUID="6cd587aa-3469-45df-a885-c9be61821bae" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-dz5dw" Jan 30 13:56:13.734204 kubelet[2727]: I0130 13:56:13.733914 2727 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6cd587aa-3469-45df-a885-c9be61821bae-var-lib-calico\") pod \"tigera-operator-7bc55997bb-dz5dw\" (UID: \"6cd587aa-3469-45df-a885-c9be61821bae\") " pod="tigera-operator/tigera-operator-7bc55997bb-dz5dw" Jan 30 13:56:13.734204 kubelet[2727]: I0130 13:56:13.733971 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df5rh\" (UniqueName: \"kubernetes.io/projected/6cd587aa-3469-45df-a885-c9be61821bae-kube-api-access-df5rh\") pod \"tigera-operator-7bc55997bb-dz5dw\" (UID: \"6cd587aa-3469-45df-a885-c9be61821bae\") " pod="tigera-operator/tigera-operator-7bc55997bb-dz5dw" Jan 30 13:56:13.841391 kubelet[2727]: E0130 13:56:13.841124 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:13.848514 containerd[1581]: time="2025-01-30T13:56:13.848369097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nznjf,Uid:77097200-b9b9-4c96-9845-52900a8facfb,Namespace:kube-system,Attempt:0,}" Jan 30 13:56:13.890235 containerd[1581]: time="2025-01-30T13:56:13.889114836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:13.890235 containerd[1581]: time="2025-01-30T13:56:13.889768300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:13.890235 containerd[1581]: time="2025-01-30T13:56:13.889809857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:13.890235 containerd[1581]: time="2025-01-30T13:56:13.890074259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:13.948671 containerd[1581]: time="2025-01-30T13:56:13.948591624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nznjf,Uid:77097200-b9b9-4c96-9845-52900a8facfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"850114e27d894103221084da2b8bbff0695b6e510e4d8e92d35bcce6cfa07564\"" Jan 30 13:56:13.950546 kubelet[2727]: E0130 13:56:13.950489 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:13.963578 containerd[1581]: time="2025-01-30T13:56:13.963140203Z" level=info msg="CreateContainer within sandbox \"850114e27d894103221084da2b8bbff0695b6e510e4d8e92d35bcce6cfa07564\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:56:13.979678 containerd[1581]: time="2025-01-30T13:56:13.979567357Z" level=info msg="CreateContainer within sandbox \"850114e27d894103221084da2b8bbff0695b6e510e4d8e92d35bcce6cfa07564\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f946f370a93d373a9e9a53bd8ea423ea311c705d5193eeb371b27fe9ff225148\"" Jan 30 13:56:13.982249 containerd[1581]: time="2025-01-30T13:56:13.981553714Z" level=info msg="StartContainer for \"f946f370a93d373a9e9a53bd8ea423ea311c705d5193eeb371b27fe9ff225148\"" Jan 30 13:56:14.014207 containerd[1581]: time="2025-01-30T13:56:14.013892275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-dz5dw,Uid:6cd587aa-3469-45df-a885-c9be61821bae,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:56:14.080807 containerd[1581]: time="2025-01-30T13:56:14.080563052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:14.080807 containerd[1581]: time="2025-01-30T13:56:14.080729801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:14.081628 containerd[1581]: time="2025-01-30T13:56:14.080786249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:14.081782 containerd[1581]: time="2025-01-30T13:56:14.081564431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:14.086630 containerd[1581]: time="2025-01-30T13:56:14.086591007Z" level=info msg="StartContainer for \"f946f370a93d373a9e9a53bd8ea423ea311c705d5193eeb371b27fe9ff225148\" returns successfully" Jan 30 13:56:14.177154 containerd[1581]: time="2025-01-30T13:56:14.176998628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-dz5dw,Uid:6cd587aa-3469-45df-a885-c9be61821bae,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cb32101138994338d0850b4d0787ca7176cf83ba4c09707b46c03072d7669735\"" Jan 30 13:56:14.182945 containerd[1581]: time="2025-01-30T13:56:14.181584307Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:56:15.064835 kubelet[2727]: E0130 13:56:15.064689 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:15.577663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501868882.mount: Deactivated successfully. 
Jan 30 13:56:16.072698 kubelet[2727]: E0130 13:56:16.072107 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:16.305473 containerd[1581]: time="2025-01-30T13:56:16.305406021Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:16.306621 containerd[1581]: time="2025-01-30T13:56:16.306351639Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:56:16.309209 containerd[1581]: time="2025-01-30T13:56:16.307386434Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:16.311104 containerd[1581]: time="2025-01-30T13:56:16.311023823Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:16.312655 containerd[1581]: time="2025-01-30T13:56:16.312576511Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.13093966s" Jan 30 13:56:16.312655 containerd[1581]: time="2025-01-30T13:56:16.312642000Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:56:16.321929 containerd[1581]: time="2025-01-30T13:56:16.321874296Z" level=info msg="CreateContainer within sandbox \"cb32101138994338d0850b4d0787ca7176cf83ba4c09707b46c03072d7669735\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:56:16.338894 containerd[1581]: time="2025-01-30T13:56:16.338743314Z" level=info msg="CreateContainer within sandbox \"cb32101138994338d0850b4d0787ca7176cf83ba4c09707b46c03072d7669735\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1cbd3bb46c233cb0efab740cfae788dd95e860edc79f6928293762b743b31f7b\"" Jan 30 13:56:16.343106 containerd[1581]: time="2025-01-30T13:56:16.341005186Z" level=info msg="StartContainer for \"1cbd3bb46c233cb0efab740cfae788dd95e860edc79f6928293762b743b31f7b\"" Jan 30 13:56:16.345462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3071484255.mount: Deactivated successfully. 
Jan 30 13:56:16.436329 containerd[1581]: time="2025-01-30T13:56:16.436287547Z" level=info msg="StartContainer for \"1cbd3bb46c233cb0efab740cfae788dd95e860edc79f6928293762b743b31f7b\" returns successfully" Jan 30 13:56:17.086996 kubelet[2727]: I0130 13:56:17.086924 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nznjf" podStartSLOduration=4.086901656 podStartE2EDuration="4.086901656s" podCreationTimestamp="2025-01-30 13:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:15.082616942 +0000 UTC m=+16.346605331" watchObservedRunningTime="2025-01-30 13:56:17.086901656 +0000 UTC m=+18.350890050" Jan 30 13:56:19.012250 kubelet[2727]: I0130 13:56:19.012043 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-dz5dw" podStartSLOduration=3.873838696 podStartE2EDuration="6.012011981s" podCreationTimestamp="2025-01-30 13:56:13 +0000 UTC" firstStartedPulling="2025-01-30 13:56:14.1799136 +0000 UTC m=+15.443901974" lastFinishedPulling="2025-01-30 13:56:16.318086879 +0000 UTC m=+17.582075259" observedRunningTime="2025-01-30 13:56:17.087829896 +0000 UTC m=+18.351818302" watchObservedRunningTime="2025-01-30 13:56:19.012011981 +0000 UTC m=+20.276000398" Jan 30 13:56:19.996877 kubelet[2727]: I0130 13:56:19.996774 2727 topology_manager.go:215] "Topology Admit Handler" podUID="f8ee8baf-fdef-4896-9369-b72a1778c36a" podNamespace="calico-system" podName="calico-typha-845858f8bc-dtwhz" Jan 30 13:56:20.117156 kubelet[2727]: I0130 13:56:20.113909 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcvn2\" (UniqueName: \"kubernetes.io/projected/f8ee8baf-fdef-4896-9369-b72a1778c36a-kube-api-access-wcvn2\") pod \"calico-typha-845858f8bc-dtwhz\" (UID: \"f8ee8baf-fdef-4896-9369-b72a1778c36a\") " pod="calico-system/calico-typha-845858f8bc-dtwhz" Jan 30 13:56:20.117156 kubelet[2727]: I0130 13:56:20.114079 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8ee8baf-fdef-4896-9369-b72a1778c36a-tigera-ca-bundle\") pod \"calico-typha-845858f8bc-dtwhz\" (UID: \"f8ee8baf-fdef-4896-9369-b72a1778c36a\") " pod="calico-system/calico-typha-845858f8bc-dtwhz" Jan 30 13:56:20.117156 kubelet[2727]: I0130 13:56:20.114116 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f8ee8baf-fdef-4896-9369-b72a1778c36a-typha-certs\") pod \"calico-typha-845858f8bc-dtwhz\" (UID: \"f8ee8baf-fdef-4896-9369-b72a1778c36a\") " pod="calico-system/calico-typha-845858f8bc-dtwhz" Jan 30 13:56:20.172879 kubelet[2727]: I0130 13:56:20.172823 2727 topology_manager.go:215] "Topology Admit Handler" podUID="154b3faf-3122-49e2-8769-6e33faef8fe5" podNamespace="calico-system" podName="calico-node-4flcj" Jan 30 13:56:20.315720 kubelet[2727]: I0130 13:56:20.315518 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmqd\" (UniqueName: \"kubernetes.io/projected/154b3faf-3122-49e2-8769-6e33faef8fe5-kube-api-access-spmqd\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.315720 kubelet[2727]: I0130 13:56:20.315596 2727 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-lib-modules\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.315720 kubelet[2727]: I0130 13:56:20.315632 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-policysync\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.315720 kubelet[2727]: I0130 13:56:20.315662 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/154b3faf-3122-49e2-8769-6e33faef8fe5-tigera-ca-bundle\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.315720 kubelet[2727]: I0130 13:56:20.315689 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-var-run-calico\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.317693 kubelet[2727]: I0130 13:56:20.315725 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-bin-dir\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.317693 kubelet[2727]: I0130 13:56:20.315752 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-xtables-lock\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.317693 kubelet[2727]: I0130 13:56:20.315779 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/154b3faf-3122-49e2-8769-6e33faef8fe5-node-certs\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.317693 kubelet[2727]: I0130 13:56:20.315841 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-log-dir\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.317693 kubelet[2727]: I0130 13:56:20.315874 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-net-dir\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.317943 kubelet[2727]: I0130 13:56:20.315918 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" 
(UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-var-lib-calico\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.317943 kubelet[2727]: I0130 13:56:20.315945 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-flexvol-driver-host\") pod \"calico-node-4flcj\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " pod="calico-system/calico-node-4flcj" Jan 30 13:56:20.322410 kubelet[2727]: E0130 13:56:20.320773 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:20.342945 containerd[1581]: time="2025-01-30T13:56:20.342879783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-845858f8bc-dtwhz,Uid:f8ee8baf-fdef-4896-9369-b72a1778c36a,Namespace:calico-system,Attempt:0,}" Jan 30 13:56:20.434221 containerd[1581]: time="2025-01-30T13:56:20.422187861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:20.434221 containerd[1581]: time="2025-01-30T13:56:20.422296291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:20.434221 containerd[1581]: time="2025-01-30T13:56:20.422314742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:20.434221 containerd[1581]: time="2025-01-30T13:56:20.424208939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:20.452707 kubelet[2727]: I0130 13:56:20.450008 2727 topology_manager.go:215] "Topology Admit Handler" podUID="823691bc-ea65-4b7b-a6e1-f21ba2308d6d" podNamespace="calico-system" podName="csi-node-driver-rzzqf" Jan 30 13:56:20.452707 kubelet[2727]: E0130 13:56:20.452583 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzzqf" podUID="823691bc-ea65-4b7b-a6e1-f21ba2308d6d" Jan 30 13:56:20.478137 kubelet[2727]: E0130 13:56:20.474072 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.478137 kubelet[2727]: W0130 13:56:20.474136 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.478137 kubelet[2727]: E0130 13:56:20.474194 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.488484 kubelet[2727]: E0130 13:56:20.482414 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.488484 kubelet[2727]: W0130 13:56:20.482443 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.488484 kubelet[2727]: E0130 13:56:20.482470 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.488484 kubelet[2727]: E0130 13:56:20.487802 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:20.500341 containerd[1581]: time="2025-01-30T13:56:20.499407888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4flcj,Uid:154b3faf-3122-49e2-8769-6e33faef8fe5,Namespace:calico-system,Attempt:0,}" Jan 30 13:56:20.506332 kubelet[2727]: E0130 13:56:20.506293 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.506332 kubelet[2727]: W0130 13:56:20.506327 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.506732 kubelet[2727]: E0130 13:56:20.506354 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.513370 kubelet[2727]: E0130 13:56:20.513318 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.513370 kubelet[2727]: W0130 13:56:20.513361 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.513631 kubelet[2727]: E0130 13:56:20.513399 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.517446 kubelet[2727]: E0130 13:56:20.517313 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.517446 kubelet[2727]: W0130 13:56:20.517349 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.517446 kubelet[2727]: E0130 13:56:20.517385 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.531707 kubelet[2727]: E0130 13:56:20.531335 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.531707 kubelet[2727]: W0130 13:56:20.531370 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.531707 kubelet[2727]: E0130 13:56:20.531407 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.533479 kubelet[2727]: E0130 13:56:20.533292 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.533479 kubelet[2727]: W0130 13:56:20.533324 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.533479 kubelet[2727]: E0130 13:56:20.533361 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.537755 kubelet[2727]: E0130 13:56:20.535533 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.537755 kubelet[2727]: W0130 13:56:20.535571 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.537755 kubelet[2727]: E0130 13:56:20.535609 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.545722 kubelet[2727]: E0130 13:56:20.545303 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.545722 kubelet[2727]: W0130 13:56:20.545343 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.545722 kubelet[2727]: E0130 13:56:20.545379 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.547800 kubelet[2727]: E0130 13:56:20.547336 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.547800 kubelet[2727]: W0130 13:56:20.547370 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.547800 kubelet[2727]: E0130 13:56:20.547405 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.548801 kubelet[2727]: E0130 13:56:20.547930 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.548801 kubelet[2727]: W0130 13:56:20.547950 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.548801 kubelet[2727]: E0130 13:56:20.547988 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.548801 kubelet[2727]: E0130 13:56:20.548199 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.548801 kubelet[2727]: W0130 13:56:20.548208 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.548801 kubelet[2727]: E0130 13:56:20.548221 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.548801 kubelet[2727]: E0130 13:56:20.548565 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.548801 kubelet[2727]: W0130 13:56:20.548579 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.548801 kubelet[2727]: E0130 13:56:20.548598 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.551702 kubelet[2727]: E0130 13:56:20.548868 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.551702 kubelet[2727]: W0130 13:56:20.548879 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.551702 kubelet[2727]: E0130 13:56:20.548893 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.551702 kubelet[2727]: E0130 13:56:20.549208 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.551702 kubelet[2727]: W0130 13:56:20.549219 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.551702 kubelet[2727]: E0130 13:56:20.549234 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.551702 kubelet[2727]: E0130 13:56:20.549442 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.551702 kubelet[2727]: W0130 13:56:20.549455 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.551702 kubelet[2727]: E0130 13:56:20.549469 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.551702 kubelet[2727]: E0130 13:56:20.549701 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.552927 kubelet[2727]: W0130 13:56:20.549713 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.552927 kubelet[2727]: E0130 13:56:20.549727 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.552927 kubelet[2727]: E0130 13:56:20.549907 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.552927 kubelet[2727]: W0130 13:56:20.549918 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.552927 kubelet[2727]: E0130 13:56:20.549927 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.552927 kubelet[2727]: E0130 13:56:20.550408 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.552927 kubelet[2727]: W0130 13:56:20.550420 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.552927 kubelet[2727]: E0130 13:56:20.550436 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.552927 kubelet[2727]: E0130 13:56:20.550839 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.552927 kubelet[2727]: W0130 13:56:20.550854 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.556441 kubelet[2727]: E0130 13:56:20.550870 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.556441 kubelet[2727]: E0130 13:56:20.551066 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.556441 kubelet[2727]: W0130 13:56:20.551073 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.556441 kubelet[2727]: E0130 13:56:20.551083 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.556441 kubelet[2727]: E0130 13:56:20.551961 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.556441 kubelet[2727]: W0130 13:56:20.551977 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.556441 kubelet[2727]: E0130 13:56:20.551993 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.556441 kubelet[2727]: E0130 13:56:20.552678 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.556441 kubelet[2727]: W0130 13:56:20.552696 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.556441 kubelet[2727]: E0130 13:56:20.552713 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.558143 kubelet[2727]: I0130 13:56:20.552766 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/823691bc-ea65-4b7b-a6e1-f21ba2308d6d-varrun\") pod \"csi-node-driver-rzzqf\" (UID: \"823691bc-ea65-4b7b-a6e1-f21ba2308d6d\") " pod="calico-system/csi-node-driver-rzzqf" Jan 30 13:56:20.558143 kubelet[2727]: E0130 13:56:20.553485 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.558143 kubelet[2727]: W0130 13:56:20.553503 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.558143 kubelet[2727]: E0130 13:56:20.553527 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.558143 kubelet[2727]: E0130 13:56:20.553738 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.558143 kubelet[2727]: W0130 13:56:20.553749 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.558143 kubelet[2727]: E0130 13:56:20.553783 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.558143 kubelet[2727]: E0130 13:56:20.555263 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.558143 kubelet[2727]: W0130 13:56:20.555280 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.560703 kubelet[2727]: E0130 13:56:20.555294 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.560703 kubelet[2727]: I0130 13:56:20.555339 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/823691bc-ea65-4b7b-a6e1-f21ba2308d6d-registration-dir\") pod \"csi-node-driver-rzzqf\" (UID: \"823691bc-ea65-4b7b-a6e1-f21ba2308d6d\") " pod="calico-system/csi-node-driver-rzzqf" Jan 30 13:56:20.560703 kubelet[2727]: E0130 13:56:20.555646 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.560703 kubelet[2727]: W0130 13:56:20.555660 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.560703 kubelet[2727]: E0130 13:56:20.555686 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.560703 kubelet[2727]: I0130 13:56:20.555709 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/823691bc-ea65-4b7b-a6e1-f21ba2308d6d-socket-dir\") pod \"csi-node-driver-rzzqf\" (UID: \"823691bc-ea65-4b7b-a6e1-f21ba2308d6d\") " pod="calico-system/csi-node-driver-rzzqf" Jan 30 13:56:20.560703 kubelet[2727]: E0130 13:56:20.556769 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.560703 kubelet[2727]: W0130 13:56:20.557607 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.561091 kubelet[2727]: E0130 13:56:20.557663 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.561091 kubelet[2727]: E0130 13:56:20.557916 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.561091 kubelet[2727]: W0130 13:56:20.557927 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.561091 kubelet[2727]: E0130 13:56:20.558027 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.561091 kubelet[2727]: E0130 13:56:20.558189 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.561091 kubelet[2727]: W0130 13:56:20.558198 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.561091 kubelet[2727]: E0130 13:56:20.558410 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.561091 kubelet[2727]: I0130 13:56:20.558470 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8pdb\" (UniqueName: \"kubernetes.io/projected/823691bc-ea65-4b7b-a6e1-f21ba2308d6d-kube-api-access-d8pdb\") pod \"csi-node-driver-rzzqf\" (UID: \"823691bc-ea65-4b7b-a6e1-f21ba2308d6d\") " pod="calico-system/csi-node-driver-rzzqf" Jan 30 13:56:20.561091 kubelet[2727]: E0130 13:56:20.558532 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.565105 kubelet[2727]: W0130 13:56:20.558546 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.565105 kubelet[2727]: E0130 13:56:20.558578 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.565105 kubelet[2727]: E0130 13:56:20.558853 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.565105 kubelet[2727]: W0130 13:56:20.558867 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.565105 kubelet[2727]: E0130 13:56:20.559863 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.565105 kubelet[2727]: E0130 13:56:20.560608 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.565105 kubelet[2727]: W0130 13:56:20.560624 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.565105 kubelet[2727]: E0130 13:56:20.560645 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.565105 kubelet[2727]: I0130 13:56:20.560678 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/823691bc-ea65-4b7b-a6e1-f21ba2308d6d-kubelet-dir\") pod \"csi-node-driver-rzzqf\" (UID: \"823691bc-ea65-4b7b-a6e1-f21ba2308d6d\") " pod="calico-system/csi-node-driver-rzzqf" Jan 30 13:56:20.567731 kubelet[2727]: E0130 13:56:20.561268 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.567731 kubelet[2727]: W0130 13:56:20.561289 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.567731 kubelet[2727]: E0130 13:56:20.561312 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.567731 kubelet[2727]: E0130 13:56:20.564529 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.567731 kubelet[2727]: W0130 13:56:20.564554 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.567731 kubelet[2727]: E0130 13:56:20.564660 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.567731 kubelet[2727]: E0130 13:56:20.564953 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.567731 kubelet[2727]: W0130 13:56:20.564965 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.567731 kubelet[2727]: E0130 13:56:20.564983 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.567731 kubelet[2727]: E0130 13:56:20.565320 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.570409 kubelet[2727]: W0130 13:56:20.565337 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.570409 kubelet[2727]: E0130 13:56:20.565354 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.655743 containerd[1581]: time="2025-01-30T13:56:20.654630414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:20.655743 containerd[1581]: time="2025-01-30T13:56:20.654724703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:20.655743 containerd[1581]: time="2025-01-30T13:56:20.654743036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:20.655743 containerd[1581]: time="2025-01-30T13:56:20.654883217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:20.662287 kubelet[2727]: E0130 13:56:20.661862 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.662287 kubelet[2727]: W0130 13:56:20.661903 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.662287 kubelet[2727]: E0130 13:56:20.661937 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.662287 kubelet[2727]: E0130 13:56:20.662259 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.662287 kubelet[2727]: W0130 13:56:20.662275 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.662287 kubelet[2727]: E0130 13:56:20.662299 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.664913 kubelet[2727]: E0130 13:56:20.662609 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.664913 kubelet[2727]: W0130 13:56:20.662626 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.664913 kubelet[2727]: E0130 13:56:20.662662 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.664913 kubelet[2727]: E0130 13:56:20.663048 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.664913 kubelet[2727]: W0130 13:56:20.663071 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.664913 kubelet[2727]: E0130 13:56:20.663094 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.664913 kubelet[2727]: E0130 13:56:20.663427 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.664913 kubelet[2727]: W0130 13:56:20.663443 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.664913 kubelet[2727]: E0130 13:56:20.663484 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.664913 kubelet[2727]: E0130 13:56:20.663958 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.667935 kubelet[2727]: W0130 13:56:20.663976 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.667935 kubelet[2727]: E0130 13:56:20.664090 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.667935 kubelet[2727]: E0130 13:56:20.664343 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.667935 kubelet[2727]: W0130 13:56:20.664367 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.667935 kubelet[2727]: E0130 13:56:20.664410 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.667935 kubelet[2727]: E0130 13:56:20.664691 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.667935 kubelet[2727]: W0130 13:56:20.664708 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.667935 kubelet[2727]: E0130 13:56:20.664813 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.667935 kubelet[2727]: E0130 13:56:20.665138 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.667935 kubelet[2727]: W0130 13:56:20.665231 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.671044 kubelet[2727]: E0130 13:56:20.665408 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.671044 kubelet[2727]: E0130 13:56:20.665585 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.671044 kubelet[2727]: W0130 13:56:20.665598 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.671044 kubelet[2727]: E0130 13:56:20.665721 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.671044 kubelet[2727]: E0130 13:56:20.667093 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.671044 kubelet[2727]: W0130 13:56:20.667150 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.692160 kubelet[2727]: E0130 13:56:20.692085 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.693400 kubelet[2727]: E0130 13:56:20.692649 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.693400 kubelet[2727]: W0130 13:56:20.692678 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.694623 kubelet[2727]: E0130 13:56:20.694573 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.694623 kubelet[2727]: W0130 13:56:20.694626 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.698204 kubelet[2727]: E0130 13:56:20.696764 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.698204 kubelet[2727]: W0130 13:56:20.696809 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.703219 kubelet[2727]: E0130 13:56:20.702084 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.703219 kubelet[2727]: W0130 13:56:20.702136 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.703219 kubelet[2727]: E0130 13:56:20.702193 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.713268 kubelet[2727]: E0130 13:56:20.712447 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.713268 kubelet[2727]: E0130 13:56:20.712651 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.713268 kubelet[2727]: E0130 13:56:20.712688 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.713268 kubelet[2727]: E0130 13:56:20.712779 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.713268 kubelet[2727]: W0130 13:56:20.712794 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.713268 kubelet[2727]: E0130 13:56:20.712816 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.718401 kubelet[2727]: E0130 13:56:20.717569 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.718401 kubelet[2727]: W0130 13:56:20.717609 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.718401 kubelet[2727]: E0130 13:56:20.717665 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.757491 kubelet[2727]: E0130 13:56:20.757438 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.757491 kubelet[2727]: W0130 13:56:20.757484 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.757491 kubelet[2727]: E0130 13:56:20.757530 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.761833 kubelet[2727]: E0130 13:56:20.761328 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.761833 kubelet[2727]: W0130 13:56:20.761368 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.761833 kubelet[2727]: E0130 13:56:20.761405 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.769297 kubelet[2727]: E0130 13:56:20.768943 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.769297 kubelet[2727]: W0130 13:56:20.768977 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.769297 kubelet[2727]: E0130 13:56:20.769014 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.776255 kubelet[2727]: E0130 13:56:20.774075 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.776255 kubelet[2727]: W0130 13:56:20.774116 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.776255 kubelet[2727]: E0130 13:56:20.774153 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.784136 kubelet[2727]: E0130 13:56:20.782435 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.784136 kubelet[2727]: W0130 13:56:20.782472 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.789354 kubelet[2727]: E0130 13:56:20.789305 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.794918 kubelet[2727]: E0130 13:56:20.792263 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.794918 kubelet[2727]: W0130 13:56:20.792302 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.797273 kubelet[2727]: E0130 13:56:20.796853 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.804634 kubelet[2727]: E0130 13:56:20.804584 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.804634 kubelet[2727]: W0130 13:56:20.804624 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.805664 kubelet[2727]: E0130 13:56:20.805379 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.805664 kubelet[2727]: W0130 13:56:20.805407 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.805664 kubelet[2727]: E0130 13:56:20.805438 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.807107 kubelet[2727]: E0130 13:56:20.806119 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:20.808588 kubelet[2727]: E0130 13:56:20.808246 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:20.809041 kubelet[2727]: W0130 13:56:20.809003 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:20.809156 kubelet[2727]: E0130 13:56:20.809051 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:20.830721 containerd[1581]: time="2025-01-30T13:56:20.828624609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-845858f8bc-dtwhz,Uid:f8ee8baf-fdef-4896-9369-b72a1778c36a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\"" Jan 30 13:56:20.847692 kubelet[2727]: E0130 13:56:20.847653 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:20.877750 containerd[1581]: time="2025-01-30T13:56:20.877692373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:56:20.891845 containerd[1581]: time="2025-01-30T13:56:20.891728043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4flcj,Uid:154b3faf-3122-49e2-8769-6e33faef8fe5,Namespace:calico-system,Attempt:0,} returns sandbox id \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\"" Jan 30 13:56:20.893682 kubelet[2727]: E0130 13:56:20.893044 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:21.958321 kubelet[2727]: E0130 13:56:21.958056 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzzqf" podUID="823691bc-ea65-4b7b-a6e1-f21ba2308d6d" Jan 30 13:56:22.272035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503243786.mount: Deactivated successfully. 
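The repeated driver-call.go and plugins.go messages above are kubelet's periodic FlexVolume probe: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet (the flexvol-driver init container that installs it only appears later in this log), so the call produces empty output and the JSON unmarshal fails. A FlexVolume driver is simply an executable that answers init (and later mount/unmount) with a small JSON document on stdout. The sketch below illustrates that calling convention in Python purely for illustration; the real nodeagent~uds driver installed by pod2daemon-flexvol is a compiled binary, and its internals are not taken from this log.

```python
#!/usr/bin/env python3
# Minimal illustration of the FlexVolume driver contract the kubelet probe
# above is exercising: kubelet execs the driver with a command name and
# expects a JSON status object on stdout. Sketch only; the real nodeagent~uds
# driver is a compiled binary installed by the flexvol-driver init container.
import json
import sys

def main() -> int:
    command = sys.argv[1] if len(sys.argv) > 1 else ""
    if command == "init":
        # Empty stdout is what yields "unexpected end of JSON input" in the
        # log; a working driver must print a status document like this one.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Commands the driver does not implement are reported as unsupported.
    print(json.dumps({"status": "Not supported",
                      "message": f"unsupported command: {command}"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```

Once the pod2daemon-flexvol init container drops the real uds binary into that directory, these probe errors should stop appearing.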
Jan 30 13:56:23.154913 containerd[1581]: time="2025-01-30T13:56:23.154837854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:23.157008 containerd[1581]: time="2025-01-30T13:56:23.156915778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 13:56:23.157339 containerd[1581]: time="2025-01-30T13:56:23.157131972Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:23.160034 containerd[1581]: time="2025-01-30T13:56:23.159937310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:23.161747 containerd[1581]: time="2025-01-30T13:56:23.161187742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.283412369s" Jan 30 13:56:23.161747 containerd[1581]: time="2025-01-30T13:56:23.161247039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:56:23.164623 containerd[1581]: time="2025-01-30T13:56:23.163872641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:56:23.203571 containerd[1581]: time="2025-01-30T13:56:23.203489955Z" level=info msg="CreateContainer within sandbox \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:56:23.227790 containerd[1581]: time="2025-01-30T13:56:23.227585465Z" level=info msg="CreateContainer within sandbox \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\"" Jan 30 13:56:23.231011 containerd[1581]: time="2025-01-30T13:56:23.228956681Z" level=info msg="StartContainer for \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\"" Jan 30 13:56:23.307910 systemd[1]: run-containerd-runc-k8s.io-ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07-runc.suZ5E0.mount: Deactivated successfully. 
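The recurring dns.go:153 "Nameserver limits exceeded" events are a separate, equally benign symptom: kubelet passes at most three nameserver entries through to pod resolv.conf files, and the droplet's resolv.conf evidently lists more (the applied line even contains 67.207.67.2 twice), so the extras are dropped and the event is logged. The sketch below mimics that truncation; the function name and the sample resolv.conf contents are made up for the example, while the three-entry cap is kubelet's documented limit.

```python
#!/usr/bin/env python3
# Rough sketch of the truncation behind the "Nameserver limits exceeded"
# events above. The parsing and sample data are illustrative, not kubelet's
# actual code; the limit of three nameservers is kubelet's documented cap.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str) -> tuple[list[str], list[str]]:
    """Return (kept, omitted) nameserver entries, mirroring the log message."""
    servers = [
        line.split()[1]
        for line in resolv_conf_text.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) > 1
    ]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

if __name__ == "__main__":
    # Hypothetical host resolv.conf with duplicated resolvers, reproducing the
    # applied line "67.207.67.2 67.207.67.3 67.207.67.2" seen in the log.
    sample = "\n".join(
        f"nameserver {ip}"
        for ip in ["67.207.67.2", "67.207.67.3", "67.207.67.2", "67.207.67.3"]
    )
    kept, omitted = applied_nameservers(sample)
    print("applied nameserver line:", " ".join(kept))
    print("omitted:", omitted)
```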
Jan 30 13:56:23.397615 containerd[1581]: time="2025-01-30T13:56:23.397546540Z" level=info msg="StartContainer for \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\" returns successfully" Jan 30 13:56:23.958913 kubelet[2727]: E0130 13:56:23.958844 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzzqf" podUID="823691bc-ea65-4b7b-a6e1-f21ba2308d6d" Jan 30 13:56:24.124240 kubelet[2727]: E0130 13:56:24.122745 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:24.150682 kubelet[2727]: I0130 13:56:24.149779 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-845858f8bc-dtwhz" podStartSLOduration=2.860257949 podStartE2EDuration="5.149752198s" podCreationTimestamp="2025-01-30 13:56:19 +0000 UTC" firstStartedPulling="2025-01-30 13:56:20.873791962 +0000 UTC m=+22.137780346" lastFinishedPulling="2025-01-30 13:56:23.163286199 +0000 UTC m=+24.427274595" observedRunningTime="2025-01-30 13:56:24.145157881 +0000 UTC m=+25.409146290" watchObservedRunningTime="2025-01-30 13:56:24.149752198 +0000 UTC m=+25.413740596" Jan 30 13:56:24.199473 kubelet[2727]: E0130 13:56:24.199301 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.199473 kubelet[2727]: W0130 13:56:24.199343 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.199473 kubelet[2727]: E0130 13:56:24.199376 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.199908 kubelet[2727]: E0130 13:56:24.199829 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.199908 kubelet[2727]: W0130 13:56:24.199852 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.199908 kubelet[2727]: E0130 13:56:24.199875 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.200217 kubelet[2727]: E0130 13:56:24.200196 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.200217 kubelet[2727]: W0130 13:56:24.200216 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.200418 kubelet[2727]: E0130 13:56:24.200236 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:24.200540 kubelet[2727]: E0130 13:56:24.200513 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.200540 kubelet[2727]: W0130 13:56:24.200533 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.200652 kubelet[2727]: E0130 13:56:24.200549 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.200944 kubelet[2727]: E0130 13:56:24.200922 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.200944 kubelet[2727]: W0130 13:56:24.200942 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.201069 kubelet[2727]: E0130 13:56:24.200959 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.201260 kubelet[2727]: E0130 13:56:24.201244 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.201333 kubelet[2727]: W0130 13:56:24.201262 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.201333 kubelet[2727]: E0130 13:56:24.201278 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.201636 kubelet[2727]: E0130 13:56:24.201524 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.201636 kubelet[2727]: W0130 13:56:24.201542 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.201636 kubelet[2727]: E0130 13:56:24.201558 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.201822 kubelet[2727]: E0130 13:56:24.201800 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.201822 kubelet[2727]: W0130 13:56:24.201813 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.201930 kubelet[2727]: E0130 13:56:24.201828 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:24.202211 kubelet[2727]: E0130 13:56:24.202190 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.202211 kubelet[2727]: W0130 13:56:24.202207 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.202348 kubelet[2727]: E0130 13:56:24.202223 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.202479 kubelet[2727]: E0130 13:56:24.202459 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.202479 kubelet[2727]: W0130 13:56:24.202477 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.202594 kubelet[2727]: E0130 13:56:24.202492 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.202714 kubelet[2727]: E0130 13:56:24.202701 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.202746 kubelet[2727]: W0130 13:56:24.202718 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.202746 kubelet[2727]: E0130 13:56:24.202731 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.202985 kubelet[2727]: E0130 13:56:24.202963 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.202985 kubelet[2727]: W0130 13:56:24.202975 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.202985 kubelet[2727]: E0130 13:56:24.202986 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.203224 kubelet[2727]: E0130 13:56:24.203195 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.203224 kubelet[2727]: W0130 13:56:24.203205 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.203224 kubelet[2727]: E0130 13:56:24.203214 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:24.203376 kubelet[2727]: E0130 13:56:24.203366 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.203376 kubelet[2727]: W0130 13:56:24.203373 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.203495 kubelet[2727]: E0130 13:56:24.203381 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.203599 kubelet[2727]: E0130 13:56:24.203585 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.203727 kubelet[2727]: W0130 13:56:24.203601 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.203727 kubelet[2727]: E0130 13:56:24.203615 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.256134 kubelet[2727]: E0130 13:56:24.256024 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.256797 kubelet[2727]: W0130 13:56:24.256368 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.256797 kubelet[2727]: E0130 13:56:24.256433 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.257641 kubelet[2727]: E0130 13:56:24.257605 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.257846 kubelet[2727]: W0130 13:56:24.257753 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.257846 kubelet[2727]: E0130 13:56:24.257798 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.258554 kubelet[2727]: E0130 13:56:24.258433 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.258554 kubelet[2727]: W0130 13:56:24.258450 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.258554 kubelet[2727]: E0130 13:56:24.258476 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:24.258774 kubelet[2727]: E0130 13:56:24.258758 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.258807 kubelet[2727]: W0130 13:56:24.258776 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.258807 kubelet[2727]: E0130 13:56:24.258798 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.259061 kubelet[2727]: E0130 13:56:24.259047 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.259061 kubelet[2727]: W0130 13:56:24.259060 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.259128 kubelet[2727]: E0130 13:56:24.259083 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.259348 kubelet[2727]: E0130 13:56:24.259336 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.259348 kubelet[2727]: W0130 13:56:24.259347 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.259416 kubelet[2727]: E0130 13:56:24.259365 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.259836 kubelet[2727]: E0130 13:56:24.259818 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.259836 kubelet[2727]: W0130 13:56:24.259833 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.259930 kubelet[2727]: E0130 13:56:24.259848 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.260097 kubelet[2727]: E0130 13:56:24.260084 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.260127 kubelet[2727]: W0130 13:56:24.260097 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.260364 kubelet[2727]: E0130 13:56:24.260235 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:24.260449 kubelet[2727]: E0130 13:56:24.260393 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.260449 kubelet[2727]: W0130 13:56:24.260405 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.260562 kubelet[2727]: E0130 13:56:24.260534 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.260672 kubelet[2727]: E0130 13:56:24.260657 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.260711 kubelet[2727]: W0130 13:56:24.260672 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.260759 kubelet[2727]: E0130 13:56:24.260743 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.261016 kubelet[2727]: E0130 13:56:24.261002 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.261072 kubelet[2727]: W0130 13:56:24.261018 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.261072 kubelet[2727]: E0130 13:56:24.261034 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.261245 kubelet[2727]: E0130 13:56:24.261233 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.261285 kubelet[2727]: W0130 13:56:24.261247 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.261285 kubelet[2727]: E0130 13:56:24.261275 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.261606 kubelet[2727]: E0130 13:56:24.261589 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.261673 kubelet[2727]: W0130 13:56:24.261606 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.262195 kubelet[2727]: E0130 13:56:24.261635 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:24.262760 kubelet[2727]: E0130 13:56:24.262739 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.262760 kubelet[2727]: W0130 13:56:24.262755 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.264004 kubelet[2727]: E0130 13:56:24.262771 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.264004 kubelet[2727]: E0130 13:56:24.263030 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.264004 kubelet[2727]: W0130 13:56:24.263040 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.264004 kubelet[2727]: E0130 13:56:24.263052 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.264004 kubelet[2727]: E0130 13:56:24.263248 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.264004 kubelet[2727]: W0130 13:56:24.263257 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.264004 kubelet[2727]: E0130 13:56:24.263266 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.264004 kubelet[2727]: E0130 13:56:24.263756 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.264004 kubelet[2727]: W0130 13:56:24.263768 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.264004 kubelet[2727]: E0130 13:56:24.263779 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:24.264997 kubelet[2727]: E0130 13:56:24.264058 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:24.264997 kubelet[2727]: W0130 13:56:24.264066 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:24.264997 kubelet[2727]: E0130 13:56:24.264077 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:24.587471 containerd[1581]: time="2025-01-30T13:56:24.587338529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:24.590498 containerd[1581]: time="2025-01-30T13:56:24.590432870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:56:24.591194 containerd[1581]: time="2025-01-30T13:56:24.591100891Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:24.593277 containerd[1581]: time="2025-01-30T13:56:24.593143611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:24.594356 containerd[1581]: time="2025-01-30T13:56:24.594005939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.430075668s" Jan 30 13:56:24.594356 containerd[1581]: time="2025-01-30T13:56:24.594049418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:56:24.599678 containerd[1581]: time="2025-01-30T13:56:24.599577060Z" level=info msg="CreateContainer within sandbox \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:56:24.696785 containerd[1581]: time="2025-01-30T13:56:24.695620848Z" level=info msg="CreateContainer within sandbox \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17\"" Jan 30 13:56:24.699030 containerd[1581]: time="2025-01-30T13:56:24.697632203Z" level=info msg="StartContainer for \"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17\"" Jan 30 13:56:24.801513 containerd[1581]: time="2025-01-30T13:56:24.801469189Z" level=info msg="StartContainer for \"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17\" returns successfully" Jan 30 13:56:24.869990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17-rootfs.mount: Deactivated successfully. 
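The pod_startup_latency_tracker entry above for calico-typha-845858f8bc-dtwhz is internally consistent: the end-to-end figure equals watchObservedRunningTime minus podCreationTimestamp, and the SLO figure matches, to within rounding, that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The cross-check below uses the timestamps quoted in the log, truncated from nanoseconds to microseconds; the decomposition is an inference from the printed values, not a quote of kubelet's code.

```python
#!/usr/bin/env python3
# Cross-check of the pod_startup_latency_tracker numbers logged above for
# calico-typha-845858f8bc-dtwhz. Timestamps are copied from the log line and
# truncated to microseconds; the SLO = E2E - pull-window reading is inferred.
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created    = datetime(2025, 1, 30, 13, 56, 19, tzinfo=timezone.utc)  # podCreationTimestamp
first_pull = ts("2025-01-30 13:56:20.873791")  # firstStartedPulling
last_pull  = ts("2025-01-30 13:56:23.163286")  # lastFinishedPulling
observed   = ts("2025-01-30 13:56:24.149752")  # watchObservedRunningTime

e2e = (observed - created).total_seconds()
slo = e2e - (last_pull - first_pull).total_seconds()
print(f"podStartE2EDuration ~= {e2e:.6f}s  (log: 5.149752198s)")
print(f"podStartSLOduration ~= {slo:.6f}s  (log: 2.860257949)")
```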
Jan 30 13:56:24.900461 containerd[1581]: time="2025-01-30T13:56:24.873276992Z" level=info msg="shim disconnected" id=797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17 namespace=k8s.io Jan 30 13:56:24.900461 containerd[1581]: time="2025-01-30T13:56:24.900300045Z" level=warning msg="cleaning up after shim disconnected" id=797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17 namespace=k8s.io Jan 30 13:56:24.900461 containerd[1581]: time="2025-01-30T13:56:24.900322836Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:56:25.127409 kubelet[2727]: E0130 13:56:25.127274 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:25.129201 kubelet[2727]: I0130 13:56:25.128527 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:56:25.129708 kubelet[2727]: E0130 13:56:25.129678 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:25.132501 containerd[1581]: time="2025-01-30T13:56:25.132467732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:56:25.958829 kubelet[2727]: E0130 13:56:25.958241 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzzqf" podUID="823691bc-ea65-4b7b-a6e1-f21ba2308d6d" Jan 30 13:56:27.958344 kubelet[2727]: E0130 13:56:27.958269 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzzqf" podUID="823691bc-ea65-4b7b-a6e1-f21ba2308d6d" Jan 30 13:56:29.290572 containerd[1581]: time="2025-01-30T13:56:29.290483287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:29.292526 containerd[1581]: time="2025-01-30T13:56:29.292413690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:56:29.293462 containerd[1581]: time="2025-01-30T13:56:29.293122000Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:29.295725 containerd[1581]: time="2025-01-30T13:56:29.295686470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:29.296899 containerd[1581]: time="2025-01-30T13:56:29.296845223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.163997557s" Jan 30 13:56:29.296899 containerd[1581]: 
time="2025-01-30T13:56:29.296897631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:56:29.303380 containerd[1581]: time="2025-01-30T13:56:29.303199818Z" level=info msg="CreateContainer within sandbox \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:56:29.347676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1329461639.mount: Deactivated successfully. Jan 30 13:56:29.372778 containerd[1581]: time="2025-01-30T13:56:29.372682358Z" level=info msg="CreateContainer within sandbox \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054\"" Jan 30 13:56:29.375348 containerd[1581]: time="2025-01-30T13:56:29.374115313Z" level=info msg="StartContainer for \"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054\"" Jan 30 13:56:29.493777 containerd[1581]: time="2025-01-30T13:56:29.493704855Z" level=info msg="StartContainer for \"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054\" returns successfully" Jan 30 13:56:29.959062 kubelet[2727]: E0130 13:56:29.958575 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rzzqf" podUID="823691bc-ea65-4b7b-a6e1-f21ba2308d6d" Jan 30 13:56:30.157537 kubelet[2727]: E0130 13:56:30.156757 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:30.168653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054-rootfs.mount: Deactivated successfully. 
Jan 30 13:56:30.172724 containerd[1581]: time="2025-01-30T13:56:30.171960464Z" level=info msg="shim disconnected" id=b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054 namespace=k8s.io Jan 30 13:56:30.172724 containerd[1581]: time="2025-01-30T13:56:30.172061874Z" level=warning msg="cleaning up after shim disconnected" id=b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054 namespace=k8s.io Jan 30 13:56:30.172724 containerd[1581]: time="2025-01-30T13:56:30.172076509Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:56:30.209663 kubelet[2727]: I0130 13:56:30.207806 2727 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:56:30.254726 kubelet[2727]: I0130 13:56:30.254654 2727 topology_manager.go:215] "Topology Admit Handler" podUID="59569bfa-ceee-4967-bb23-bc58916a113d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fnpc6" Jan 30 13:56:30.258207 kubelet[2727]: I0130 13:56:30.258112 2727 topology_manager.go:215] "Topology Admit Handler" podUID="961e4391-08da-49eb-8e7d-aa735452853a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-km89h" Jan 30 13:56:30.280225 kubelet[2727]: I0130 13:56:30.278433 2727 topology_manager.go:215] "Topology Admit Handler" podUID="948fd5df-85f6-4117-955d-ae954df34712" podNamespace="calico-system" podName="calico-kube-controllers-7d4b699786-9bmxf" Jan 30 13:56:30.290193 kubelet[2727]: I0130 13:56:30.289281 2727 topology_manager.go:215] "Topology Admit Handler" podUID="97b02c81-6b32-4a65-8b9e-6d8426a65011" podNamespace="calico-apiserver" podName="calico-apiserver-99cfd4b69-blxcz" Jan 30 13:56:30.305055 kubelet[2727]: I0130 13:56:30.304685 2727 topology_manager.go:215] "Topology Admit Handler" podUID="3d139b6a-d18c-4490-b775-b61437104603" podNamespace="calico-apiserver" podName="calico-apiserver-99cfd4b69-99xnz" Jan 30 13:56:30.308660 kubelet[2727]: I0130 13:56:30.308612 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/961e4391-08da-49eb-8e7d-aa735452853a-config-volume\") pod \"coredns-7db6d8ff4d-km89h\" (UID: \"961e4391-08da-49eb-8e7d-aa735452853a\") " pod="kube-system/coredns-7db6d8ff4d-km89h" Jan 30 13:56:30.308927 kubelet[2727]: I0130 13:56:30.308894 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swdjr\" (UniqueName: \"kubernetes.io/projected/961e4391-08da-49eb-8e7d-aa735452853a-kube-api-access-swdjr\") pod \"coredns-7db6d8ff4d-km89h\" (UID: \"961e4391-08da-49eb-8e7d-aa735452853a\") " pod="kube-system/coredns-7db6d8ff4d-km89h" Jan 30 13:56:30.309098 kubelet[2727]: I0130 13:56:30.309078 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn82x\" (UniqueName: \"kubernetes.io/projected/59569bfa-ceee-4967-bb23-bc58916a113d-kube-api-access-xn82x\") pod \"coredns-7db6d8ff4d-fnpc6\" (UID: \"59569bfa-ceee-4967-bb23-bc58916a113d\") " pod="kube-system/coredns-7db6d8ff4d-fnpc6" Jan 30 13:56:30.309243 kubelet[2727]: I0130 13:56:30.309228 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59569bfa-ceee-4967-bb23-bc58916a113d-config-volume\") pod \"coredns-7db6d8ff4d-fnpc6\" (UID: \"59569bfa-ceee-4967-bb23-bc58916a113d\") " pod="kube-system/coredns-7db6d8ff4d-fnpc6" Jan 30 13:56:30.410770 kubelet[2727]: I0130 13:56:30.410683 2727 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/948fd5df-85f6-4117-955d-ae954df34712-tigera-ca-bundle\") pod \"calico-kube-controllers-7d4b699786-9bmxf\" (UID: \"948fd5df-85f6-4117-955d-ae954df34712\") " pod="calico-system/calico-kube-controllers-7d4b699786-9bmxf" Jan 30 13:56:30.410770 kubelet[2727]: I0130 13:56:30.410751 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/97b02c81-6b32-4a65-8b9e-6d8426a65011-calico-apiserver-certs\") pod \"calico-apiserver-99cfd4b69-blxcz\" (UID: \"97b02c81-6b32-4a65-8b9e-6d8426a65011\") " pod="calico-apiserver/calico-apiserver-99cfd4b69-blxcz" Jan 30 13:56:30.411034 kubelet[2727]: I0130 13:56:30.410859 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d58cn\" (UniqueName: \"kubernetes.io/projected/97b02c81-6b32-4a65-8b9e-6d8426a65011-kube-api-access-d58cn\") pod \"calico-apiserver-99cfd4b69-blxcz\" (UID: \"97b02c81-6b32-4a65-8b9e-6d8426a65011\") " pod="calico-apiserver/calico-apiserver-99cfd4b69-blxcz" Jan 30 13:56:30.411034 kubelet[2727]: I0130 13:56:30.410894 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h57j7\" (UniqueName: \"kubernetes.io/projected/3d139b6a-d18c-4490-b775-b61437104603-kube-api-access-h57j7\") pod \"calico-apiserver-99cfd4b69-99xnz\" (UID: \"3d139b6a-d18c-4490-b775-b61437104603\") " pod="calico-apiserver/calico-apiserver-99cfd4b69-99xnz" Jan 30 13:56:30.411034 kubelet[2727]: I0130 13:56:30.410914 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzvch\" (UniqueName: \"kubernetes.io/projected/948fd5df-85f6-4117-955d-ae954df34712-kube-api-access-nzvch\") pod \"calico-kube-controllers-7d4b699786-9bmxf\" (UID: \"948fd5df-85f6-4117-955d-ae954df34712\") " pod="calico-system/calico-kube-controllers-7d4b699786-9bmxf" Jan 30 13:56:30.411034 kubelet[2727]: I0130 13:56:30.410937 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3d139b6a-d18c-4490-b775-b61437104603-calico-apiserver-certs\") pod \"calico-apiserver-99cfd4b69-99xnz\" (UID: \"3d139b6a-d18c-4490-b775-b61437104603\") " pod="calico-apiserver/calico-apiserver-99cfd4b69-99xnz" Jan 30 13:56:30.576855 kubelet[2727]: E0130 13:56:30.576340 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:30.579014 containerd[1581]: time="2025-01-30T13:56:30.578955509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fnpc6,Uid:59569bfa-ceee-4967-bb23-bc58916a113d,Namespace:kube-system,Attempt:0,}" Jan 30 13:56:30.600566 containerd[1581]: time="2025-01-30T13:56:30.600485652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4b699786-9bmxf,Uid:948fd5df-85f6-4117-955d-ae954df34712,Namespace:calico-system,Attempt:0,}" Jan 30 13:56:30.607705 kubelet[2727]: E0130 13:56:30.607669 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 
13:56:30.612011 containerd[1581]: time="2025-01-30T13:56:30.611779879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99cfd4b69-blxcz,Uid:97b02c81-6b32-4a65-8b9e-6d8426a65011,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:56:30.618652 containerd[1581]: time="2025-01-30T13:56:30.617758725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km89h,Uid:961e4391-08da-49eb-8e7d-aa735452853a,Namespace:kube-system,Attempt:0,}" Jan 30 13:56:30.623442 containerd[1581]: time="2025-01-30T13:56:30.623386502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99cfd4b69-99xnz,Uid:3d139b6a-d18c-4490-b775-b61437104603,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:56:31.026818 containerd[1581]: time="2025-01-30T13:56:31.026617409Z" level=error msg="Failed to destroy network for sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.034521 containerd[1581]: time="2025-01-30T13:56:31.034443679Z" level=error msg="Failed to destroy network for sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.039234 containerd[1581]: time="2025-01-30T13:56:31.039079669Z" level=error msg="encountered an error cleaning up failed sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.041765 containerd[1581]: time="2025-01-30T13:56:31.041523891Z" level=error msg="encountered an error cleaning up failed sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.066075 containerd[1581]: time="2025-01-30T13:56:31.065858948Z" level=error msg="Failed to destroy network for sandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.067318 containerd[1581]: time="2025-01-30T13:56:31.066698317Z" level=error msg="encountered an error cleaning up failed sandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.067318 containerd[1581]: time="2025-01-30T13:56:31.066970522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99cfd4b69-blxcz,Uid:97b02c81-6b32-4a65-8b9e-6d8426a65011,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.067610 containerd[1581]: time="2025-01-30T13:56:31.067423591Z" level=error msg="Failed to destroy network for sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.068790 containerd[1581]: time="2025-01-30T13:56:31.067970985Z" level=error msg="encountered an error cleaning up failed sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.068790 containerd[1581]: time="2025-01-30T13:56:31.068047346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km89h,Uid:961e4391-08da-49eb-8e7d-aa735452853a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.075939 containerd[1581]: time="2025-01-30T13:56:31.075454882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4b699786-9bmxf,Uid:948fd5df-85f6-4117-955d-ae954df34712,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.075939 containerd[1581]: time="2025-01-30T13:56:31.075576040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99cfd4b69-99xnz,Uid:3d139b6a-d18c-4490-b775-b61437104603,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.075939 containerd[1581]: time="2025-01-30T13:56:31.075756320Z" level=error msg="Failed to destroy network for sandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.076782 containerd[1581]: time="2025-01-30T13:56:31.076728971Z" level=error msg="encountered an error cleaning up failed sandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 30 13:56:31.076992 containerd[1581]: time="2025-01-30T13:56:31.076952478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fnpc6,Uid:59569bfa-ceee-4967-bb23-bc58916a113d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.077245 kubelet[2727]: E0130 13:56:31.077068 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.077245 kubelet[2727]: E0130 13:56:31.077225 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99cfd4b69-99xnz" Jan 30 13:56:31.079284 kubelet[2727]: E0130 13:56:31.077260 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99cfd4b69-99xnz" Jan 30 13:56:31.079284 kubelet[2727]: E0130 13:56:31.077326 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99cfd4b69-99xnz_calico-apiserver(3d139b6a-d18c-4490-b775-b61437104603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99cfd4b69-99xnz_calico-apiserver(3d139b6a-d18c-4490-b775-b61437104603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99cfd4b69-99xnz" podUID="3d139b6a-d18c-4490-b775-b61437104603" Jan 30 13:56:31.079284 kubelet[2727]: E0130 13:56:31.077874 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.079424 kubelet[2727]: E0130 13:56:31.077962 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fnpc6" Jan 30 13:56:31.079424 kubelet[2727]: E0130 13:56:31.077989 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-fnpc6" Jan 30 13:56:31.079424 kubelet[2727]: E0130 13:56:31.078028 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-fnpc6_kube-system(59569bfa-ceee-4967-bb23-bc58916a113d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-fnpc6_kube-system(59569bfa-ceee-4967-bb23-bc58916a113d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fnpc6" podUID="59569bfa-ceee-4967-bb23-bc58916a113d" Jan 30 13:56:31.079527 kubelet[2727]: E0130 13:56:31.078075 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.079527 kubelet[2727]: E0130 13:56:31.078095 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-km89h" Jan 30 13:56:31.079527 kubelet[2727]: E0130 13:56:31.078109 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-km89h" Jan 30 13:56:31.079708 kubelet[2727]: E0130 13:56:31.078133 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-km89h_kube-system(961e4391-08da-49eb-8e7d-aa735452853a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-km89h_kube-system(961e4391-08da-49eb-8e7d-aa735452853a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-km89h" podUID="961e4391-08da-49eb-8e7d-aa735452853a" Jan 30 13:56:31.079708 kubelet[2727]: E0130 13:56:31.078158 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.079708 kubelet[2727]: E0130 13:56:31.078257 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99cfd4b69-blxcz" Jan 30 13:56:31.080007 kubelet[2727]: E0130 13:56:31.078272 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-99cfd4b69-blxcz" Jan 30 13:56:31.080007 kubelet[2727]: E0130 13:56:31.078299 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-99cfd4b69-blxcz_calico-apiserver(97b02c81-6b32-4a65-8b9e-6d8426a65011)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-99cfd4b69-blxcz_calico-apiserver(97b02c81-6b32-4a65-8b9e-6d8426a65011)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99cfd4b69-blxcz" podUID="97b02c81-6b32-4a65-8b9e-6d8426a65011" Jan 30 13:56:31.080007 kubelet[2727]: E0130 13:56:31.078309 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.080123 kubelet[2727]: E0130 13:56:31.078379 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4b699786-9bmxf" Jan 30 13:56:31.080123 
kubelet[2727]: E0130 13:56:31.079675 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d4b699786-9bmxf" Jan 30 13:56:31.080123 kubelet[2727]: E0130 13:56:31.079810 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d4b699786-9bmxf_calico-system(948fd5df-85f6-4117-955d-ae954df34712)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d4b699786-9bmxf_calico-system(948fd5df-85f6-4117-955d-ae954df34712)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d4b699786-9bmxf" podUID="948fd5df-85f6-4117-955d-ae954df34712" Jan 30 13:56:31.162806 kubelet[2727]: I0130 13:56:31.161603 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:31.169668 kubelet[2727]: E0130 13:56:31.169611 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:31.174301 containerd[1581]: time="2025-01-30T13:56:31.172966163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:56:31.178157 containerd[1581]: time="2025-01-30T13:56:31.176346948Z" level=info msg="StopPodSandbox for \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\"" Jan 30 13:56:31.181207 kubelet[2727]: I0130 13:56:31.180130 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:56:31.182239 containerd[1581]: time="2025-01-30T13:56:31.182137203Z" level=info msg="Ensure that sandbox e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21 in task-service has been cleanup successfully" Jan 30 13:56:31.187213 containerd[1581]: time="2025-01-30T13:56:31.186122305Z" level=info msg="StopPodSandbox for \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\"" Jan 30 13:56:31.191056 containerd[1581]: time="2025-01-30T13:56:31.189462913Z" level=info msg="Ensure that sandbox 09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d in task-service has been cleanup successfully" Jan 30 13:56:31.199196 kubelet[2727]: I0130 13:56:31.198478 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:31.199936 containerd[1581]: time="2025-01-30T13:56:31.199430523Z" level=info msg="StopPodSandbox for \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\"" Jan 30 13:56:31.201144 containerd[1581]: time="2025-01-30T13:56:31.200838193Z" level=info msg="Ensure that sandbox 
376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58 in task-service has been cleanup successfully" Jan 30 13:56:31.221416 kubelet[2727]: I0130 13:56:31.220997 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:31.223339 containerd[1581]: time="2025-01-30T13:56:31.223077147Z" level=info msg="StopPodSandbox for \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\"" Jan 30 13:56:31.226900 containerd[1581]: time="2025-01-30T13:56:31.225611164Z" level=info msg="Ensure that sandbox 29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6 in task-service has been cleanup successfully" Jan 30 13:56:31.249559 kubelet[2727]: I0130 13:56:31.249513 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:56:31.257466 containerd[1581]: time="2025-01-30T13:56:31.255410403Z" level=info msg="StopPodSandbox for \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\"" Jan 30 13:56:31.258475 containerd[1581]: time="2025-01-30T13:56:31.258229209Z" level=info msg="Ensure that sandbox f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18 in task-service has been cleanup successfully" Jan 30 13:56:31.352129 containerd[1581]: time="2025-01-30T13:56:31.349722771Z" level=error msg="StopPodSandbox for \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\" failed" error="failed to destroy network for sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.352411 kubelet[2727]: E0130 13:56:31.350673 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:56:31.352411 kubelet[2727]: E0130 13:56:31.350946 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d"} Jan 30 13:56:31.352411 kubelet[2727]: E0130 13:56:31.351273 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d139b6a-d18c-4490-b775-b61437104603\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:31.352411 kubelet[2727]: E0130 13:56:31.351482 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d139b6a-d18c-4490-b775-b61437104603\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99cfd4b69-99xnz" podUID="3d139b6a-d18c-4490-b775-b61437104603" Jan 30 13:56:31.372038 containerd[1581]: time="2025-01-30T13:56:31.371941892Z" level=error msg="StopPodSandbox for \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\" failed" error="failed to destroy network for sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.372343 kubelet[2727]: E0130 13:56:31.372278 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:31.372343 kubelet[2727]: E0130 13:56:31.372334 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6"} Jan 30 13:56:31.372547 kubelet[2727]: E0130 13:56:31.372372 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"948fd5df-85f6-4117-955d-ae954df34712\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:31.372547 kubelet[2727]: E0130 13:56:31.372401 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"948fd5df-85f6-4117-955d-ae954df34712\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d4b699786-9bmxf" podUID="948fd5df-85f6-4117-955d-ae954df34712" Jan 30 13:56:31.373532 containerd[1581]: time="2025-01-30T13:56:31.373376475Z" level=error msg="StopPodSandbox for \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\" failed" error="failed to destroy network for sandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.374332 kubelet[2727]: E0130 13:56:31.373828 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:31.374332 kubelet[2727]: E0130 13:56:31.373896 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58"} Jan 30 13:56:31.374332 kubelet[2727]: E0130 13:56:31.373944 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97b02c81-6b32-4a65-8b9e-6d8426a65011\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:31.374332 kubelet[2727]: E0130 13:56:31.373979 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97b02c81-6b32-4a65-8b9e-6d8426a65011\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-99cfd4b69-blxcz" podUID="97b02c81-6b32-4a65-8b9e-6d8426a65011" Jan 30 13:56:31.376078 containerd[1581]: time="2025-01-30T13:56:31.375996531Z" level=error msg="StopPodSandbox for \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\" failed" error="failed to destroy network for sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.376429 kubelet[2727]: E0130 13:56:31.376362 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:31.376530 kubelet[2727]: E0130 13:56:31.376448 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21"} Jan 30 13:56:31.376530 kubelet[2727]: E0130 13:56:31.376496 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"961e4391-08da-49eb-8e7d-aa735452853a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:31.376670 kubelet[2727]: E0130 13:56:31.376531 2727 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"961e4391-08da-49eb-8e7d-aa735452853a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-km89h" podUID="961e4391-08da-49eb-8e7d-aa735452853a" Jan 30 13:56:31.383447 containerd[1581]: time="2025-01-30T13:56:31.383366842Z" level=error msg="StopPodSandbox for \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\" failed" error="failed to destroy network for sandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:31.383837 kubelet[2727]: E0130 13:56:31.383756 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:56:31.384038 kubelet[2727]: E0130 13:56:31.383946 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18"} Jan 30 13:56:31.384038 kubelet[2727]: E0130 13:56:31.384004 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59569bfa-ceee-4967-bb23-bc58916a113d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:31.384178 kubelet[2727]: E0130 13:56:31.384040 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59569bfa-ceee-4967-bb23-bc58916a113d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-fnpc6" podUID="59569bfa-ceee-4967-bb23-bc58916a113d" Jan 30 13:56:31.963990 containerd[1581]: time="2025-01-30T13:56:31.963342346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzzqf,Uid:823691bc-ea65-4b7b-a6e1-f21ba2308d6d,Namespace:calico-system,Attempt:0,}" Jan 30 13:56:32.062366 containerd[1581]: time="2025-01-30T13:56:32.062288937Z" level=error msg="Failed to destroy network for sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:32.064880 containerd[1581]: time="2025-01-30T13:56:32.062878129Z" level=error msg="encountered an error cleaning up failed sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:32.064880 containerd[1581]: time="2025-01-30T13:56:32.063011418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzzqf,Uid:823691bc-ea65-4b7b-a6e1-f21ba2308d6d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:32.065123 kubelet[2727]: E0130 13:56:32.064369 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:32.065123 kubelet[2727]: E0130 13:56:32.064444 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rzzqf" Jan 30 13:56:32.065123 kubelet[2727]: E0130 13:56:32.064470 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rzzqf" Jan 30 13:56:32.065317 kubelet[2727]: E0130 13:56:32.064530 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rzzqf_calico-system(823691bc-ea65-4b7b-a6e1-f21ba2308d6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rzzqf_calico-system(823691bc-ea65-4b7b-a6e1-f21ba2308d6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rzzqf" podUID="823691bc-ea65-4b7b-a6e1-f21ba2308d6d" Jan 30 13:56:32.067726 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226-shm.mount: Deactivated successfully. 
Jan 30 13:56:32.254425 kubelet[2727]: I0130 13:56:32.254245 2727 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:56:32.257811 containerd[1581]: time="2025-01-30T13:56:32.255153772Z" level=info msg="StopPodSandbox for \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\"" Jan 30 13:56:32.257811 containerd[1581]: time="2025-01-30T13:56:32.255429407Z" level=info msg="Ensure that sandbox 697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226 in task-service has been cleanup successfully" Jan 30 13:56:32.302561 containerd[1581]: time="2025-01-30T13:56:32.302471186Z" level=error msg="StopPodSandbox for \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\" failed" error="failed to destroy network for sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:32.303222 kubelet[2727]: E0130 13:56:32.302989 2727 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:56:32.303222 kubelet[2727]: E0130 13:56:32.303068 2727 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226"} Jan 30 13:56:32.303222 kubelet[2727]: E0130 13:56:32.303119 2727 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"823691bc-ea65-4b7b-a6e1-f21ba2308d6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:32.303222 kubelet[2727]: E0130 13:56:32.303154 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"823691bc-ea65-4b7b-a6e1-f21ba2308d6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rzzqf" podUID="823691bc-ea65-4b7b-a6e1-f21ba2308d6d" Jan 30 13:56:37.993237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426528812.mount: Deactivated successfully. 
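Every failed RunPodSandbox and StopPodSandbox above trips over the same stat of /var/lib/calico/nodename. That file is written by the calico/node pod once it is running; until it exists, every CNI ADD and DEL on the host returns this error, which is why coredns, calico-apiserver, calico-kube-controllers and csi-node-driver-rzzqf all keep failing sandbox setup. A minimal Go sketch of that lookup, using the path and message from the log lines above (a sketch only, not Calico's actual source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Path that calico/node creates on startup and that the CNI plugin reads on
// every ADD/DEL (taken from the error messages above).
const nodenameFile = "/var/lib/calico/nodename"

// determineNodename fails the same way the sandbox errors above do for as
// long as calico/node has not yet written the file.
func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if name, err := determineNodename(); err != nil {
		fmt.Println("CNI ADD/DEL would fail here:", err)
	} else {
		fmt.Println("node name:", name)
	}
}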
Jan 30 13:56:38.099367 containerd[1581]: time="2025-01-30T13:56:38.098916518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:38.099367 containerd[1581]: time="2025-01-30T13:56:38.084146268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:56:38.136726 containerd[1581]: time="2025-01-30T13:56:38.136583721Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:38.154476 containerd[1581]: time="2025-01-30T13:56:38.154416487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.981376614s" Jan 30 13:56:38.154476 containerd[1581]: time="2025-01-30T13:56:38.154476128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:56:38.158183 containerd[1581]: time="2025-01-30T13:56:38.157204464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:38.263797 containerd[1581]: time="2025-01-30T13:56:38.262928629Z" level=info msg="CreateContainer within sandbox \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:56:38.361221 containerd[1581]: time="2025-01-30T13:56:38.359417952Z" level=info msg="CreateContainer within sandbox \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\"" Jan 30 13:56:38.378179 containerd[1581]: time="2025-01-30T13:56:38.378115070Z" level=info msg="StartContainer for \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\"" Jan 30 13:56:38.469428 systemd-journald[1139]: Under memory pressure, flushing caches. Jan 30 13:56:38.468834 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 13:56:38.468955 systemd-resolved[1468]: Flushed all caches. Jan 30 13:56:38.573236 containerd[1581]: time="2025-01-30T13:56:38.572456902Z" level=info msg="StartContainer for \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\" returns successfully" Jan 30 13:56:38.726715 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:56:38.726970 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
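The pod_startup_latency_tracker entry just below reports podStartE2EDuration="22.060294933s" against podStartSLOduration=4.793190314 for calico-node-4flcj. Assuming the SLO figure is the end-to-end duration minus the image-pull window (firstStartedPulling to lastFinishedPulling), the timestamps in that entry reconcile to within rounding; a short Go check of the arithmetic:

package main

import (
	"fmt"
	"time"
)

// Reconstructs the calico-node-4flcj startup durations from the timestamps
// printed by pod_startup_latency_tracker, assuming the SLO duration is the
// end-to-end duration minus the image-pull window.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-01-30 13:56:20 +0000 UTC")          // podCreationTimestamp
	running := parse("2025-01-30 13:56:42.060294933 +0000 UTC") // watchObservedRunningTime
	pullStart := parse("2025-01-30 13:56:20.894894038 +0000 UTC")
	pullEnd := parse("2025-01-30 13:56:38.161998634 +0000 UTC")

	e2e := running.Sub(created)    // 22.060294933s, matches podStartE2EDuration
	pull := pullEnd.Sub(pullStart) // ~17.27s pulling ghcr.io/flatcar/calico/node:v3.29.1
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull ≈ 4.793s, close to podStartSLOduration
}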
Jan 30 13:56:39.283589 kubelet[2727]: E0130 13:56:39.283529 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:40.295020 kubelet[2727]: E0130 13:56:40.294398 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:40.516010 systemd-journald[1139]: Under memory pressure, flushing caches. Jan 30 13:56:40.514470 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 13:56:40.514484 systemd-resolved[1468]: Flushed all caches. Jan 30 13:56:41.291695 kubelet[2727]: E0130 13:56:41.291661 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:41.326407 systemd[1]: run-containerd-runc-k8s.io-4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3-runc.igfJnx.mount: Deactivated successfully. Jan 30 13:56:41.960375 containerd[1581]: time="2025-01-30T13:56:41.959629936Z" level=info msg="StopPodSandbox for \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\"" Jan 30 13:56:41.961130 containerd[1581]: time="2025-01-30T13:56:41.961052979Z" level=info msg="StopPodSandbox for \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\"" Jan 30 13:56:42.080013 kubelet[2727]: I0130 13:56:42.071059 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4flcj" podStartSLOduration=4.793190314 podStartE2EDuration="22.060294933s" podCreationTimestamp="2025-01-30 13:56:20 +0000 UTC" firstStartedPulling="2025-01-30 13:56:20.894894038 +0000 UTC m=+22.158882413" lastFinishedPulling="2025-01-30 13:56:38.161998634 +0000 UTC m=+39.425987032" observedRunningTime="2025-01-30 13:56:39.350846896 +0000 UTC m=+40.614835293" watchObservedRunningTime="2025-01-30 13:56:42.060294933 +0000 UTC m=+43.324283340" Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.061 [INFO][4039] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.066 [INFO][4039] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" iface="eth0" netns="/var/run/netns/cni-26d69aaa-6d93-2ad5-aa3f-a1f078df1342" Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.068 [INFO][4039] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" iface="eth0" netns="/var/run/netns/cni-26d69aaa-6d93-2ad5-aa3f-a1f078df1342" Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.071 [INFO][4039] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" iface="eth0" netns="/var/run/netns/cni-26d69aaa-6d93-2ad5-aa3f-a1f078df1342" Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.071 [INFO][4039] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.071 [INFO][4039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.236 [INFO][4050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" HandleID="k8s-pod-network.09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.238 [INFO][4050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.238 [INFO][4050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.251 [WARNING][4050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" HandleID="k8s-pod-network.09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.251 [INFO][4050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" HandleID="k8s-pod-network.09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.253 [INFO][4050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:42.266239 containerd[1581]: 2025-01-30 13:56:42.258 [INFO][4039] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:56:42.266239 containerd[1581]: time="2025-01-30T13:56:42.264903176Z" level=info msg="TearDown network for sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\" successfully" Jan 30 13:56:42.266239 containerd[1581]: time="2025-01-30T13:56:42.264947664Z" level=info msg="StopPodSandbox for \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\" returns successfully" Jan 30 13:56:42.268223 containerd[1581]: time="2025-01-30T13:56:42.266834514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99cfd4b69-99xnz,Uid:3d139b6a-d18c-4490-b775-b61437104603,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:56:42.271950 systemd[1]: run-netns-cni\x2d26d69aaa\x2d6d93\x2d2ad5\x2daa3f\x2da1f078df1342.mount: Deactivated successfully. Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.067 [INFO][4038] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.069 [INFO][4038] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" iface="eth0" netns="/var/run/netns/cni-0450a1bc-4c92-6a21-6c61-dff1994ba31e" Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.069 [INFO][4038] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" iface="eth0" netns="/var/run/netns/cni-0450a1bc-4c92-6a21-6c61-dff1994ba31e" Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.071 [INFO][4038] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" iface="eth0" netns="/var/run/netns/cni-0450a1bc-4c92-6a21-6c61-dff1994ba31e" Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.071 [INFO][4038] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.071 [INFO][4038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.236 [INFO][4051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" HandleID="k8s-pod-network.f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.238 [INFO][4051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.253 [INFO][4051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.267 [WARNING][4051] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" HandleID="k8s-pod-network.f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.267 [INFO][4051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" HandleID="k8s-pod-network.f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.272 [INFO][4051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:42.283762 containerd[1581]: 2025-01-30 13:56:42.278 [INFO][4038] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:56:42.289006 containerd[1581]: time="2025-01-30T13:56:42.285794629Z" level=info msg="TearDown network for sandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\" successfully" Jan 30 13:56:42.289006 containerd[1581]: time="2025-01-30T13:56:42.285831336Z" level=info msg="StopPodSandbox for \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\" returns successfully" Jan 30 13:56:42.289191 kubelet[2727]: E0130 13:56:42.286369 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:42.290348 systemd[1]: run-netns-cni\x2d0450a1bc\x2d4c92\x2d6a21\x2d6c61\x2ddff1994ba31e.mount: Deactivated successfully. Jan 30 13:56:42.291541 containerd[1581]: time="2025-01-30T13:56:42.291315448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fnpc6,Uid:59569bfa-ceee-4967-bb23-bc58916a113d,Namespace:kube-system,Attempt:1,}" Jan 30 13:56:42.538399 systemd-networkd[1218]: calib909afcdb2b: Link UP Jan 30 13:56:42.540552 systemd-networkd[1218]: calib909afcdb2b: Gained carrier Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.366 [INFO][4066] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.381 [INFO][4066] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0 calico-apiserver-99cfd4b69- calico-apiserver 3d139b6a-d18c-4490-b775-b61437104603 860 0 2025-01-30 13:56:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:99cfd4b69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-8-baee985ae6 calico-apiserver-99cfd4b69-99xnz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib909afcdb2b [] []}} ContainerID="444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-99xnz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.382 [INFO][4066] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-99xnz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.440 [INFO][4089] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" HandleID="k8s-pod-network.444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.453 [INFO][4089] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" HandleID="k8s-pod-network.444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" 
Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-8-baee985ae6", "pod":"calico-apiserver-99cfd4b69-99xnz", "timestamp":"2025-01-30 13:56:42.440070623 +0000 UTC"}, Hostname:"ci-4081.3.0-8-baee985ae6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.453 [INFO][4089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.453 [INFO][4089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.453 [INFO][4089] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-8-baee985ae6' Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.457 [INFO][4089] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.472 [INFO][4089] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.479 [INFO][4089] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.482 [INFO][4089] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.485 [INFO][4089] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.485 [INFO][4089] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.487 [INFO][4089] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6 Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.494 [INFO][4089] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.508 [INFO][4089] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.65/26] block=192.168.9.64/26 handle="k8s-pod-network.444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.508 [INFO][4089] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.65/26] handle="k8s-pod-network.444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.508 [INFO][4089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:42.585598 containerd[1581]: 2025-01-30 13:56:42.508 [INFO][4089] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.65/26] IPv6=[] ContainerID="444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" HandleID="k8s-pod-network.444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:56:42.589674 containerd[1581]: 2025-01-30 13:56:42.514 [INFO][4066] cni-plugin/k8s.go 386: Populated endpoint ContainerID="444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-99xnz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0", GenerateName:"calico-apiserver-99cfd4b69-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d139b6a-d18c-4490-b775-b61437104603", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99cfd4b69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"", Pod:"calico-apiserver-99cfd4b69-99xnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib909afcdb2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:42.589674 containerd[1581]: 2025-01-30 13:56:42.514 [INFO][4066] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.65/32] ContainerID="444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-99xnz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:56:42.589674 containerd[1581]: 2025-01-30 13:56:42.514 [INFO][4066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib909afcdb2b ContainerID="444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-99xnz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:56:42.589674 containerd[1581]: 2025-01-30 13:56:42.538 [INFO][4066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-99xnz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:56:42.589674 containerd[1581]: 2025-01-30 13:56:42.547 [INFO][4066] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-99xnz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0", GenerateName:"calico-apiserver-99cfd4b69-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d139b6a-d18c-4490-b775-b61437104603", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99cfd4b69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6", Pod:"calico-apiserver-99cfd4b69-99xnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib909afcdb2b", MAC:"c2:ae:f8:99:99:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:42.589674 containerd[1581]: 2025-01-30 13:56:42.580 [INFO][4066] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-99xnz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:56:42.605194 systemd-networkd[1218]: cali955eba4949f: Link UP Jan 30 13:56:42.605559 systemd-networkd[1218]: cali955eba4949f: Gained carrier Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.366 [INFO][4072] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.383 [INFO][4072] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0 coredns-7db6d8ff4d- kube-system 59569bfa-ceee-4967-bb23-bc58916a113d 861 0 2025-01-30 13:56:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-8-baee985ae6 coredns-7db6d8ff4d-fnpc6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali955eba4949f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fnpc6" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.383 [INFO][4072] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fnpc6" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.450 [INFO][4093] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" HandleID="k8s-pod-network.2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.464 [INFO][4093] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" HandleID="k8s-pod-network.2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-8-baee985ae6", "pod":"coredns-7db6d8ff4d-fnpc6", "timestamp":"2025-01-30 13:56:42.450142701 +0000 UTC"}, Hostname:"ci-4081.3.0-8-baee985ae6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.464 [INFO][4093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.508 [INFO][4093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.509 [INFO][4093] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-8-baee985ae6' Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.513 [INFO][4093] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.522 [INFO][4093] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.548 [INFO][4093] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.553 [INFO][4093] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.559 [INFO][4093] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.559 [INFO][4093] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.562 [INFO][4093] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6 Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.570 [INFO][4093] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.590 [INFO][4093] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.66/26] block=192.168.9.64/26 handle="k8s-pod-network.2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.592 [INFO][4093] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.66/26] handle="k8s-pod-network.2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.592 [INFO][4093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
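The entries above, together with the matching sequence for calico-apiserver-99cfd4b69-99xnz at 13:56:42.44-42.51, record Calico's IPAM auto-assign flow as logged: acquire the host-wide IPAM lock, confirm this node's affinity for the 192.168.9.64/26 block, claim the next free address under a per-container handle ID, then release the lock. The following is a minimal, self-contained Go sketch of that flow for orientation only; it is a toy model, not libcalico-go's actual code, and the names block, ipamHost, autoAssign and next are invented for the sketch. In the captured log this block handed out 192.168.9.65 and 192.168.9.66.

package main

import (
	"fmt"
	"net"
	"sync"
)

// block mirrors the per-/26 allocation block the log loads for this host
// (cidr=192.168.9.64/26): which addresses are taken and under which handle.
type block struct {
	cidr      net.IPNet
	allocated map[string]string // IP -> handle ID, e.g. "k8s-pod-network.<containerID>"
}

type ipamHost struct {
	mu       sync.Mutex // stands in for the "host-wide IPAM lock"
	affinity map[string]*block
}

// autoAssign claims one IPv4 address from a block the host has affinity for,
// mirroring "Trying affinity for 192.168.9.64/26" ... "Successfully claimed IPs".
func (h *ipamHost) autoAssign(blockCIDR, handleID string) (net.IP, error) {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."

	b, ok := h.affinity[blockCIDR]
	if !ok {
		return nil, fmt.Errorf("no affinity for block %s", blockCIDR)
	}
	// Walk the block and take the first unallocated address.
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, used := b.allocated[ip.String()]; !used {
			b.allocated[ip.String()] = handleID
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", blockCIDR)
}

// next returns the numerically following IP address.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.9.64/26")
	h := &ipamHost{affinity: map[string]*block{
		"192.168.9.64/26": {cidr: *cidr, allocated: map[string]string{
			// Assumption of the toy: the block's first address is already in
			// use, so the sketch hands out .65 and .66 in the same order as
			// the captured log. The log does not show why .64 was skipped.
			"192.168.9.64": "already-in-use",
		}},
	}}
	for _, handle := range []string{
		"k8s-pod-network.444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6",
		"k8s-pod-network.2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6",
	} {
		ip, err := h.autoAssign("192.168.9.64/26", handle)
		fmt.Println(ip, err) // 192.168.9.65, then 192.168.9.66
	}
}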
Jan 30 13:56:42.637540 containerd[1581]: 2025-01-30 13:56:42.592 [INFO][4093] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.66/26] IPv6=[] ContainerID="2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" HandleID="k8s-pod-network.2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:56:42.640065 containerd[1581]: 2025-01-30 13:56:42.597 [INFO][4072] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fnpc6" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"59569bfa-ceee-4967-bb23-bc58916a113d", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"", Pod:"coredns-7db6d8ff4d-fnpc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali955eba4949f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:42.640065 containerd[1581]: 2025-01-30 13:56:42.597 [INFO][4072] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.66/32] ContainerID="2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fnpc6" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:56:42.640065 containerd[1581]: 2025-01-30 13:56:42.597 [INFO][4072] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali955eba4949f ContainerID="2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fnpc6" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:56:42.640065 containerd[1581]: 2025-01-30 13:56:42.606 [INFO][4072] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fnpc6" 
WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:56:42.640065 containerd[1581]: 2025-01-30 13:56:42.607 [INFO][4072] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fnpc6" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"59569bfa-ceee-4967-bb23-bc58916a113d", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6", Pod:"coredns-7db6d8ff4d-fnpc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali955eba4949f", MAC:"7a:a7:40:d4:61:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:42.640065 containerd[1581]: 2025-01-30 13:56:42.627 [INFO][4072] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-fnpc6" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:56:42.720121 containerd[1581]: time="2025-01-30T13:56:42.718337647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:42.720121 containerd[1581]: time="2025-01-30T13:56:42.718436972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:42.720121 containerd[1581]: time="2025-01-30T13:56:42.718463075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:42.728996 containerd[1581]: time="2025-01-30T13:56:42.728031550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:42.754813 containerd[1581]: time="2025-01-30T13:56:42.754261761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:42.754813 containerd[1581]: time="2025-01-30T13:56:42.754487879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:42.754813 containerd[1581]: time="2025-01-30T13:56:42.754550726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:42.755469 containerd[1581]: time="2025-01-30T13:56:42.754788183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:42.962284 containerd[1581]: time="2025-01-30T13:56:42.961015754Z" level=info msg="StopPodSandbox for \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\"" Jan 30 13:56:43.057028 containerd[1581]: time="2025-01-30T13:56:43.056285758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fnpc6,Uid:59569bfa-ceee-4967-bb23-bc58916a113d,Namespace:kube-system,Attempt:1,} returns sandbox id \"2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6\"" Jan 30 13:56:43.063436 kubelet[2727]: E0130 13:56:43.061112 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:43.068755 containerd[1581]: time="2025-01-30T13:56:43.068471552Z" level=info msg="CreateContainer within sandbox \"2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:56:43.092117 containerd[1581]: time="2025-01-30T13:56:43.092068278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99cfd4b69-99xnz,Uid:3d139b6a-d18c-4490-b775-b61437104603,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6\"" Jan 30 13:56:43.103500 containerd[1581]: time="2025-01-30T13:56:43.103452362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:56:43.113994 containerd[1581]: time="2025-01-30T13:56:43.113935088Z" level=info msg="CreateContainer within sandbox \"2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ddc2be0eb14dda46a822794f0ff22d259d03a208111338639c6391d5656f0b0\"" Jan 30 13:56:43.115533 containerd[1581]: time="2025-01-30T13:56:43.115089055Z" level=info msg="StartContainer for \"5ddc2be0eb14dda46a822794f0ff22d259d03a208111338639c6391d5656f0b0\"" Jan 30 13:56:43.271326 containerd[1581]: time="2025-01-30T13:56:43.270781654Z" level=info msg="StartContainer for \"5ddc2be0eb14dda46a822794f0ff22d259d03a208111338639c6391d5656f0b0\" returns successfully" Jan 30 13:56:43.318737 kubelet[2727]: E0130 13:56:43.317169 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.183 [INFO][4237] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.183 [INFO][4237] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" iface="eth0" netns="/var/run/netns/cni-584d12b7-0429-b32b-0043-39a429c74d98" Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.185 [INFO][4237] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" iface="eth0" netns="/var/run/netns/cni-584d12b7-0429-b32b-0043-39a429c74d98" Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.187 [INFO][4237] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" iface="eth0" netns="/var/run/netns/cni-584d12b7-0429-b32b-0043-39a429c74d98" Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.188 [INFO][4237] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.188 [INFO][4237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.311 [INFO][4275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" HandleID="k8s-pod-network.29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.311 [INFO][4275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.311 [INFO][4275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.333 [WARNING][4275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" HandleID="k8s-pod-network.29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.333 [INFO][4275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" HandleID="k8s-pod-network.29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.336 [INFO][4275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:43.342355 containerd[1581]: 2025-01-30 13:56:43.339 [INFO][4237] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:43.344345 containerd[1581]: time="2025-01-30T13:56:43.343316008Z" level=info msg="TearDown network for sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\" successfully" Jan 30 13:56:43.344345 containerd[1581]: time="2025-01-30T13:56:43.343378174Z" level=info msg="StopPodSandbox for \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\" returns successfully" Jan 30 13:56:43.349823 systemd[1]: run-netns-cni\x2d584d12b7\x2d0429\x2db32b\x2d0043\x2d39a429c74d98.mount: Deactivated successfully. Jan 30 13:56:43.355245 containerd[1581]: time="2025-01-30T13:56:43.353571327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4b699786-9bmxf,Uid:948fd5df-85f6-4117-955d-ae954df34712,Namespace:calico-system,Attempt:1,}" Jan 30 13:56:43.355445 kubelet[2727]: I0130 13:56:43.353926 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fnpc6" podStartSLOduration=30.353903059 podStartE2EDuration="30.353903059s" podCreationTimestamp="2025-01-30 13:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:43.351679832 +0000 UTC m=+44.615668229" watchObservedRunningTime="2025-01-30 13:56:43.353903059 +0000 UTC m=+44.617891455" Jan 30 13:56:43.586334 systemd-networkd[1218]: calib909afcdb2b: Gained IPv6LL Jan 30 13:56:43.646741 systemd-networkd[1218]: cali9718234dd52: Link UP Jan 30 13:56:43.646999 systemd-networkd[1218]: cali9718234dd52: Gained carrier Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.475 [INFO][4293] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.505 [INFO][4293] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0 calico-kube-controllers-7d4b699786- calico-system 948fd5df-85f6-4117-955d-ae954df34712 875 0 2025-01-30 13:56:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d4b699786 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-8-baee985ae6 calico-kube-controllers-7d4b699786-9bmxf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9718234dd52 [] []}} ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Namespace="calico-system" Pod="calico-kube-controllers-7d4b699786-9bmxf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-" Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.505 [INFO][4293] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Namespace="calico-system" Pod="calico-kube-controllers-7d4b699786-9bmxf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.576 [INFO][4309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" 
HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.589 [INFO][4309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d9b10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-8-baee985ae6", "pod":"calico-kube-controllers-7d4b699786-9bmxf", "timestamp":"2025-01-30 13:56:43.576449469 +0000 UTC"}, Hostname:"ci-4081.3.0-8-baee985ae6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.589 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.589 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.589 [INFO][4309] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-8-baee985ae6' Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.593 [INFO][4309] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.600 [INFO][4309] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.607 [INFO][4309] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.612 [INFO][4309] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.616 [INFO][4309] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.616 [INFO][4309] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.619 [INFO][4309] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8 Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.624 [INFO][4309] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.634 [INFO][4309] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.67/26] block=192.168.9.64/26 handle="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:43.669590 
containerd[1581]: 2025-01-30 13:56:43.634 [INFO][4309] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.67/26] handle="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.634 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:43.669590 containerd[1581]: 2025-01-30 13:56:43.634 [INFO][4309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.67/26] IPv6=[] ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:43.670985 containerd[1581]: 2025-01-30 13:56:43.640 [INFO][4293] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Namespace="calico-system" Pod="calico-kube-controllers-7d4b699786-9bmxf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0", GenerateName:"calico-kube-controllers-7d4b699786-", Namespace:"calico-system", SelfLink:"", UID:"948fd5df-85f6-4117-955d-ae954df34712", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4b699786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"", Pod:"calico-kube-controllers-7d4b699786-9bmxf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9718234dd52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:43.670985 containerd[1581]: 2025-01-30 13:56:43.641 [INFO][4293] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.67/32] ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Namespace="calico-system" Pod="calico-kube-controllers-7d4b699786-9bmxf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:43.670985 containerd[1581]: 2025-01-30 13:56:43.641 [INFO][4293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9718234dd52 ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Namespace="calico-system" Pod="calico-kube-controllers-7d4b699786-9bmxf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:43.670985 containerd[1581]: 
2025-01-30 13:56:43.648 [INFO][4293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Namespace="calico-system" Pod="calico-kube-controllers-7d4b699786-9bmxf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:43.670985 containerd[1581]: 2025-01-30 13:56:43.648 [INFO][4293] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Namespace="calico-system" Pod="calico-kube-controllers-7d4b699786-9bmxf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0", GenerateName:"calico-kube-controllers-7d4b699786-", Namespace:"calico-system", SelfLink:"", UID:"948fd5df-85f6-4117-955d-ae954df34712", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4b699786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8", Pod:"calico-kube-controllers-7d4b699786-9bmxf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9718234dd52", MAC:"de:30:14:a4:8e:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:43.670985 containerd[1581]: 2025-01-30 13:56:43.663 [INFO][4293] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Namespace="calico-system" Pod="calico-kube-controllers-7d4b699786-9bmxf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:43.720566 containerd[1581]: time="2025-01-30T13:56:43.720068816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:43.720566 containerd[1581]: time="2025-01-30T13:56:43.720273867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:43.720566 containerd[1581]: time="2025-01-30T13:56:43.720312687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:43.720566 containerd[1581]: time="2025-01-30T13:56:43.720446741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:43.813066 containerd[1581]: time="2025-01-30T13:56:43.812010170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d4b699786-9bmxf,Uid:948fd5df-85f6-4117-955d-ae954df34712,Namespace:calico-system,Attempt:1,} returns sandbox id \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\"" Jan 30 13:56:44.339195 kubelet[2727]: E0130 13:56:44.339100 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:44.419530 systemd-networkd[1218]: cali955eba4949f: Gained IPv6LL Jan 30 13:56:44.738813 systemd-networkd[1218]: cali9718234dd52: Gained IPv6LL Jan 30 13:56:44.964171 containerd[1581]: time="2025-01-30T13:56:44.962145589Z" level=info msg="StopPodSandbox for \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\"" Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.072 [INFO][4410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.072 [INFO][4410] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" iface="eth0" netns="/var/run/netns/cni-3f428057-13fe-6655-b48f-6a7d581fc4b0" Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.073 [INFO][4410] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" iface="eth0" netns="/var/run/netns/cni-3f428057-13fe-6655-b48f-6a7d581fc4b0" Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.074 [INFO][4410] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" iface="eth0" netns="/var/run/netns/cni-3f428057-13fe-6655-b48f-6a7d581fc4b0" Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.074 [INFO][4410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.074 [INFO][4410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.127 [INFO][4416] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" HandleID="k8s-pod-network.376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.127 [INFO][4416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.127 [INFO][4416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.135 [WARNING][4416] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" HandleID="k8s-pod-network.376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.136 [INFO][4416] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" HandleID="k8s-pod-network.376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.139 [INFO][4416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:45.154563 containerd[1581]: 2025-01-30 13:56:45.148 [INFO][4410] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:45.159457 containerd[1581]: time="2025-01-30T13:56:45.155365307Z" level=info msg="TearDown network for sandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\" successfully" Jan 30 13:56:45.159457 containerd[1581]: time="2025-01-30T13:56:45.155643366Z" level=info msg="StopPodSandbox for \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\" returns successfully" Jan 30 13:56:45.161258 containerd[1581]: time="2025-01-30T13:56:45.159985896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99cfd4b69-blxcz,Uid:97b02c81-6b32-4a65-8b9e-6d8426a65011,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:56:45.163228 systemd[1]: run-netns-cni\x2d3f428057\x2d13fe\x2d6655\x2db48f\x2d6a7d581fc4b0.mount: Deactivated successfully. 
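The teardown sequences above (13:56:43.31-43.34 for sandbox 29a68ee1... and 13:56:45.12-45.15 for 376b55b3...) show the reverse path: the CNI DEL flow releases addresses first by handle ID and then by workload ID, and when no allocation is recorded it only logs "Asked to release address but it doesn't exist. Ignoring" and carries on rather than failing the teardown. A small self-contained Go sketch of that tolerant release behaviour follows; it is an illustrative toy, not Calico's implementation, and releaseByHandle is an invented name.

package main

import "fmt"

// releaseByHandle drops every address recorded under the given handle ID and,
// when nothing is recorded, warns and returns instead of failing — matching
// the warn-and-continue behaviour in the log above.
func releaseByHandle(allocated map[string]string, handleID string) {
	released := 0
	for ip, h := range allocated {
		if h == handleID {
			delete(allocated, ip)
			released++
		}
	}
	if released == 0 {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist, ignoring\n", handleID)
		return
	}
	fmt.Printf("released %d address(es) for handle %s\n", released, handleID)
}

func main() {
	allocated := map[string]string{
		"192.168.9.66": "k8s-pod-network.2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6",
	}
	// The sandbox torn down above never had an address recorded under its
	// handle, so the release is a warn-and-continue no-op.
	releaseByHandle(allocated, "k8s-pod-network.376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58")
}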
Jan 30 13:56:45.343222 kubelet[2727]: E0130 13:56:45.342285 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:45.548085 systemd-networkd[1218]: cali73fb1be6f7b: Link UP Jan 30 13:56:45.553271 systemd-networkd[1218]: cali73fb1be6f7b: Gained carrier Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.237 [INFO][4423] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.259 [INFO][4423] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0 calico-apiserver-99cfd4b69- calico-apiserver 97b02c81-6b32-4a65-8b9e-6d8426a65011 896 0 2025-01-30 13:56:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:99cfd4b69 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-8-baee985ae6 calico-apiserver-99cfd4b69-blxcz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali73fb1be6f7b [] []}} ContainerID="f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-blxcz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.259 [INFO][4423] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-blxcz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.399 [INFO][4436] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" HandleID="k8s-pod-network.f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.427 [INFO][4436] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" HandleID="k8s-pod-network.f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000fba40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-8-baee985ae6", "pod":"calico-apiserver-99cfd4b69-blxcz", "timestamp":"2025-01-30 13:56:45.399810377 +0000 UTC"}, Hostname:"ci-4081.3.0-8-baee985ae6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.428 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.428 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.428 [INFO][4436] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-8-baee985ae6' Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.437 [INFO][4436] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.448 [INFO][4436] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.468 [INFO][4436] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.473 [INFO][4436] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.479 [INFO][4436] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.479 [INFO][4436] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.487 [INFO][4436] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318 Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.510 [INFO][4436] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.523 [INFO][4436] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.68/26] block=192.168.9.64/26 handle="k8s-pod-network.f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.523 [INFO][4436] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.68/26] handle="k8s-pod-network.f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.523 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:45.605229 containerd[1581]: 2025-01-30 13:56:45.523 [INFO][4436] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.68/26] IPv6=[] ContainerID="f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" HandleID="k8s-pod-network.f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:45.606033 containerd[1581]: 2025-01-30 13:56:45.529 [INFO][4423] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-blxcz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0", GenerateName:"calico-apiserver-99cfd4b69-", Namespace:"calico-apiserver", SelfLink:"", UID:"97b02c81-6b32-4a65-8b9e-6d8426a65011", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99cfd4b69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"", Pod:"calico-apiserver-99cfd4b69-blxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73fb1be6f7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:45.606033 containerd[1581]: 2025-01-30 13:56:45.529 [INFO][4423] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.68/32] ContainerID="f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-blxcz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:45.606033 containerd[1581]: 2025-01-30 13:56:45.530 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73fb1be6f7b ContainerID="f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-blxcz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:45.606033 containerd[1581]: 2025-01-30 13:56:45.558 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-blxcz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:45.606033 containerd[1581]: 2025-01-30 13:56:45.563 [INFO][4423] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-blxcz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0", GenerateName:"calico-apiserver-99cfd4b69-", Namespace:"calico-apiserver", SelfLink:"", UID:"97b02c81-6b32-4a65-8b9e-6d8426a65011", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99cfd4b69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318", Pod:"calico-apiserver-99cfd4b69-blxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73fb1be6f7b", MAC:"f2:a9:5b:0e:75:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:45.606033 containerd[1581]: 2025-01-30 13:56:45.589 [INFO][4423] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318" Namespace="calico-apiserver" Pod="calico-apiserver-99cfd4b69-blxcz" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:45.680327 containerd[1581]: time="2025-01-30T13:56:45.679358549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:45.680327 containerd[1581]: time="2025-01-30T13:56:45.679810583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:45.680327 containerd[1581]: time="2025-01-30T13:56:45.679837356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:45.680327 containerd[1581]: time="2025-01-30T13:56:45.680003358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:45.833354 containerd[1581]: time="2025-01-30T13:56:45.833218840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-99cfd4b69-blxcz,Uid:97b02c81-6b32-4a65-8b9e-6d8426a65011,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318\"" Jan 30 13:56:45.961120 containerd[1581]: time="2025-01-30T13:56:45.959610660Z" level=info msg="StopPodSandbox for \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\"" Jan 30 13:56:45.963951 containerd[1581]: time="2025-01-30T13:56:45.963066695Z" level=info msg="StopPodSandbox for \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\"" Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.136 [INFO][4538] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.136 [INFO][4538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" iface="eth0" netns="/var/run/netns/cni-c6d7dfc5-78ac-49a6-1c6a-546615cd16df" Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.136 [INFO][4538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" iface="eth0" netns="/var/run/netns/cni-c6d7dfc5-78ac-49a6-1c6a-546615cd16df" Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.137 [INFO][4538] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" iface="eth0" netns="/var/run/netns/cni-c6d7dfc5-78ac-49a6-1c6a-546615cd16df" Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.137 [INFO][4538] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.137 [INFO][4538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.179 [INFO][4560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" HandleID="k8s-pod-network.e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.179 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.179 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.188 [WARNING][4560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" HandleID="k8s-pod-network.e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.188 [INFO][4560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" HandleID="k8s-pod-network.e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.193 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:46.207895 containerd[1581]: 2025-01-30 13:56:46.202 [INFO][4538] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:46.212050 containerd[1581]: time="2025-01-30T13:56:46.211995672Z" level=info msg="TearDown network for sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\" successfully" Jan 30 13:56:46.213152 containerd[1581]: time="2025-01-30T13:56:46.212540595Z" level=info msg="StopPodSandbox for \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\" returns successfully" Jan 30 13:56:46.213757 systemd[1]: run-netns-cni\x2dc6d7dfc5\x2d78ac\x2d49a6\x2d1c6a\x2d546615cd16df.mount: Deactivated successfully. Jan 30 13:56:46.217840 kubelet[2727]: E0130 13:56:46.217479 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:46.221352 containerd[1581]: time="2025-01-30T13:56:46.219451309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km89h,Uid:961e4391-08da-49eb-8e7d-aa735452853a,Namespace:kube-system,Attempt:1,}" Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.085 [INFO][4543] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.086 [INFO][4543] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" iface="eth0" netns="/var/run/netns/cni-028a44b6-f6a4-02f8-a048-13edb64a7bab" Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.087 [INFO][4543] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" iface="eth0" netns="/var/run/netns/cni-028a44b6-f6a4-02f8-a048-13edb64a7bab" Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.089 [INFO][4543] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" iface="eth0" netns="/var/run/netns/cni-028a44b6-f6a4-02f8-a048-13edb64a7bab" Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.089 [INFO][4543] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.089 [INFO][4543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.196 [INFO][4553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" HandleID="k8s-pod-network.697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.197 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.197 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.217 [WARNING][4553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" HandleID="k8s-pod-network.697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.217 [INFO][4553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" HandleID="k8s-pod-network.697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.228 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:46.237215 containerd[1581]: 2025-01-30 13:56:46.231 [INFO][4543] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:56:46.241610 containerd[1581]: time="2025-01-30T13:56:46.238441986Z" level=info msg="TearDown network for sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\" successfully" Jan 30 13:56:46.241610 containerd[1581]: time="2025-01-30T13:56:46.238488868Z" level=info msg="StopPodSandbox for \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\" returns successfully" Jan 30 13:56:46.241610 containerd[1581]: time="2025-01-30T13:56:46.241359647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzzqf,Uid:823691bc-ea65-4b7b-a6e1-f21ba2308d6d,Namespace:calico-system,Attempt:1,}" Jan 30 13:56:46.248231 systemd[1]: run-netns-cni\x2d028a44b6\x2df6a4\x2d02f8\x2da048\x2d13edb64a7bab.mount: Deactivated successfully. 
Jan 30 13:56:46.339214 containerd[1581]: time="2025-01-30T13:56:46.338659908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:46.341352 containerd[1581]: time="2025-01-30T13:56:46.341267780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:56:46.345517 containerd[1581]: time="2025-01-30T13:56:46.343813576Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:46.361806 containerd[1581]: time="2025-01-30T13:56:46.361748003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:46.365003 containerd[1581]: time="2025-01-30T13:56:46.364911982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.261203273s" Jan 30 13:56:46.365003 containerd[1581]: time="2025-01-30T13:56:46.365004544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:56:46.368424 containerd[1581]: time="2025-01-30T13:56:46.368376177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:56:46.374190 containerd[1581]: time="2025-01-30T13:56:46.374125432Z" level=info msg="CreateContainer within sandbox \"444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:56:46.400434 containerd[1581]: time="2025-01-30T13:56:46.400383882Z" level=info msg="CreateContainer within sandbox \"444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8b176def0198a0abd81318efbbcf3bdc290ef1a34132142201f7980e38058675\"" Jan 30 13:56:46.405503 containerd[1581]: time="2025-01-30T13:56:46.405050526Z" level=info msg="StartContainer for \"8b176def0198a0abd81318efbbcf3bdc290ef1a34132142201f7980e38058675\"" Jan 30 13:56:46.466380 systemd-journald[1139]: Under memory pressure, flushing caches. Jan 30 13:56:46.465523 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 13:56:46.465589 systemd-resolved[1468]: Flushed all caches. 
Jan 30 13:56:46.600020 containerd[1581]: time="2025-01-30T13:56:46.599859332Z" level=info msg="StartContainer for \"8b176def0198a0abd81318efbbcf3bdc290ef1a34132142201f7980e38058675\" returns successfully" Jan 30 13:56:46.688387 systemd-networkd[1218]: cali0e1261f3141: Link UP Jan 30 13:56:46.693154 systemd-networkd[1218]: cali0e1261f3141: Gained carrier Jan 30 13:56:46.721362 systemd-networkd[1218]: cali73fb1be6f7b: Gained IPv6LL Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.370 [INFO][4570] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.405 [INFO][4570] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0 coredns-7db6d8ff4d- kube-system 961e4391-08da-49eb-8e7d-aa735452853a 907 0 2025-01-30 13:56:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-8-baee985ae6 coredns-7db6d8ff4d-km89h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0e1261f3141 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km89h" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.405 [INFO][4570] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km89h" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.522 [INFO][4607] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" HandleID="k8s-pod-network.e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.577 [INFO][4607] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" HandleID="k8s-pod-network.e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a4200), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-8-baee985ae6", "pod":"coredns-7db6d8ff4d-km89h", "timestamp":"2025-01-30 13:56:46.522014409 +0000 UTC"}, Hostname:"ci-4081.3.0-8-baee985ae6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.578 [INFO][4607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.578 [INFO][4607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.578 [INFO][4607] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-8-baee985ae6' Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.585 [INFO][4607] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.601 [INFO][4607] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.621 [INFO][4607] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.627 [INFO][4607] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.632 [INFO][4607] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.633 [INFO][4607] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.636 [INFO][4607] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.644 [INFO][4607] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.661 [INFO][4607] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.69/26] block=192.168.9.64/26 handle="k8s-pod-network.e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.661 [INFO][4607] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.69/26] handle="k8s-pod-network.e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.662 [INFO][4607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:46.754336 containerd[1581]: 2025-01-30 13:56:46.663 [INFO][4607] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.69/26] IPv6=[] ContainerID="e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" HandleID="k8s-pod-network.e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:46.759942 containerd[1581]: 2025-01-30 13:56:46.671 [INFO][4570] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km89h" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"961e4391-08da-49eb-8e7d-aa735452853a", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"", Pod:"coredns-7db6d8ff4d-km89h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e1261f3141", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:46.759942 containerd[1581]: 2025-01-30 13:56:46.671 [INFO][4570] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.69/32] ContainerID="e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km89h" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:46.759942 containerd[1581]: 2025-01-30 13:56:46.671 [INFO][4570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e1261f3141 ContainerID="e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km89h" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:46.759942 containerd[1581]: 2025-01-30 13:56:46.694 [INFO][4570] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km89h" 
WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:46.759942 containerd[1581]: 2025-01-30 13:56:46.710 [INFO][4570] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km89h" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"961e4391-08da-49eb-8e7d-aa735452853a", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb", Pod:"coredns-7db6d8ff4d-km89h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e1261f3141", MAC:"7a:fe:c4:e7:d2:75", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:46.759942 containerd[1581]: 2025-01-30 13:56:46.740 [INFO][4570] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km89h" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:46.870743 systemd-networkd[1218]: cali5c990866613: Link UP Jan 30 13:56:46.878052 systemd-networkd[1218]: cali5c990866613: Gained carrier Jan 30 13:56:46.938129 containerd[1581]: time="2025-01-30T13:56:46.936052691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:46.938129 containerd[1581]: time="2025-01-30T13:56:46.936151711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:46.938129 containerd[1581]: time="2025-01-30T13:56:46.936296953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:46.938129 containerd[1581]: time="2025-01-30T13:56:46.936453220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.414 [INFO][4586] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.445 [INFO][4586] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0 csi-node-driver- calico-system 823691bc-ea65-4b7b-a6e1-f21ba2308d6d 906 0 2025-01-30 13:56:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-8-baee985ae6 csi-node-driver-rzzqf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5c990866613 [] []}} ContainerID="5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" Namespace="calico-system" Pod="csi-node-driver-rzzqf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.445 [INFO][4586] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" Namespace="calico-system" Pod="csi-node-driver-rzzqf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.562 [INFO][4622] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" HandleID="k8s-pod-network.5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.620 [INFO][4622] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" HandleID="k8s-pod-network.5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc580), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-8-baee985ae6", "pod":"csi-node-driver-rzzqf", "timestamp":"2025-01-30 13:56:46.562594609 +0000 UTC"}, Hostname:"ci-4081.3.0-8-baee985ae6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.620 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.662 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.662 [INFO][4622] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-8-baee985ae6' Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.666 [INFO][4622] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.694 [INFO][4622] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.718 [INFO][4622] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.743 [INFO][4622] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.768 [INFO][4622] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.768 [INFO][4622] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.774 [INFO][4622] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.795 [INFO][4622] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.811 [INFO][4622] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.70/26] block=192.168.9.64/26 handle="k8s-pod-network.5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.814 [INFO][4622] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.70/26] handle="k8s-pod-network.5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.815 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:46.953458 containerd[1581]: 2025-01-30 13:56:46.816 [INFO][4622] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.70/26] IPv6=[] ContainerID="5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" HandleID="k8s-pod-network.5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:46.956513 containerd[1581]: 2025-01-30 13:56:46.840 [INFO][4586] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" Namespace="calico-system" Pod="csi-node-driver-rzzqf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"823691bc-ea65-4b7b-a6e1-f21ba2308d6d", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"", Pod:"csi-node-driver-rzzqf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c990866613", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:46.956513 containerd[1581]: 2025-01-30 13:56:46.842 [INFO][4586] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.70/32] ContainerID="5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" Namespace="calico-system" Pod="csi-node-driver-rzzqf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:46.956513 containerd[1581]: 2025-01-30 13:56:46.842 [INFO][4586] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c990866613 ContainerID="5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" Namespace="calico-system" Pod="csi-node-driver-rzzqf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:46.956513 containerd[1581]: 2025-01-30 13:56:46.882 [INFO][4586] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" Namespace="calico-system" Pod="csi-node-driver-rzzqf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:46.956513 containerd[1581]: 2025-01-30 13:56:46.895 [INFO][4586] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" 
Namespace="calico-system" Pod="csi-node-driver-rzzqf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"823691bc-ea65-4b7b-a6e1-f21ba2308d6d", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c", Pod:"csi-node-driver-rzzqf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c990866613", MAC:"42:d8:23:8d:3f:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:46.956513 containerd[1581]: 2025-01-30 13:56:46.923 [INFO][4586] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c" Namespace="calico-system" Pod="csi-node-driver-rzzqf" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:47.049050 containerd[1581]: time="2025-01-30T13:56:47.047367250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:47.049050 containerd[1581]: time="2025-01-30T13:56:47.047448626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:47.049050 containerd[1581]: time="2025-01-30T13:56:47.047464196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:47.049050 containerd[1581]: time="2025-01-30T13:56:47.047631185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:47.136953 containerd[1581]: time="2025-01-30T13:56:47.135788809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km89h,Uid:961e4391-08da-49eb-8e7d-aa735452853a,Namespace:kube-system,Attempt:1,} returns sandbox id \"e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb\"" Jan 30 13:56:47.141720 kubelet[2727]: E0130 13:56:47.141051 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:47.169483 containerd[1581]: time="2025-01-30T13:56:47.165338355Z" level=info msg="CreateContainer within sandbox \"e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:56:47.210697 containerd[1581]: time="2025-01-30T13:56:47.210547634Z" level=info msg="CreateContainer within sandbox \"e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"26cceb28e222d7a9eba6a695ffd96dce2d64e384c99691217bf09a3bb5a39733\"" Jan 30 13:56:47.214693 containerd[1581]: time="2025-01-30T13:56:47.214610143Z" level=info msg="StartContainer for \"26cceb28e222d7a9eba6a695ffd96dce2d64e384c99691217bf09a3bb5a39733\"" Jan 30 13:56:47.324329 containerd[1581]: time="2025-01-30T13:56:47.323668239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rzzqf,Uid:823691bc-ea65-4b7b-a6e1-f21ba2308d6d,Namespace:calico-system,Attempt:1,} returns sandbox id \"5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c\"" Jan 30 13:56:47.346482 containerd[1581]: time="2025-01-30T13:56:47.346315623Z" level=info msg="StartContainer for \"26cceb28e222d7a9eba6a695ffd96dce2d64e384c99691217bf09a3bb5a39733\" returns successfully" Jan 30 13:56:47.374509 kubelet[2727]: E0130 13:56:47.374261 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:47.434218 kubelet[2727]: I0130 13:56:47.431398 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-99cfd4b69-99xnz" podStartSLOduration=23.167419374 podStartE2EDuration="26.431365163s" podCreationTimestamp="2025-01-30 13:56:21 +0000 UTC" firstStartedPulling="2025-01-30 13:56:43.102300063 +0000 UTC m=+44.366288445" lastFinishedPulling="2025-01-30 13:56:46.366245859 +0000 UTC m=+47.630234234" observedRunningTime="2025-01-30 13:56:47.401506797 +0000 UTC m=+48.665495195" watchObservedRunningTime="2025-01-30 13:56:47.431365163 +0000 UTC m=+48.695353562" Jan 30 13:56:48.379482 kubelet[2727]: E0130 13:56:48.377511 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:48.391281 kubelet[2727]: I0130 13:56:48.390123 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:56:48.405532 kubelet[2727]: I0130 13:56:48.405435 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-km89h" podStartSLOduration=35.40540577 podStartE2EDuration="35.40540577s" podCreationTimestamp="2025-01-30 13:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:47.432387426 +0000 UTC m=+48.696375826" watchObservedRunningTime="2025-01-30 13:56:48.40540577 +0000 UTC m=+49.669394167" Jan 30 13:56:48.514535 systemd-networkd[1218]: cali5c990866613: Gained IPv6LL Jan 30 13:56:48.579041 systemd-networkd[1218]: cali0e1261f3141: Gained IPv6LL Jan 30 13:56:48.851706 systemd[1]: Started sshd@7-146.190.136.39:22-147.75.109.163:57012.service - OpenSSH per-connection server daemon (147.75.109.163:57012). Jan 30 13:56:49.026426 sshd[4849]: Accepted publickey for core from 147.75.109.163 port 57012 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:49.038222 sshd[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:49.062543 systemd-logind[1558]: New session 8 of user core. Jan 30 13:56:49.069365 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:56:49.402668 kubelet[2727]: E0130 13:56:49.402247 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:49.588433 containerd[1581]: time="2025-01-30T13:56:49.588273665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:49.590235 containerd[1581]: time="2025-01-30T13:56:49.589911967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:56:49.591668 containerd[1581]: time="2025-01-30T13:56:49.591443036Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:49.595432 containerd[1581]: time="2025-01-30T13:56:49.595150330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:49.597428 containerd[1581]: time="2025-01-30T13:56:49.596918304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.22848868s" Jan 30 13:56:49.597428 containerd[1581]: time="2025-01-30T13:56:49.596971572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:56:49.602529 containerd[1581]: time="2025-01-30T13:56:49.600898798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:56:49.631045 containerd[1581]: time="2025-01-30T13:56:49.629510324Z" level=info msg="CreateContainer within sandbox \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:56:49.656519 containerd[1581]: time="2025-01-30T13:56:49.655896956Z" level=info msg="CreateContainer within sandbox \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96\"" Jan 30 13:56:49.660601 containerd[1581]: time="2025-01-30T13:56:49.660149022Z" level=info msg="StartContainer for \"77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96\"" Jan 30 13:56:49.662357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005020941.mount: Deactivated successfully. Jan 30 13:56:49.730080 sshd[4849]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:49.739538 systemd[1]: sshd@7-146.190.136.39:22-147.75.109.163:57012.service: Deactivated successfully. Jan 30 13:56:49.750100 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:56:49.754218 systemd-logind[1558]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:56:49.759316 systemd-logind[1558]: Removed session 8. Jan 30 13:56:49.833304 containerd[1581]: time="2025-01-30T13:56:49.831694954Z" level=info msg="StartContainer for \"77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96\" returns successfully" Jan 30 13:56:49.881360 kubelet[2727]: I0130 13:56:49.881001 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:56:49.883952 kubelet[2727]: E0130 13:56:49.883654 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:50.019426 containerd[1581]: time="2025-01-30T13:56:50.018339094Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:50.019426 containerd[1581]: time="2025-01-30T13:56:50.018927928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:56:50.029860 containerd[1581]: time="2025-01-30T13:56:50.027606446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 426.651121ms" Jan 30 13:56:50.029860 containerd[1581]: time="2025-01-30T13:56:50.027690267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:56:50.032656 containerd[1581]: time="2025-01-30T13:56:50.032128623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:56:50.041932 containerd[1581]: time="2025-01-30T13:56:50.041330557Z" level=info msg="CreateContainer within sandbox \"f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:56:50.104282 containerd[1581]: time="2025-01-30T13:56:50.103245203Z" level=info msg="CreateContainer within sandbox \"f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7ba11b9f4baaf6aa33138f2c65781d717bf33d6a1678b4f44857e806747d5371\"" Jan 30 13:56:50.109838 containerd[1581]: time="2025-01-30T13:56:50.108399718Z" level=info msg="StartContainer for \"7ba11b9f4baaf6aa33138f2c65781d717bf33d6a1678b4f44857e806747d5371\"" 
Jan 30 13:56:50.263505 containerd[1581]: time="2025-01-30T13:56:50.262230027Z" level=info msg="StartContainer for \"7ba11b9f4baaf6aa33138f2c65781d717bf33d6a1678b4f44857e806747d5371\" returns successfully" Jan 30 13:56:50.427380 kubelet[2727]: I0130 13:56:50.426002 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d4b699786-9bmxf" podStartSLOduration=24.650852591 podStartE2EDuration="30.425975455s" podCreationTimestamp="2025-01-30 13:56:20 +0000 UTC" firstStartedPulling="2025-01-30 13:56:43.823547384 +0000 UTC m=+45.087535761" lastFinishedPulling="2025-01-30 13:56:49.598670229 +0000 UTC m=+50.862658625" observedRunningTime="2025-01-30 13:56:50.424248422 +0000 UTC m=+51.688236818" watchObservedRunningTime="2025-01-30 13:56:50.425975455 +0000 UTC m=+51.689963851" Jan 30 13:56:50.435701 kubelet[2727]: E0130 13:56:50.435620 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:50.449251 kubelet[2727]: I0130 13:56:50.448808 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-99cfd4b69-blxcz" podStartSLOduration=25.256451343 podStartE2EDuration="29.448777052s" podCreationTimestamp="2025-01-30 13:56:21 +0000 UTC" firstStartedPulling="2025-01-30 13:56:45.838356264 +0000 UTC m=+47.102344647" lastFinishedPulling="2025-01-30 13:56:50.030681979 +0000 UTC m=+51.294670356" observedRunningTime="2025-01-30 13:56:50.445378768 +0000 UTC m=+51.709367161" watchObservedRunningTime="2025-01-30 13:56:50.448777052 +0000 UTC m=+51.712765447" Jan 30 13:56:50.521310 kernel: bpftool[5025]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:56:50.578639 systemd-journald[1139]: Under memory pressure, flushing caches. Jan 30 13:56:50.565726 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 13:56:50.565783 systemd-resolved[1468]: Flushed all caches. 
Jan 30 13:56:51.586528 systemd-networkd[1218]: vxlan.calico: Link UP Jan 30 13:56:51.586546 systemd-networkd[1218]: vxlan.calico: Gained carrier Jan 30 13:56:51.763199 containerd[1581]: time="2025-01-30T13:56:51.761683562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:51.766101 containerd[1581]: time="2025-01-30T13:56:51.766010665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:56:51.767237 containerd[1581]: time="2025-01-30T13:56:51.767200296Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:51.772405 containerd[1581]: time="2025-01-30T13:56:51.771132034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:51.772405 containerd[1581]: time="2025-01-30T13:56:51.771960960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.739738604s" Jan 30 13:56:51.772405 containerd[1581]: time="2025-01-30T13:56:51.772006290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:56:51.781199 containerd[1581]: time="2025-01-30T13:56:51.780372355Z" level=info msg="CreateContainer within sandbox \"5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:56:51.819198 containerd[1581]: time="2025-01-30T13:56:51.817731310Z" level=info msg="CreateContainer within sandbox \"5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3baf172167e3d4a732de053b9670c910ec12ad608930240ee21c51a403b9cf04\"" Jan 30 13:56:51.820095 containerd[1581]: time="2025-01-30T13:56:51.820024376Z" level=info msg="StartContainer for \"3baf172167e3d4a732de053b9670c910ec12ad608930240ee21c51a403b9cf04\"" Jan 30 13:56:52.111874 containerd[1581]: time="2025-01-30T13:56:52.111807743Z" level=info msg="StartContainer for \"3baf172167e3d4a732de053b9670c910ec12ad608930240ee21c51a403b9cf04\" returns successfully" Jan 30 13:56:52.121032 containerd[1581]: time="2025-01-30T13:56:52.120988539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:56:53.441521 systemd-networkd[1218]: vxlan.calico: Gained IPv6LL Jan 30 13:56:53.578601 containerd[1581]: time="2025-01-30T13:56:53.578525881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:53.579805 containerd[1581]: time="2025-01-30T13:56:53.579652451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:56:53.580947 containerd[1581]: time="2025-01-30T13:56:53.580586857Z" level=info msg="ImageCreate event 
name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:53.584950 containerd[1581]: time="2025-01-30T13:56:53.584884320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.463853206s" Jan 30 13:56:53.585205 containerd[1581]: time="2025-01-30T13:56:53.585157907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:56:53.585584 containerd[1581]: time="2025-01-30T13:56:53.585023141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:53.595398 containerd[1581]: time="2025-01-30T13:56:53.595342444Z" level=info msg="CreateContainer within sandbox \"5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:56:53.613445 containerd[1581]: time="2025-01-30T13:56:53.612592873Z" level=info msg="CreateContainer within sandbox \"5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"deee3aa8d680fdfb1695360deba0ead134c2c3203ffc34f797b329a74f54e3fe\"" Jan 30 13:56:53.618364 containerd[1581]: time="2025-01-30T13:56:53.616625372Z" level=info msg="StartContainer for \"deee3aa8d680fdfb1695360deba0ead134c2c3203ffc34f797b329a74f54e3fe\"" Jan 30 13:56:53.767295 containerd[1581]: time="2025-01-30T13:56:53.766113624Z" level=info msg="StartContainer for \"deee3aa8d680fdfb1695360deba0ead134c2c3203ffc34f797b329a74f54e3fe\" returns successfully" Jan 30 13:56:54.339985 kubelet[2727]: I0130 13:56:54.339836 2727 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:56:54.346869 kubelet[2727]: I0130 13:56:54.346804 2727 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:56:54.489183 kubelet[2727]: I0130 13:56:54.486735 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rzzqf" podStartSLOduration=28.229648802 podStartE2EDuration="34.486705461s" podCreationTimestamp="2025-01-30 13:56:20 +0000 UTC" firstStartedPulling="2025-01-30 13:56:47.330385644 +0000 UTC m=+48.594374018" lastFinishedPulling="2025-01-30 13:56:53.587442289 +0000 UTC m=+54.851430677" observedRunningTime="2025-01-30 13:56:54.485053919 +0000 UTC m=+55.749042318" watchObservedRunningTime="2025-01-30 13:56:54.486705461 +0000 UTC m=+55.750693939" Jan 30 13:56:54.745889 systemd[1]: Started sshd@8-146.190.136.39:22-147.75.109.163:57028.service - OpenSSH per-connection server daemon (147.75.109.163:57028). 
Jan 30 13:56:54.937194 sshd[5194]: Accepted publickey for core from 147.75.109.163 port 57028 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:54.941352 sshd[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:54.950372 systemd-logind[1558]: New session 9 of user core. Jan 30 13:56:54.954725 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:56:55.407052 sshd[5194]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:55.415395 systemd[1]: sshd@8-146.190.136.39:22-147.75.109.163:57028.service: Deactivated successfully. Jan 30 13:56:55.422319 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:56:55.424967 systemd-logind[1558]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:56:55.428249 systemd-logind[1558]: Removed session 9. Jan 30 13:56:56.515544 systemd-journald[1139]: Under memory pressure, flushing caches. Jan 30 13:56:56.513577 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 13:56:56.513638 systemd-resolved[1468]: Flushed all caches. Jan 30 13:56:57.603323 kubelet[2727]: E0130 13:56:57.602428 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:56:58.942253 containerd[1581]: time="2025-01-30T13:56:58.941667263Z" level=info msg="StopPodSandbox for \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\"" Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.041 [WARNING][5252] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0", GenerateName:"calico-kube-controllers-7d4b699786-", Namespace:"calico-system", SelfLink:"", UID:"948fd5df-85f6-4117-955d-ae954df34712", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4b699786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8", Pod:"calico-kube-controllers-7d4b699786-9bmxf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9718234dd52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.045 [INFO][5252] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.045 [INFO][5252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" iface="eth0" netns="" Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.045 [INFO][5252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.045 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.089 [INFO][5260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" HandleID="k8s-pod-network.29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.089 [INFO][5260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.089 [INFO][5260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.096 [WARNING][5260] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" HandleID="k8s-pod-network.29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.096 [INFO][5260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" HandleID="k8s-pod-network.29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.099 [INFO][5260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.108138 containerd[1581]: 2025-01-30 13:56:59.103 [INFO][5252] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:59.108993 containerd[1581]: time="2025-01-30T13:56:59.108257077Z" level=info msg="TearDown network for sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\" successfully" Jan 30 13:56:59.108993 containerd[1581]: time="2025-01-30T13:56:59.108297341Z" level=info msg="StopPodSandbox for \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\" returns successfully" Jan 30 13:56:59.119861 containerd[1581]: time="2025-01-30T13:56:59.119804389Z" level=info msg="RemovePodSandbox for \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\"" Jan 30 13:56:59.119861 containerd[1581]: time="2025-01-30T13:56:59.119865481Z" level=info msg="Forcibly stopping sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\"" Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.191 [WARNING][5279] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0", GenerateName:"calico-kube-controllers-7d4b699786-", Namespace:"calico-system", SelfLink:"", UID:"948fd5df-85f6-4117-955d-ae954df34712", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d4b699786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8", Pod:"calico-kube-controllers-7d4b699786-9bmxf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9718234dd52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.192 [INFO][5279] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.192 [INFO][5279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" iface="eth0" netns="" Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.192 [INFO][5279] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.192 [INFO][5279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.224 [INFO][5285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" HandleID="k8s-pod-network.29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.224 [INFO][5285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.224 [INFO][5285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.236 [WARNING][5285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" HandleID="k8s-pod-network.29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.236 [INFO][5285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" HandleID="k8s-pod-network.29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.239 [INFO][5285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.245736 containerd[1581]: 2025-01-30 13:56:59.242 [INFO][5279] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6" Jan 30 13:56:59.246856 containerd[1581]: time="2025-01-30T13:56:59.245840390Z" level=info msg="TearDown network for sandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\" successfully" Jan 30 13:56:59.288688 containerd[1581]: time="2025-01-30T13:56:59.288339055Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:59.288688 containerd[1581]: time="2025-01-30T13:56:59.288555400Z" level=info msg="RemovePodSandbox \"29a68ee13d4203b2b1789b6d2551028ce29b60b4841b4c586cf70c6d63fa81f6\" returns successfully" Jan 30 13:56:59.290604 containerd[1581]: time="2025-01-30T13:56:59.290258495Z" level=info msg="StopPodSandbox for \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\"" Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.354 [WARNING][5303] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"961e4391-08da-49eb-8e7d-aa735452853a", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb", Pod:"coredns-7db6d8ff4d-km89h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e1261f3141", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.355 [INFO][5303] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.355 [INFO][5303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" iface="eth0" netns="" Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.355 [INFO][5303] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.355 [INFO][5303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.394 [INFO][5309] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" HandleID="k8s-pod-network.e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.395 [INFO][5309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.395 [INFO][5309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.404 [WARNING][5309] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" HandleID="k8s-pod-network.e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.405 [INFO][5309] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" HandleID="k8s-pod-network.e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.407 [INFO][5309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.413720 containerd[1581]: 2025-01-30 13:56:59.410 [INFO][5303] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:59.415121 containerd[1581]: time="2025-01-30T13:56:59.413856667Z" level=info msg="TearDown network for sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\" successfully" Jan 30 13:56:59.415121 containerd[1581]: time="2025-01-30T13:56:59.413902346Z" level=info msg="StopPodSandbox for \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\" returns successfully" Jan 30 13:56:59.415121 containerd[1581]: time="2025-01-30T13:56:59.415085614Z" level=info msg="RemovePodSandbox for \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\"" Jan 30 13:56:59.415758 containerd[1581]: time="2025-01-30T13:56:59.415135313Z" level=info msg="Forcibly stopping sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\"" Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.479 [WARNING][5327] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"961e4391-08da-49eb-8e7d-aa735452853a", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"e894be7c4f05522e6f2df3cad17f84d4c4e287cc862fa4f35801d5e58f15a8fb", Pod:"coredns-7db6d8ff4d-km89h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0e1261f3141", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.480 [INFO][5327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.480 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" iface="eth0" netns="" Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.480 [INFO][5327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.480 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.522 [INFO][5333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" HandleID="k8s-pod-network.e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.522 [INFO][5333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.522 [INFO][5333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.532 [WARNING][5333] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" HandleID="k8s-pod-network.e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.532 [INFO][5333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" HandleID="k8s-pod-network.e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--km89h-eth0" Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.535 [INFO][5333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.542392 containerd[1581]: 2025-01-30 13:56:59.539 [INFO][5327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21" Jan 30 13:56:59.545271 containerd[1581]: time="2025-01-30T13:56:59.542383887Z" level=info msg="TearDown network for sandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\" successfully" Jan 30 13:56:59.550901 containerd[1581]: time="2025-01-30T13:56:59.550771754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:59.551090 containerd[1581]: time="2025-01-30T13:56:59.551002881Z" level=info msg="RemovePodSandbox \"e1218a94221e738cd86b54f7e25ac1e8875c313fea38897c7d68230ffe9ada21\" returns successfully" Jan 30 13:56:59.552189 containerd[1581]: time="2025-01-30T13:56:59.551884828Z" level=info msg="StopPodSandbox for \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\"" Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.663 [WARNING][5353] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0", GenerateName:"calico-apiserver-99cfd4b69-", Namespace:"calico-apiserver", SelfLink:"", UID:"97b02c81-6b32-4a65-8b9e-6d8426a65011", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99cfd4b69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318", Pod:"calico-apiserver-99cfd4b69-blxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73fb1be6f7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.663 [INFO][5353] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.663 [INFO][5353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" iface="eth0" netns="" Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.663 [INFO][5353] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.664 [INFO][5353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.701 [INFO][5359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" HandleID="k8s-pod-network.376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.701 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.701 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.712 [WARNING][5359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" HandleID="k8s-pod-network.376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.712 [INFO][5359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" HandleID="k8s-pod-network.376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.715 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.720587 containerd[1581]: 2025-01-30 13:56:59.718 [INFO][5353] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:59.721795 containerd[1581]: time="2025-01-30T13:56:59.720709519Z" level=info msg="TearDown network for sandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\" successfully" Jan 30 13:56:59.721795 containerd[1581]: time="2025-01-30T13:56:59.720752481Z" level=info msg="StopPodSandbox for \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\" returns successfully" Jan 30 13:56:59.721795 containerd[1581]: time="2025-01-30T13:56:59.721356063Z" level=info msg="RemovePodSandbox for \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\"" Jan 30 13:56:59.721795 containerd[1581]: time="2025-01-30T13:56:59.721391733Z" level=info msg="Forcibly stopping sandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\"" Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.787 [WARNING][5377] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0", GenerateName:"calico-apiserver-99cfd4b69-", Namespace:"calico-apiserver", SelfLink:"", UID:"97b02c81-6b32-4a65-8b9e-6d8426a65011", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99cfd4b69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"f0d0791965a8a8b4e002e0a6d25ff659e4cc5f7fe2f949f70c01c87de8bad318", Pod:"calico-apiserver-99cfd4b69-blxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73fb1be6f7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.788 [INFO][5377] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.788 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" iface="eth0" netns="" Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.788 [INFO][5377] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.788 [INFO][5377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.820 [INFO][5383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" HandleID="k8s-pod-network.376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.821 [INFO][5383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.821 [INFO][5383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.829 [WARNING][5383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" HandleID="k8s-pod-network.376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.829 [INFO][5383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" HandleID="k8s-pod-network.376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--blxcz-eth0" Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.834 [INFO][5383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.842456 containerd[1581]: 2025-01-30 13:56:59.837 [INFO][5377] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58" Jan 30 13:56:59.842456 containerd[1581]: time="2025-01-30T13:56:59.841493699Z" level=info msg="TearDown network for sandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\" successfully" Jan 30 13:56:59.847739 containerd[1581]: time="2025-01-30T13:56:59.847692091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:59.847880 containerd[1581]: time="2025-01-30T13:56:59.847787004Z" level=info msg="RemovePodSandbox \"376b55b345083f6b7ea6597ee236c7adf5e953d2a03b49a383ab3d73cdd12c58\" returns successfully" Jan 30 13:56:59.848678 containerd[1581]: time="2025-01-30T13:56:59.848636727Z" level=info msg="StopPodSandbox for \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\"" Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.909 [WARNING][5402] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"823691bc-ea65-4b7b-a6e1-f21ba2308d6d", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c", Pod:"csi-node-driver-rzzqf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c990866613", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.910 [INFO][5402] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.910 [INFO][5402] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" iface="eth0" netns="" Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.911 [INFO][5402] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.911 [INFO][5402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.946 [INFO][5408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" HandleID="k8s-pod-network.697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.947 [INFO][5408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.947 [INFO][5408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.957 [WARNING][5408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" HandleID="k8s-pod-network.697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.957 [INFO][5408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" HandleID="k8s-pod-network.697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.960 [INFO][5408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.966599 containerd[1581]: 2025-01-30 13:56:59.963 [INFO][5402] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:56:59.967968 containerd[1581]: time="2025-01-30T13:56:59.966647859Z" level=info msg="TearDown network for sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\" successfully" Jan 30 13:56:59.967968 containerd[1581]: time="2025-01-30T13:56:59.966685292Z" level=info msg="StopPodSandbox for \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\" returns successfully" Jan 30 13:56:59.967968 containerd[1581]: time="2025-01-30T13:56:59.967505673Z" level=info msg="RemovePodSandbox for \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\"" Jan 30 13:56:59.967968 containerd[1581]: time="2025-01-30T13:56:59.967838823Z" level=info msg="Forcibly stopping sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\"" Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.045 [WARNING][5426] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"823691bc-ea65-4b7b-a6e1-f21ba2308d6d", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"5dc7f3f160c26912bb278f9ced394aac887dc559e8ddc9de59193307498db27c", Pod:"csi-node-driver-rzzqf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c990866613", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.047 [INFO][5426] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.047 [INFO][5426] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" iface="eth0" netns="" Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.047 [INFO][5426] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.047 [INFO][5426] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.087 [INFO][5433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" HandleID="k8s-pod-network.697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.087 [INFO][5433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.087 [INFO][5433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.098 [WARNING][5433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" HandleID="k8s-pod-network.697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.099 [INFO][5433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" HandleID="k8s-pod-network.697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Workload="ci--4081.3.0--8--baee985ae6-k8s-csi--node--driver--rzzqf-eth0" Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.102 [INFO][5433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:00.109356 containerd[1581]: 2025-01-30 13:57:00.105 [INFO][5426] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226" Jan 30 13:57:00.109356 containerd[1581]: time="2025-01-30T13:57:00.108554902Z" level=info msg="TearDown network for sandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\" successfully" Jan 30 13:57:00.114281 containerd[1581]: time="2025-01-30T13:57:00.114221119Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:57:00.114664 containerd[1581]: time="2025-01-30T13:57:00.114629368Z" level=info msg="RemovePodSandbox \"697dd5d0df1e90cdbf2285d6e2be9260921438f29c3ed5b7c308c549a998a226\" returns successfully" Jan 30 13:57:00.115751 containerd[1581]: time="2025-01-30T13:57:00.115703604Z" level=info msg="StopPodSandbox for \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\"" Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.187 [WARNING][5451] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0", GenerateName:"calico-apiserver-99cfd4b69-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d139b6a-d18c-4490-b775-b61437104603", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99cfd4b69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6", Pod:"calico-apiserver-99cfd4b69-99xnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib909afcdb2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.188 [INFO][5451] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.188 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" iface="eth0" netns="" Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.188 [INFO][5451] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.188 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.219 [INFO][5458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" HandleID="k8s-pod-network.09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.219 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.219 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.229 [WARNING][5458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" HandleID="k8s-pod-network.09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.229 [INFO][5458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" HandleID="k8s-pod-network.09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.232 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:00.236048 containerd[1581]: 2025-01-30 13:57:00.234 [INFO][5451] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:57:00.237497 containerd[1581]: time="2025-01-30T13:57:00.236153780Z" level=info msg="TearDown network for sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\" successfully" Jan 30 13:57:00.237497 containerd[1581]: time="2025-01-30T13:57:00.236222109Z" level=info msg="StopPodSandbox for \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\" returns successfully" Jan 30 13:57:00.237497 containerd[1581]: time="2025-01-30T13:57:00.237141521Z" level=info msg="RemovePodSandbox for \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\"" Jan 30 13:57:00.237497 containerd[1581]: time="2025-01-30T13:57:00.237261106Z" level=info msg="Forcibly stopping sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\"" Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.306 [WARNING][5476] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0", GenerateName:"calico-apiserver-99cfd4b69-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d139b6a-d18c-4490-b775-b61437104603", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"99cfd4b69", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"444eedba6d21a8f8b5b3abb366a068d78e1e34e03e91e2a7cf4b68b1727d7cd6", Pod:"calico-apiserver-99cfd4b69-99xnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib909afcdb2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.306 [INFO][5476] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.306 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" iface="eth0" netns="" Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.306 [INFO][5476] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.306 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.349 [INFO][5482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" HandleID="k8s-pod-network.09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.349 [INFO][5482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.349 [INFO][5482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.358 [WARNING][5482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" HandleID="k8s-pod-network.09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.358 [INFO][5482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" HandleID="k8s-pod-network.09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--apiserver--99cfd4b69--99xnz-eth0" Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.361 [INFO][5482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:00.366799 containerd[1581]: 2025-01-30 13:57:00.363 [INFO][5476] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d" Jan 30 13:57:00.367984 containerd[1581]: time="2025-01-30T13:57:00.366773109Z" level=info msg="TearDown network for sandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\" successfully" Jan 30 13:57:00.375346 containerd[1581]: time="2025-01-30T13:57:00.375270702Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:57:00.376182 containerd[1581]: time="2025-01-30T13:57:00.375659366Z" level=info msg="RemovePodSandbox \"09675ce253455c9d064d6c8a3d6f9332be8be5b230f7eb3888067ee67e575a5d\" returns successfully" Jan 30 13:57:00.378408 containerd[1581]: time="2025-01-30T13:57:00.378350002Z" level=info msg="StopPodSandbox for \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\"" Jan 30 13:57:00.421636 systemd[1]: Started sshd@9-146.190.136.39:22-147.75.109.163:43632.service - OpenSSH per-connection server daemon (147.75.109.163:43632). Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.456 [WARNING][5501] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"59569bfa-ceee-4967-bb23-bc58916a113d", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6", Pod:"coredns-7db6d8ff4d-fnpc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali955eba4949f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.460 [INFO][5501] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.460 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" iface="eth0" netns="" Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.460 [INFO][5501] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.460 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.498 [INFO][5508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" HandleID="k8s-pod-network.f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.500 [INFO][5508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.501 [INFO][5508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.511 [WARNING][5508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" HandleID="k8s-pod-network.f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.511 [INFO][5508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" HandleID="k8s-pod-network.f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.520 [INFO][5508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:00.530712 containerd[1581]: 2025-01-30 13:57:00.527 [INFO][5501] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:57:00.531655 containerd[1581]: time="2025-01-30T13:57:00.530762152Z" level=info msg="TearDown network for sandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\" successfully" Jan 30 13:57:00.531655 containerd[1581]: time="2025-01-30T13:57:00.530802700Z" level=info msg="StopPodSandbox for \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\" returns successfully" Jan 30 13:57:00.533751 containerd[1581]: time="2025-01-30T13:57:00.533306630Z" level=info msg="RemovePodSandbox for \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\"" Jan 30 13:57:00.533751 containerd[1581]: time="2025-01-30T13:57:00.533369557Z" level=info msg="Forcibly stopping sandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\"" Jan 30 13:57:00.546103 sshd[5506]: Accepted publickey for core from 147.75.109.163 port 43632 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:00.550516 sshd[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:00.560098 systemd-logind[1558]: New session 10 of user core. Jan 30 13:57:00.565799 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:57:00.657269 systemd[1]: run-containerd-runc-k8s.io-77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96-runc.g5RTMV.mount: Deactivated successfully. Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.612 [WARNING][5527] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"59569bfa-ceee-4967-bb23-bc58916a113d", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"2c19d7fe93f20650aec8b2986da913984879052e195ecad3b335a8136936f1a6", Pod:"coredns-7db6d8ff4d-fnpc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali955eba4949f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.613 [INFO][5527] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.613 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" iface="eth0" netns="" Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.613 [INFO][5527] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.613 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.694 [INFO][5536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" HandleID="k8s-pod-network.f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.695 [INFO][5536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.695 [INFO][5536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.708 [WARNING][5536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" HandleID="k8s-pod-network.f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.708 [INFO][5536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" HandleID="k8s-pod-network.f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Workload="ci--4081.3.0--8--baee985ae6-k8s-coredns--7db6d8ff4d--fnpc6-eth0" Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.711 [INFO][5536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:00.725765 containerd[1581]: 2025-01-30 13:57:00.721 [INFO][5527] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18" Jan 30 13:57:00.727211 containerd[1581]: time="2025-01-30T13:57:00.726595418Z" level=info msg="TearDown network for sandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\" successfully" Jan 30 13:57:00.730986 containerd[1581]: time="2025-01-30T13:57:00.730936503Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:57:00.731470 containerd[1581]: time="2025-01-30T13:57:00.731103045Z" level=info msg="RemovePodSandbox \"f43b3e4bf21496a1500140f00cbdf91cd5ed3789e7c44dd4079c60c4c079fa18\" returns successfully" Jan 30 13:57:00.983329 sshd[5506]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:00.990743 systemd[1]: sshd@9-146.190.136.39:22-147.75.109.163:43632.service: Deactivated successfully. Jan 30 13:57:00.995464 systemd-logind[1558]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:57:00.995512 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:57:00.999826 systemd-logind[1558]: Removed session 10. Jan 30 13:57:01.862082 kubelet[2727]: I0130 13:57:01.861523 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:57:05.994644 systemd[1]: Started sshd@10-146.190.136.39:22-147.75.109.163:43636.service - OpenSSH per-connection server daemon (147.75.109.163:43636). Jan 30 13:57:06.084456 sshd[5601]: Accepted publickey for core from 147.75.109.163 port 43636 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:06.086897 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:06.094718 systemd-logind[1558]: New session 11 of user core. Jan 30 13:57:06.101746 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:57:06.334012 sshd[5601]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:06.345359 systemd[1]: Started sshd@11-146.190.136.39:22-147.75.109.163:43642.service - OpenSSH per-connection server daemon (147.75.109.163:43642). Jan 30 13:57:06.346819 systemd[1]: sshd@10-146.190.136.39:22-147.75.109.163:43636.service: Deactivated successfully. Jan 30 13:57:06.355512 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 30 13:57:06.359299 systemd-logind[1558]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:57:06.362249 systemd-logind[1558]: Removed session 11. Jan 30 13:57:06.412483 sshd[5614]: Accepted publickey for core from 147.75.109.163 port 43642 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:06.414565 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:06.421412 systemd-logind[1558]: New session 12 of user core. Jan 30 13:57:06.427742 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:57:06.760022 systemd[1]: Started sshd@12-146.190.136.39:22-147.75.109.163:43654.service - OpenSSH per-connection server daemon (147.75.109.163:43654). Jan 30 13:57:06.769314 sshd[5614]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:06.792887 systemd[1]: sshd@11-146.190.136.39:22-147.75.109.163:43642.service: Deactivated successfully. Jan 30 13:57:06.815159 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:57:06.823712 systemd-logind[1558]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:57:06.834156 systemd-logind[1558]: Removed session 12. Jan 30 13:57:06.899584 sshd[5626]: Accepted publickey for core from 147.75.109.163 port 43654 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:06.902341 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:06.909987 systemd-logind[1558]: New session 13 of user core. Jan 30 13:57:06.918688 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:57:07.111090 sshd[5626]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:07.115420 systemd-logind[1558]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:57:07.116801 systemd[1]: sshd@12-146.190.136.39:22-147.75.109.163:43654.service: Deactivated successfully. Jan 30 13:57:07.123195 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:57:07.125089 systemd-logind[1558]: Removed session 13. Jan 30 13:57:12.122287 systemd[1]: Started sshd@13-146.190.136.39:22-147.75.109.163:49426.service - OpenSSH per-connection server daemon (147.75.109.163:49426). Jan 30 13:57:12.187668 sshd[5647]: Accepted publickey for core from 147.75.109.163 port 49426 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:12.190711 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:12.198119 systemd-logind[1558]: New session 14 of user core. Jan 30 13:57:12.204765 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:57:12.376332 sshd[5647]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:12.381726 systemd[1]: sshd@13-146.190.136.39:22-147.75.109.163:49426.service: Deactivated successfully. Jan 30 13:57:12.387805 systemd-logind[1558]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:57:12.388596 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:57:12.390497 systemd-logind[1558]: Removed session 14. Jan 30 13:57:13.960305 kubelet[2727]: E0130 13:57:13.959840 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:17.391779 systemd[1]: Started sshd@14-146.190.136.39:22-147.75.109.163:36356.service - OpenSSH per-connection server daemon (147.75.109.163:36356). 
Jan 30 13:57:17.489002 sshd[5671]: Accepted publickey for core from 147.75.109.163 port 36356 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:17.493717 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:17.505260 systemd-logind[1558]: New session 15 of user core. Jan 30 13:57:17.511677 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:57:17.685915 sshd[5671]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:17.689973 systemd[1]: sshd@14-146.190.136.39:22-147.75.109.163:36356.service: Deactivated successfully. Jan 30 13:57:17.696436 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:57:17.697806 systemd-logind[1558]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:57:17.699428 systemd-logind[1558]: Removed session 15. Jan 30 13:57:20.962877 kubelet[2727]: E0130 13:57:20.962809 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:21.141692 containerd[1581]: time="2025-01-30T13:57:21.141557314Z" level=info msg="StopContainer for \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\" with timeout 300 (s)" Jan 30 13:57:21.150044 containerd[1581]: time="2025-01-30T13:57:21.149988470Z" level=info msg="Stop container \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\" with signal terminated" Jan 30 13:57:21.353771 containerd[1581]: time="2025-01-30T13:57:21.353562164Z" level=info msg="StopContainer for \"77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96\" with timeout 30 (s)" Jan 30 13:57:21.356404 containerd[1581]: time="2025-01-30T13:57:21.356348713Z" level=info msg="Stop container \"77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96\" with signal terminated" Jan 30 13:57:21.467056 containerd[1581]: time="2025-01-30T13:57:21.466784944Z" level=info msg="shim disconnected" id=77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96 namespace=k8s.io Jan 30 13:57:21.467056 containerd[1581]: time="2025-01-30T13:57:21.466862005Z" level=warning msg="cleaning up after shim disconnected" id=77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96 namespace=k8s.io Jan 30 13:57:21.467056 containerd[1581]: time="2025-01-30T13:57:21.466872299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:21.470025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96-rootfs.mount: Deactivated successfully. Jan 30 13:57:21.512372 containerd[1581]: time="2025-01-30T13:57:21.511965918Z" level=info msg="StopContainer for \"77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96\" returns successfully" Jan 30 13:57:21.513998 containerd[1581]: time="2025-01-30T13:57:21.513487381Z" level=info msg="StopPodSandbox for \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\"" Jan 30 13:57:21.514249 containerd[1581]: time="2025-01-30T13:57:21.514215534Z" level=info msg="Container to stop \"77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:57:21.525504 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8-shm.mount: Deactivated successfully. 
Jan 30 13:57:21.587311 containerd[1581]: time="2025-01-30T13:57:21.585915257Z" level=info msg="shim disconnected" id=a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8 namespace=k8s.io Jan 30 13:57:21.587311 containerd[1581]: time="2025-01-30T13:57:21.586013293Z" level=warning msg="cleaning up after shim disconnected" id=a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8 namespace=k8s.io Jan 30 13:57:21.587311 containerd[1581]: time="2025-01-30T13:57:21.586030044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:21.593938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8-rootfs.mount: Deactivated successfully. Jan 30 13:57:21.795248 systemd-networkd[1218]: cali9718234dd52: Link DOWN Jan 30 13:57:21.795272 systemd-networkd[1218]: cali9718234dd52: Lost carrier Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.789 [INFO][5771] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.789 [INFO][5771] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" iface="eth0" netns="/var/run/netns/cni-73e3f908-1df2-d783-5ffa-6884a1439d1e" Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.790 [INFO][5771] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" iface="eth0" netns="/var/run/netns/cni-73e3f908-1df2-d783-5ffa-6884a1439d1e" Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.803 [INFO][5771] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" after=13.621313ms iface="eth0" netns="/var/run/netns/cni-73e3f908-1df2-d783-5ffa-6884a1439d1e" Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.803 [INFO][5771] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.803 [INFO][5771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.876 [INFO][5779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.876 [INFO][5779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.876 [INFO][5779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.972 [INFO][5779] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.976 [INFO][5779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.981 [INFO][5779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:21.993124 containerd[1581]: 2025-01-30 13:57:21.988 [INFO][5771] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:57:21.999205 containerd[1581]: time="2025-01-30T13:57:21.996126627Z" level=info msg="TearDown network for sandbox \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\" successfully" Jan 30 13:57:21.999205 containerd[1581]: time="2025-01-30T13:57:21.996222207Z" level=info msg="StopPodSandbox for \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\" returns successfully" Jan 30 13:57:22.008566 systemd[1]: run-netns-cni\x2d73e3f908\x2d1df2\x2dd783\x2d5ffa\x2d6884a1439d1e.mount: Deactivated successfully. Jan 30 13:57:22.218560 kubelet[2727]: I0130 13:57:22.216734 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzvch\" (UniqueName: \"kubernetes.io/projected/948fd5df-85f6-4117-955d-ae954df34712-kube-api-access-nzvch\") pod \"948fd5df-85f6-4117-955d-ae954df34712\" (UID: \"948fd5df-85f6-4117-955d-ae954df34712\") " Jan 30 13:57:22.219293 kubelet[2727]: I0130 13:57:22.218653 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/948fd5df-85f6-4117-955d-ae954df34712-tigera-ca-bundle\") pod \"948fd5df-85f6-4117-955d-ae954df34712\" (UID: \"948fd5df-85f6-4117-955d-ae954df34712\") " Jan 30 13:57:22.258104 kubelet[2727]: I0130 13:57:22.256997 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948fd5df-85f6-4117-955d-ae954df34712-kube-api-access-nzvch" (OuterVolumeSpecName: "kube-api-access-nzvch") pod "948fd5df-85f6-4117-955d-ae954df34712" (UID: "948fd5df-85f6-4117-955d-ae954df34712"). InnerVolumeSpecName "kube-api-access-nzvch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:57:22.265091 systemd[1]: var-lib-kubelet-pods-948fd5df\x2d85f6\x2d4117\x2d955d\x2dae954df34712-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnzvch.mount: Deactivated successfully. Jan 30 13:57:22.300729 kubelet[2727]: I0130 13:57:22.300616 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/948fd5df-85f6-4117-955d-ae954df34712-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "948fd5df-85f6-4117-955d-ae954df34712" (UID: "948fd5df-85f6-4117-955d-ae954df34712"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:57:22.319107 kubelet[2727]: I0130 13:57:22.319032 2727 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/948fd5df-85f6-4117-955d-ae954df34712-tigera-ca-bundle\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:22.319107 kubelet[2727]: I0130 13:57:22.319085 2727 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nzvch\" (UniqueName: \"kubernetes.io/projected/948fd5df-85f6-4117-955d-ae954df34712-kube-api-access-nzvch\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:22.471811 systemd[1]: var-lib-kubelet-pods-948fd5df\x2d85f6\x2d4117\x2d955d\x2dae954df34712-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 30 13:57:22.630952 kubelet[2727]: I0130 13:57:22.626236 2727 scope.go:117] "RemoveContainer" containerID="77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96" Jan 30 13:57:22.634217 containerd[1581]: time="2025-01-30T13:57:22.633073835Z" level=info msg="RemoveContainer for \"77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96\"" Jan 30 13:57:22.646815 containerd[1581]: time="2025-01-30T13:57:22.644643448Z" level=info msg="RemoveContainer for \"77569c78679f550be4ddacf62c9622ffb7d43139455e724546a4f33997d22b96\" returns successfully" Jan 30 13:57:22.702759 systemd[1]: Started sshd@15-146.190.136.39:22-147.75.109.163:36362.service - OpenSSH per-connection server daemon (147.75.109.163:36362). Jan 30 13:57:22.765533 kubelet[2727]: I0130 13:57:22.764814 2727 topology_manager.go:215] "Topology Admit Handler" podUID="28190bf2-2c0b-423c-b70c-215d8388a88b" podNamespace="calico-system" podName="calico-kube-controllers-7df656fc87-74c27" Jan 30 13:57:22.789664 kubelet[2727]: E0130 13:57:22.789591 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="948fd5df-85f6-4117-955d-ae954df34712" containerName="calico-kube-controllers" Jan 30 13:57:22.789889 kubelet[2727]: I0130 13:57:22.789767 2727 memory_manager.go:354] "RemoveStaleState removing state" podUID="948fd5df-85f6-4117-955d-ae954df34712" containerName="calico-kube-controllers" Jan 30 13:57:22.824246 kubelet[2727]: I0130 13:57:22.822498 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc5gh\" (UniqueName: \"kubernetes.io/projected/28190bf2-2c0b-423c-b70c-215d8388a88b-kube-api-access-zc5gh\") pod \"calico-kube-controllers-7df656fc87-74c27\" (UID: \"28190bf2-2c0b-423c-b70c-215d8388a88b\") " pod="calico-system/calico-kube-controllers-7df656fc87-74c27" Jan 30 13:57:22.824246 kubelet[2727]: I0130 13:57:22.822570 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28190bf2-2c0b-423c-b70c-215d8388a88b-tigera-ca-bundle\") pod \"calico-kube-controllers-7df656fc87-74c27\" (UID: \"28190bf2-2c0b-423c-b70c-215d8388a88b\") " pod="calico-system/calico-kube-controllers-7df656fc87-74c27" Jan 30 13:57:22.884350 sshd[5799]: Accepted publickey for core from 147.75.109.163 port 36362 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:22.888530 sshd[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:22.895866 systemd-logind[1558]: New session 16 of user core. 
Jan 30 13:57:22.900702 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:57:22.963258 kubelet[2727]: I0130 13:57:22.963072 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948fd5df-85f6-4117-955d-ae954df34712" path="/var/lib/kubelet/pods/948fd5df-85f6-4117-955d-ae954df34712/volumes" Jan 30 13:57:23.118865 containerd[1581]: time="2025-01-30T13:57:23.118693881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df656fc87-74c27,Uid:28190bf2-2c0b-423c-b70c-215d8388a88b,Namespace:calico-system,Attempt:0,}" Jan 30 13:57:23.215586 sshd[5799]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:23.238977 systemd[1]: sshd@15-146.190.136.39:22-147.75.109.163:36362.service: Deactivated successfully. Jan 30 13:57:23.251573 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:57:23.254789 systemd-logind[1558]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:57:23.262542 systemd-logind[1558]: Removed session 16. Jan 30 13:57:23.467844 systemd-networkd[1218]: calic3a352cfbb0: Link UP Jan 30 13:57:23.472855 systemd-networkd[1218]: calic3a352cfbb0: Gained carrier Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.249 [INFO][5811] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0 calico-kube-controllers-7df656fc87- calico-system 28190bf2-2c0b-423c-b70c-215d8388a88b 1291 0 2025-01-30 13:57:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7df656fc87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-8-baee985ae6 calico-kube-controllers-7df656fc87-74c27 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic3a352cfbb0 [] []}} ContainerID="b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" Namespace="calico-system" Pod="calico-kube-controllers-7df656fc87-74c27" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.250 [INFO][5811] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" Namespace="calico-system" Pod="calico-kube-controllers-7df656fc87-74c27" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.347 [INFO][5830] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" HandleID="k8s-pod-network.b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.378 [INFO][5830] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" HandleID="k8s-pod-network.b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003057a0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-8-baee985ae6", "pod":"calico-kube-controllers-7df656fc87-74c27", "timestamp":"2025-01-30 13:57:23.347758174 +0000 UTC"}, Hostname:"ci-4081.3.0-8-baee985ae6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.378 [INFO][5830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.379 [INFO][5830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.379 [INFO][5830] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-8-baee985ae6' Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.383 [INFO][5830] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.391 [INFO][5830] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-8-baee985ae6" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.400 [INFO][5830] ipam/ipam.go 489: Trying affinity for 192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.405 [INFO][5830] ipam/ipam.go 155: Attempting to load block cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.412 [INFO][5830] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.9.64/26 host="ci-4081.3.0-8-baee985ae6" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.412 [INFO][5830] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.9.64/26 handle="k8s-pod-network.b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.415 [INFO][5830] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92 Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.426 [INFO][5830] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.9.64/26 handle="k8s-pod-network.b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.446 [INFO][5830] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.9.71/26] block=192.168.9.64/26 handle="k8s-pod-network.b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.446 [INFO][5830] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.9.71/26] handle="k8s-pod-network.b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" host="ci-4081.3.0-8-baee985ae6" Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.446 [INFO][5830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:57:23.507808 containerd[1581]: 2025-01-30 13:57:23.446 [INFO][5830] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.9.71/26] IPv6=[] ContainerID="b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" HandleID="k8s-pod-network.b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0" Jan 30 13:57:23.508794 containerd[1581]: 2025-01-30 13:57:23.453 [INFO][5811] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" Namespace="calico-system" Pod="calico-kube-controllers-7df656fc87-74c27" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0", GenerateName:"calico-kube-controllers-7df656fc87-", Namespace:"calico-system", SelfLink:"", UID:"28190bf2-2c0b-423c-b70c-215d8388a88b", ResourceVersion:"1291", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7df656fc87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"", Pod:"calico-kube-controllers-7df656fc87-74c27", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3a352cfbb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:23.508794 containerd[1581]: 2025-01-30 13:57:23.454 [INFO][5811] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.9.71/32] ContainerID="b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" Namespace="calico-system" Pod="calico-kube-controllers-7df656fc87-74c27" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0" Jan 30 13:57:23.508794 containerd[1581]: 2025-01-30 13:57:23.454 [INFO][5811] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3a352cfbb0 ContainerID="b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" Namespace="calico-system" Pod="calico-kube-controllers-7df656fc87-74c27" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0" Jan 30 13:57:23.508794 containerd[1581]: 2025-01-30 13:57:23.466 [INFO][5811] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" Namespace="calico-system" Pod="calico-kube-controllers-7df656fc87-74c27" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0" Jan 30 13:57:23.508794 
containerd[1581]: 2025-01-30 13:57:23.467 [INFO][5811] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" Namespace="calico-system" Pod="calico-kube-controllers-7df656fc87-74c27" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0", GenerateName:"calico-kube-controllers-7df656fc87-", Namespace:"calico-system", SelfLink:"", UID:"28190bf2-2c0b-423c-b70c-215d8388a88b", ResourceVersion:"1291", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7df656fc87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-8-baee985ae6", ContainerID:"b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92", Pod:"calico-kube-controllers-7df656fc87-74c27", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3a352cfbb0", MAC:"1a:98:72:e9:ac:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:23.508794 containerd[1581]: 2025-01-30 13:57:23.488 [INFO][5811] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92" Namespace="calico-system" Pod="calico-kube-controllers-7df656fc87-74c27" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7df656fc87--74c27-eth0" Jan 30 13:57:23.586391 containerd[1581]: time="2025-01-30T13:57:23.585913526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:23.586391 containerd[1581]: time="2025-01-30T13:57:23.586054233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:23.586391 containerd[1581]: time="2025-01-30T13:57:23.586081165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:23.587545 containerd[1581]: time="2025-01-30T13:57:23.586303314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:23.717865 containerd[1581]: time="2025-01-30T13:57:23.717817033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7df656fc87-74c27,Uid:28190bf2-2c0b-423c-b70c-215d8388a88b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92\"" Jan 30 13:57:23.748971 containerd[1581]: time="2025-01-30T13:57:23.748564817Z" level=info msg="CreateContainer within sandbox \"b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:57:23.782322 containerd[1581]: time="2025-01-30T13:57:23.782260548Z" level=info msg="CreateContainer within sandbox \"b0b3c9da44cb255b3dd0e6252fdb9f33e90dfbc5a9d9ef46c6c65486f4776d92\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9e1b47f4ef3677d212d99b12fb1976011432dbc9cb539fff7bc811c841d2482d\"" Jan 30 13:57:23.785094 containerd[1581]: time="2025-01-30T13:57:23.784996302Z" level=info msg="StartContainer for \"9e1b47f4ef3677d212d99b12fb1976011432dbc9cb539fff7bc811c841d2482d\"" Jan 30 13:57:23.889876 containerd[1581]: time="2025-01-30T13:57:23.889817266Z" level=info msg="StartContainer for \"9e1b47f4ef3677d212d99b12fb1976011432dbc9cb539fff7bc811c841d2482d\" returns successfully" Jan 30 13:57:24.704627 kubelet[2727]: I0130 13:57:24.700675 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7df656fc87-74c27" podStartSLOduration=2.700598277 podStartE2EDuration="2.700598277s" podCreationTimestamp="2025-01-30 13:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:57:24.69528363 +0000 UTC m=+85.959272028" watchObservedRunningTime="2025-01-30 13:57:24.700598277 +0000 UTC m=+85.964586678" Jan 30 13:57:24.960887 kubelet[2727]: E0130 13:57:24.959702 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:25.441812 systemd-networkd[1218]: calic3a352cfbb0: Gained IPv6LL Jan 30 13:57:25.884438 containerd[1581]: time="2025-01-30T13:57:25.884217717Z" level=info msg="shim disconnected" id=ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07 namespace=k8s.io Jan 30 13:57:25.884438 containerd[1581]: time="2025-01-30T13:57:25.884303963Z" level=warning msg="cleaning up after shim disconnected" id=ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07 namespace=k8s.io Jan 30 13:57:25.884438 containerd[1581]: time="2025-01-30T13:57:25.884317535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:25.893661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07-rootfs.mount: Deactivated successfully. 
Jan 30 13:57:25.942144 containerd[1581]: time="2025-01-30T13:57:25.942083739Z" level=info msg="StopContainer for \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\" returns successfully" Jan 30 13:57:25.943243 containerd[1581]: time="2025-01-30T13:57:25.942934310Z" level=info msg="StopPodSandbox for \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\"" Jan 30 13:57:25.943426 containerd[1581]: time="2025-01-30T13:57:25.943390047Z" level=info msg="Container to stop \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:57:25.961340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e-shm.mount: Deactivated successfully. Jan 30 13:57:26.048522 containerd[1581]: time="2025-01-30T13:57:26.048371742Z" level=info msg="shim disconnected" id=3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e namespace=k8s.io Jan 30 13:57:26.048522 containerd[1581]: time="2025-01-30T13:57:26.048461809Z" level=warning msg="cleaning up after shim disconnected" id=3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e namespace=k8s.io Jan 30 13:57:26.048522 containerd[1581]: time="2025-01-30T13:57:26.048475666Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:26.060744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e-rootfs.mount: Deactivated successfully. Jan 30 13:57:26.083207 containerd[1581]: time="2025-01-30T13:57:26.083017732Z" level=info msg="TearDown network for sandbox \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\" successfully" Jan 30 13:57:26.083207 containerd[1581]: time="2025-01-30T13:57:26.083067541Z" level=info msg="StopPodSandbox for \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\" returns successfully" Jan 30 13:57:26.160062 kubelet[2727]: I0130 13:57:26.158341 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f8ee8baf-fdef-4896-9369-b72a1778c36a-typha-certs\") pod \"f8ee8baf-fdef-4896-9369-b72a1778c36a\" (UID: \"f8ee8baf-fdef-4896-9369-b72a1778c36a\") " Jan 30 13:57:26.160062 kubelet[2727]: I0130 13:57:26.158426 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8ee8baf-fdef-4896-9369-b72a1778c36a-tigera-ca-bundle\") pod \"f8ee8baf-fdef-4896-9369-b72a1778c36a\" (UID: \"f8ee8baf-fdef-4896-9369-b72a1778c36a\") " Jan 30 13:57:26.160062 kubelet[2727]: I0130 13:57:26.158467 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcvn2\" (UniqueName: \"kubernetes.io/projected/f8ee8baf-fdef-4896-9369-b72a1778c36a-kube-api-access-wcvn2\") pod \"f8ee8baf-fdef-4896-9369-b72a1778c36a\" (UID: \"f8ee8baf-fdef-4896-9369-b72a1778c36a\") " Jan 30 13:57:26.179816 kubelet[2727]: I0130 13:57:26.178191 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8ee8baf-fdef-4896-9369-b72a1778c36a-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "f8ee8baf-fdef-4896-9369-b72a1778c36a" (UID: "f8ee8baf-fdef-4896-9369-b72a1778c36a"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:57:26.187649 systemd[1]: var-lib-kubelet-pods-f8ee8baf\x2dfdef\x2d4896\x2d9369\x2db72a1778c36a-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jan 30 13:57:26.189631 kubelet[2727]: I0130 13:57:26.189530 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8ee8baf-fdef-4896-9369-b72a1778c36a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f8ee8baf-fdef-4896-9369-b72a1778c36a" (UID: "f8ee8baf-fdef-4896-9369-b72a1778c36a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:57:26.190437 kubelet[2727]: I0130 13:57:26.190358 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ee8baf-fdef-4896-9369-b72a1778c36a-kube-api-access-wcvn2" (OuterVolumeSpecName: "kube-api-access-wcvn2") pod "f8ee8baf-fdef-4896-9369-b72a1778c36a" (UID: "f8ee8baf-fdef-4896-9369-b72a1778c36a"). InnerVolumeSpecName "kube-api-access-wcvn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:57:26.200699 systemd[1]: var-lib-kubelet-pods-f8ee8baf\x2dfdef\x2d4896\x2d9369\x2db72a1778c36a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 30 13:57:26.202341 systemd[1]: var-lib-kubelet-pods-f8ee8baf\x2dfdef\x2d4896\x2d9369\x2db72a1778c36a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwcvn2.mount: Deactivated successfully. Jan 30 13:57:26.259229 kubelet[2727]: I0130 13:57:26.259135 2727 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f8ee8baf-fdef-4896-9369-b72a1778c36a-typha-certs\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:26.259229 kubelet[2727]: I0130 13:57:26.259229 2727 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wcvn2\" (UniqueName: \"kubernetes.io/projected/f8ee8baf-fdef-4896-9369-b72a1778c36a-kube-api-access-wcvn2\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:26.259485 kubelet[2727]: I0130 13:57:26.259256 2727 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8ee8baf-fdef-4896-9369-b72a1778c36a-tigera-ca-bundle\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:26.674661 kubelet[2727]: I0130 13:57:26.674093 2727 scope.go:117] "RemoveContainer" containerID="ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07" Jan 30 13:57:26.706223 containerd[1581]: time="2025-01-30T13:57:26.706064741Z" level=info msg="RemoveContainer for \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\"" Jan 30 13:57:26.726864 containerd[1581]: time="2025-01-30T13:57:26.725395042Z" level=info msg="RemoveContainer for \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\" returns successfully" Jan 30 13:57:26.728361 kubelet[2727]: I0130 13:57:26.726938 2727 scope.go:117] "RemoveContainer" containerID="ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07" Jan 30 13:57:26.762658 containerd[1581]: time="2025-01-30T13:57:26.727341975Z" level=error msg="ContainerStatus for \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\": not found" Jan 30 13:57:26.764654 kubelet[2727]: 
E0130 13:57:26.763459 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\": not found" containerID="ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07" Jan 30 13:57:26.764654 kubelet[2727]: I0130 13:57:26.763529 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07"} err="failed to get container status \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca545b101ec87fa85f996ec6e9901ecef936cb123c026a50779713e448d9fb07\": not found" Jan 30 13:57:26.962205 kubelet[2727]: I0130 13:57:26.962114 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8ee8baf-fdef-4896-9369-b72a1778c36a" path="/var/lib/kubelet/pods/f8ee8baf-fdef-4896-9369-b72a1778c36a/volumes" Jan 30 13:57:28.226751 systemd[1]: Started sshd@16-146.190.136.39:22-147.75.109.163:56342.service - OpenSSH per-connection server daemon (147.75.109.163:56342). Jan 30 13:57:28.280074 sshd[6149]: Accepted publickey for core from 147.75.109.163 port 56342 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:28.281954 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:28.289884 systemd-logind[1558]: New session 17 of user core. Jan 30 13:57:28.299815 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:57:28.487438 sshd[6149]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:28.494621 systemd[1]: sshd@16-146.190.136.39:22-147.75.109.163:56342.service: Deactivated successfully. Jan 30 13:57:28.502830 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:57:28.503351 systemd-logind[1558]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:57:28.505723 systemd-logind[1558]: Removed session 17. Jan 30 13:57:33.502749 systemd[1]: Started sshd@17-146.190.136.39:22-147.75.109.163:56348.service - OpenSSH per-connection server daemon (147.75.109.163:56348). Jan 30 13:57:33.564718 sshd[6250]: Accepted publickey for core from 147.75.109.163 port 56348 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:33.568385 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:33.579438 systemd-logind[1558]: New session 18 of user core. Jan 30 13:57:33.587713 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:57:33.764453 sshd[6250]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:33.780262 systemd[1]: Started sshd@18-146.190.136.39:22-147.75.109.163:56354.service - OpenSSH per-connection server daemon (147.75.109.163:56354). Jan 30 13:57:33.782700 systemd[1]: sshd@17-146.190.136.39:22-147.75.109.163:56348.service: Deactivated successfully. Jan 30 13:57:33.788089 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:57:33.798283 systemd-logind[1558]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:57:33.801005 systemd-logind[1558]: Removed session 18. 
Jan 30 13:57:33.844135 sshd[6269]: Accepted publickey for core from 147.75.109.163 port 56354 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:33.846610 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:33.855524 systemd-logind[1558]: New session 19 of user core. Jan 30 13:57:33.860874 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:57:34.400451 sshd[6269]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:34.404869 systemd[1]: Started sshd@19-146.190.136.39:22-147.75.109.163:56360.service - OpenSSH per-connection server daemon (147.75.109.163:56360). Jan 30 13:57:34.420559 systemd[1]: sshd@18-146.190.136.39:22-147.75.109.163:56354.service: Deactivated successfully. Jan 30 13:57:34.428157 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:57:34.428929 systemd-logind[1558]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:57:34.432240 systemd-logind[1558]: Removed session 19. Jan 30 13:57:34.492002 sshd[6287]: Accepted publickey for core from 147.75.109.163 port 56360 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:34.495835 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:34.504551 systemd-logind[1558]: New session 20 of user core. Jan 30 13:57:34.511694 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:57:37.355194 sshd[6287]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:37.368139 systemd[1]: Started sshd@20-146.190.136.39:22-147.75.109.163:46710.service - OpenSSH per-connection server daemon (147.75.109.163:46710). Jan 30 13:57:37.375870 systemd[1]: sshd@19-146.190.136.39:22-147.75.109.163:56360.service: Deactivated successfully. Jan 30 13:57:37.393273 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:57:37.400861 systemd-logind[1558]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:57:37.410975 systemd-logind[1558]: Removed session 20. Jan 30 13:57:37.478328 sshd[6353]: Accepted publickey for core from 147.75.109.163 port 46710 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:37.484041 sshd[6353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:37.492843 systemd-logind[1558]: New session 21 of user core. Jan 30 13:57:37.498600 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:57:38.501039 systemd-journald[1139]: Under memory pressure, flushing caches. Jan 30 13:57:38.500380 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 13:57:38.500390 systemd-resolved[1468]: Flushed all caches. Jan 30 13:57:38.512114 sshd[6353]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:38.525994 systemd[1]: Started sshd@21-146.190.136.39:22-147.75.109.163:46720.service - OpenSSH per-connection server daemon (147.75.109.163:46720). Jan 30 13:57:38.533767 systemd[1]: sshd@20-146.190.136.39:22-147.75.109.163:46710.service: Deactivated successfully. Jan 30 13:57:38.547290 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:57:38.550461 systemd-logind[1558]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:57:38.558811 systemd-logind[1558]: Removed session 21. 
Jan 30 13:57:38.635217 sshd[6386]: Accepted publickey for core from 147.75.109.163 port 46720 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:38.642561 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:38.654260 systemd-logind[1558]: New session 22 of user core. Jan 30 13:57:38.664853 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:57:38.883988 sshd[6386]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:38.890741 systemd[1]: sshd@21-146.190.136.39:22-147.75.109.163:46720.service: Deactivated successfully. Jan 30 13:57:38.896343 systemd-logind[1558]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:57:38.897246 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:57:38.899893 systemd-logind[1558]: Removed session 22. Jan 30 13:57:40.545892 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 13:57:40.548128 systemd-journald[1139]: Under memory pressure, flushing caches. Jan 30 13:57:40.545907 systemd-resolved[1468]: Flushed all caches. Jan 30 13:57:41.959682 kubelet[2727]: E0130 13:57:41.959594 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:43.895564 systemd[1]: Started sshd@22-146.190.136.39:22-147.75.109.163:46724.service - OpenSSH per-connection server daemon (147.75.109.163:46724). Jan 30 13:57:44.005594 sshd[6489]: Accepted publickey for core from 147.75.109.163 port 46724 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:44.010354 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:44.018801 systemd-logind[1558]: New session 23 of user core. Jan 30 13:57:44.025692 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:57:44.435373 sshd[6489]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:44.442940 systemd[1]: sshd@22-146.190.136.39:22-147.75.109.163:46724.service: Deactivated successfully. Jan 30 13:57:44.450102 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:57:44.451418 systemd-logind[1558]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:57:44.453048 systemd-logind[1558]: Removed session 23. Jan 30 13:57:49.445527 systemd[1]: Started sshd@23-146.190.136.39:22-147.75.109.163:60698.service - OpenSSH per-connection server daemon (147.75.109.163:60698). Jan 30 13:57:49.532240 sshd[6609]: Accepted publickey for core from 147.75.109.163 port 60698 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:49.535802 sshd[6609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:49.542806 systemd-logind[1558]: New session 24 of user core. Jan 30 13:57:49.549753 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:57:49.774822 sshd[6609]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:49.781194 systemd[1]: sshd@23-146.190.136.39:22-147.75.109.163:60698.service: Deactivated successfully. Jan 30 13:57:49.789114 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:57:49.793639 systemd-logind[1558]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:57:49.795834 systemd-logind[1558]: Removed session 24. 
Jan 30 13:57:52.768728 containerd[1581]: time="2025-01-30T13:57:52.768650613Z" level=info msg="StopContainer for \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\" with timeout 5 (s)" Jan 30 13:57:52.771228 containerd[1581]: time="2025-01-30T13:57:52.770634980Z" level=info msg="Stop container \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\" with signal terminated" Jan 30 13:57:52.832468 containerd[1581]: time="2025-01-30T13:57:52.832367897Z" level=info msg="shim disconnected" id=4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3 namespace=k8s.io Jan 30 13:57:52.832962 containerd[1581]: time="2025-01-30T13:57:52.832756453Z" level=warning msg="cleaning up after shim disconnected" id=4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3 namespace=k8s.io Jan 30 13:57:52.832962 containerd[1581]: time="2025-01-30T13:57:52.832787648Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:52.836576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3-rootfs.mount: Deactivated successfully. Jan 30 13:57:52.874714 containerd[1581]: time="2025-01-30T13:57:52.874630220Z" level=info msg="StopContainer for \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\" returns successfully" Jan 30 13:57:52.875573 containerd[1581]: time="2025-01-30T13:57:52.875362952Z" level=info msg="StopPodSandbox for \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\"" Jan 30 13:57:52.875573 containerd[1581]: time="2025-01-30T13:57:52.875410720Z" level=info msg="Container to stop \"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:57:52.875573 containerd[1581]: time="2025-01-30T13:57:52.875423658Z" level=info msg="Container to stop \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:57:52.875573 containerd[1581]: time="2025-01-30T13:57:52.875435857Z" level=info msg="Container to stop \"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:57:52.880966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e-shm.mount: Deactivated successfully. Jan 30 13:57:52.917039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e-rootfs.mount: Deactivated successfully. 
Jan 30 13:57:52.920767 containerd[1581]: time="2025-01-30T13:57:52.920639111Z" level=info msg="shim disconnected" id=86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e namespace=k8s.io Jan 30 13:57:52.920767 containerd[1581]: time="2025-01-30T13:57:52.920706238Z" level=warning msg="cleaning up after shim disconnected" id=86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e namespace=k8s.io Jan 30 13:57:52.920767 containerd[1581]: time="2025-01-30T13:57:52.920715861Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:52.947074 containerd[1581]: time="2025-01-30T13:57:52.947014455Z" level=info msg="TearDown network for sandbox \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" successfully" Jan 30 13:57:52.948311 containerd[1581]: time="2025-01-30T13:57:52.947472419Z" level=info msg="StopPodSandbox for \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" returns successfully" Jan 30 13:57:52.988665 kubelet[2727]: I0130 13:57:52.988573 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-flexvol-driver-host\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990193 kubelet[2727]: I0130 13:57:52.989484 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-policysync\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990193 kubelet[2727]: I0130 13:57:52.989518 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-var-lib-calico\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990193 kubelet[2727]: I0130 13:57:52.989539 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-bin-dir\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990193 kubelet[2727]: I0130 13:57:52.989556 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-net-dir\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990193 kubelet[2727]: I0130 13:57:52.989572 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-xtables-lock\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990193 kubelet[2727]: I0130 13:57:52.989595 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/154b3faf-3122-49e2-8769-6e33faef8fe5-node-certs\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990586 kubelet[2727]: I0130 13:57:52.989610 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-lib-modules\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990586 kubelet[2727]: I0130 13:57:52.989655 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:57:52.990586 kubelet[2727]: I0130 13:57:52.989732 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:57:52.990586 kubelet[2727]: I0130 13:57:52.989760 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-policysync" (OuterVolumeSpecName: "policysync") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:57:52.990586 kubelet[2727]: I0130 13:57:52.989783 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:57:52.990780 kubelet[2727]: I0130 13:57:52.989801 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:57:52.990780 kubelet[2727]: I0130 13:57:52.989818 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). InnerVolumeSpecName "cni-net-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:57:52.990780 kubelet[2727]: I0130 13:57:52.989984 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/154b3faf-3122-49e2-8769-6e33faef8fe5-tigera-ca-bundle\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990780 kubelet[2727]: I0130 13:57:52.990007 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-log-dir\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990780 kubelet[2727]: I0130 13:57:52.990032 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spmqd\" (UniqueName: \"kubernetes.io/projected/154b3faf-3122-49e2-8769-6e33faef8fe5-kube-api-access-spmqd\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990922 kubelet[2727]: I0130 13:57:52.990057 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-var-run-calico\") pod \"154b3faf-3122-49e2-8769-6e33faef8fe5\" (UID: \"154b3faf-3122-49e2-8769-6e33faef8fe5\") " Jan 30 13:57:52.990922 kubelet[2727]: I0130 13:57:52.990142 2727 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-bin-dir\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:52.991250 kubelet[2727]: I0130 13:57:52.990157 2727 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-net-dir\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:52.991250 kubelet[2727]: I0130 13:57:52.991030 2727 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-xtables-lock\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:52.991250 kubelet[2727]: I0130 13:57:52.991041 2727 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-policysync\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:52.991250 kubelet[2727]: I0130 13:57:52.991049 2727 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-var-lib-calico\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:52.991250 kubelet[2727]: I0130 13:57:52.991058 2727 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-flexvol-driver-host\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:52.991250 kubelet[2727]: I0130 13:57:52.991109 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). 
InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:57:52.991250 kubelet[2727]: I0130 13:57:52.991136 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:57:53.007047 kubelet[2727]: I0130 13:57:53.005608 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/154b3faf-3122-49e2-8769-6e33faef8fe5-node-certs" (OuterVolumeSpecName: "node-certs") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:57:53.009804 kubelet[2727]: I0130 13:57:53.007437 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:57:53.008081 systemd[1]: var-lib-kubelet-pods-154b3faf\x2d3122\x2d49e2\x2d8769\x2d6e33faef8fe5-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 30 13:57:53.018457 kubelet[2727]: I0130 13:57:53.016851 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/154b3faf-3122-49e2-8769-6e33faef8fe5-kube-api-access-spmqd" (OuterVolumeSpecName: "kube-api-access-spmqd") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). InnerVolumeSpecName "kube-api-access-spmqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:57:53.019181 kubelet[2727]: I0130 13:57:53.018816 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/154b3faf-3122-49e2-8769-6e33faef8fe5-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "154b3faf-3122-49e2-8769-6e33faef8fe5" (UID: "154b3faf-3122-49e2-8769-6e33faef8fe5"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:57:53.043199 kubelet[2727]: I0130 13:57:53.040710 2727 topology_manager.go:215] "Topology Admit Handler" podUID="924d1012-bbe1-4869-a26c-0042b49415da" podNamespace="calico-system" podName="calico-node-hs75h" Jan 30 13:57:53.043199 kubelet[2727]: E0130 13:57:53.040842 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="154b3faf-3122-49e2-8769-6e33faef8fe5" containerName="install-cni" Jan 30 13:57:53.043199 kubelet[2727]: E0130 13:57:53.040857 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="154b3faf-3122-49e2-8769-6e33faef8fe5" containerName="flexvol-driver" Jan 30 13:57:53.043199 kubelet[2727]: E0130 13:57:53.040866 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f8ee8baf-fdef-4896-9369-b72a1778c36a" containerName="calico-typha" Jan 30 13:57:53.043199 kubelet[2727]: E0130 13:57:53.040872 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="154b3faf-3122-49e2-8769-6e33faef8fe5" containerName="calico-node" Jan 30 13:57:53.043199 kubelet[2727]: I0130 13:57:53.040910 2727 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8ee8baf-fdef-4896-9369-b72a1778c36a" containerName="calico-typha" Jan 30 13:57:53.043199 kubelet[2727]: I0130 13:57:53.040918 2727 memory_manager.go:354] "RemoveStaleState removing state" podUID="154b3faf-3122-49e2-8769-6e33faef8fe5" containerName="calico-node" Jan 30 13:57:53.092619 kubelet[2727]: I0130 13:57:53.092408 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/924d1012-bbe1-4869-a26c-0042b49415da-var-run-calico\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.092619 kubelet[2727]: I0130 13:57:53.092467 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/924d1012-bbe1-4869-a26c-0042b49415da-lib-modules\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.092619 kubelet[2727]: I0130 13:57:53.092519 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/924d1012-bbe1-4869-a26c-0042b49415da-cni-log-dir\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.092619 kubelet[2727]: I0130 13:57:53.092542 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxpk\" (UniqueName: \"kubernetes.io/projected/924d1012-bbe1-4869-a26c-0042b49415da-kube-api-access-rjxpk\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.092619 kubelet[2727]: I0130 13:57:53.092561 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/924d1012-bbe1-4869-a26c-0042b49415da-tigera-ca-bundle\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.092919 kubelet[2727]: I0130 13:57:53.092579 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/924d1012-bbe1-4869-a26c-0042b49415da-xtables-lock\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.092919 kubelet[2727]: I0130 13:57:53.092596 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/924d1012-bbe1-4869-a26c-0042b49415da-cni-net-dir\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.092919 kubelet[2727]: I0130 13:57:53.092665 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/924d1012-bbe1-4869-a26c-0042b49415da-cni-bin-dir\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.092919 kubelet[2727]: I0130 13:57:53.092707 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/924d1012-bbe1-4869-a26c-0042b49415da-node-certs\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.092919 kubelet[2727]: I0130 13:57:53.092735 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/924d1012-bbe1-4869-a26c-0042b49415da-policysync\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.094353 kubelet[2727]: I0130 13:57:53.092765 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/924d1012-bbe1-4869-a26c-0042b49415da-var-lib-calico\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.094353 kubelet[2727]: I0130 13:57:53.092783 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/924d1012-bbe1-4869-a26c-0042b49415da-flexvol-driver-host\") pod \"calico-node-hs75h\" (UID: \"924d1012-bbe1-4869-a26c-0042b49415da\") " pod="calico-system/calico-node-hs75h" Jan 30 13:57:53.094353 kubelet[2727]: I0130 13:57:53.092815 2727 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-spmqd\" (UniqueName: \"kubernetes.io/projected/154b3faf-3122-49e2-8769-6e33faef8fe5-kube-api-access-spmqd\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:53.094353 kubelet[2727]: I0130 13:57:53.092827 2727 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-var-run-calico\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:53.094353 kubelet[2727]: I0130 13:57:53.092838 2727 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-cni-log-dir\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:53.094353 kubelet[2727]: I0130 13:57:53.092848 2727 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/154b3faf-3122-49e2-8769-6e33faef8fe5-lib-modules\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:53.094353 kubelet[2727]: I0130 13:57:53.092857 2727 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/154b3faf-3122-49e2-8769-6e33faef8fe5-node-certs\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:53.094626 kubelet[2727]: I0130 13:57:53.092865 2727 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/154b3faf-3122-49e2-8769-6e33faef8fe5-tigera-ca-bundle\") on node \"ci-4081.3.0-8-baee985ae6\" DevicePath \"\"" Jan 30 13:57:53.366151 kubelet[2727]: E0130 13:57:53.365586 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:53.367010 containerd[1581]: time="2025-01-30T13:57:53.366762903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hs75h,Uid:924d1012-bbe1-4869-a26c-0042b49415da,Namespace:calico-system,Attempt:0,}" Jan 30 13:57:53.422005 containerd[1581]: time="2025-01-30T13:57:53.421503293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:53.422005 containerd[1581]: time="2025-01-30T13:57:53.421580688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:53.422005 containerd[1581]: time="2025-01-30T13:57:53.421597795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:53.422005 containerd[1581]: time="2025-01-30T13:57:53.421740245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:53.487510 containerd[1581]: time="2025-01-30T13:57:53.487431124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hs75h,Uid:924d1012-bbe1-4869-a26c-0042b49415da,Namespace:calico-system,Attempt:0,} returns sandbox id \"1176f6445e9bd0a6118d05394d72fff85939c44760b519f417a1b26bcd0288b1\"" Jan 30 13:57:53.490049 kubelet[2727]: E0130 13:57:53.489899 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:53.497275 containerd[1581]: time="2025-01-30T13:57:53.496199322Z" level=info msg="CreateContainer within sandbox \"1176f6445e9bd0a6118d05394d72fff85939c44760b519f417a1b26bcd0288b1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:57:53.515814 containerd[1581]: time="2025-01-30T13:57:53.515750316Z" level=info msg="CreateContainer within sandbox \"1176f6445e9bd0a6118d05394d72fff85939c44760b519f417a1b26bcd0288b1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"961dda9a64d5ca3b0fecb52a459795cd212bdc531202abea8bab29f96031efc0\"" Jan 30 13:57:53.516883 containerd[1581]: time="2025-01-30T13:57:53.516834230Z" level=info msg="StartContainer for \"961dda9a64d5ca3b0fecb52a459795cd212bdc531202abea8bab29f96031efc0\"" Jan 30 13:57:53.610097 containerd[1581]: time="2025-01-30T13:57:53.609936861Z" level=info msg="StartContainer for \"961dda9a64d5ca3b0fecb52a459795cd212bdc531202abea8bab29f96031efc0\" returns successfully" Jan 30 13:57:53.638610 systemd[1]: var-lib-kubelet-pods-154b3faf\x2d3122\x2d49e2\x2d8769\x2d6e33faef8fe5-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 30 13:57:53.638817 systemd[1]: var-lib-kubelet-pods-154b3faf\x2d3122\x2d49e2\x2d8769\x2d6e33faef8fe5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dspmqd.mount: Deactivated successfully. Jan 30 13:57:53.707275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-961dda9a64d5ca3b0fecb52a459795cd212bdc531202abea8bab29f96031efc0-rootfs.mount: Deactivated successfully. 
Jan 30 13:57:53.710984 containerd[1581]: time="2025-01-30T13:57:53.710781742Z" level=info msg="shim disconnected" id=961dda9a64d5ca3b0fecb52a459795cd212bdc531202abea8bab29f96031efc0 namespace=k8s.io Jan 30 13:57:53.710984 containerd[1581]: time="2025-01-30T13:57:53.710861542Z" level=warning msg="cleaning up after shim disconnected" id=961dda9a64d5ca3b0fecb52a459795cd212bdc531202abea8bab29f96031efc0 namespace=k8s.io Jan 30 13:57:53.710984 containerd[1581]: time="2025-01-30T13:57:53.710877114Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:53.731716 containerd[1581]: time="2025-01-30T13:57:53.731636920Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:57:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:57:53.777722 kubelet[2727]: I0130 13:57:53.777371 2727 scope.go:117] "RemoveContainer" containerID="4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3" Jan 30 13:57:53.784265 containerd[1581]: time="2025-01-30T13:57:53.783700657Z" level=info msg="RemoveContainer for \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\"" Jan 30 13:57:53.788628 kubelet[2727]: E0130 13:57:53.788584 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:53.838691 containerd[1581]: time="2025-01-30T13:57:53.790979141Z" level=info msg="RemoveContainer for \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\" returns successfully" Jan 30 13:57:53.839743 containerd[1581]: time="2025-01-30T13:57:53.798404755Z" level=info msg="CreateContainer within sandbox \"1176f6445e9bd0a6118d05394d72fff85939c44760b519f417a1b26bcd0288b1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:57:53.863097 kubelet[2727]: I0130 13:57:53.846617 2727 scope.go:117] "RemoveContainer" containerID="b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054" Jan 30 13:57:53.863097 kubelet[2727]: I0130 13:57:53.857838 2727 scope.go:117] "RemoveContainer" containerID="797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17" Jan 30 13:57:53.863349 containerd[1581]: time="2025-01-30T13:57:53.848790581Z" level=info msg="RemoveContainer for \"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054\"" Jan 30 13:57:53.863349 containerd[1581]: time="2025-01-30T13:57:53.854104406Z" level=info msg="RemoveContainer for \"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054\" returns successfully" Jan 30 13:57:53.863349 containerd[1581]: time="2025-01-30T13:57:53.860870469Z" level=info msg="RemoveContainer for \"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17\"" Jan 30 13:57:53.870312 containerd[1581]: time="2025-01-30T13:57:53.867699940Z" level=info msg="RemoveContainer for \"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17\" returns successfully" Jan 30 13:57:53.870837 kubelet[2727]: I0130 13:57:53.870692 2727 scope.go:117] "RemoveContainer" containerID="4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3" Jan 30 13:57:53.873662 containerd[1581]: time="2025-01-30T13:57:53.873605066Z" level=error msg="ContainerStatus for \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\": not found" Jan 30 13:57:53.878787 kubelet[2727]: E0130 13:57:53.876627 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\": not found" containerID="4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3" Jan 30 13:57:53.878787 kubelet[2727]: I0130 13:57:53.876693 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3"} err="failed to get container status \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\": rpc error: code = NotFound desc = an error occurred when try to find container \"4daa0a226f71b430acf35fd1cebf02321348dd5a27e49dfb64dd00dcab4cdaa3\": not found" Jan 30 13:57:53.878787 kubelet[2727]: I0130 13:57:53.876734 2727 scope.go:117] "RemoveContainer" containerID="b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054" Jan 30 13:57:53.879384 containerd[1581]: time="2025-01-30T13:57:53.879325955Z" level=error msg="ContainerStatus for \"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054\": not found" Jan 30 13:57:53.879878 kubelet[2727]: E0130 13:57:53.879773 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054\": not found" containerID="b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054" Jan 30 13:57:53.879878 kubelet[2727]: I0130 13:57:53.879831 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054"} err="failed to get container status \"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054\": rpc error: code = NotFound desc = an error occurred when try to find container \"b278697ea1a42966c89bce2a3de765a72768f86d6e5907997349125f1b4e8054\": not found" Jan 30 13:57:53.880401 kubelet[2727]: I0130 13:57:53.879892 2727 scope.go:117] "RemoveContainer" containerID="797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17" Jan 30 13:57:53.880471 containerd[1581]: time="2025-01-30T13:57:53.880239361Z" level=error msg="ContainerStatus for \"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17\": not found" Jan 30 13:57:53.886986 kubelet[2727]: E0130 13:57:53.885212 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17\": not found" containerID="797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17" Jan 30 13:57:53.886986 kubelet[2727]: I0130 13:57:53.885267 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17"} err="failed to get container status 
\"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17\": rpc error: code = NotFound desc = an error occurred when try to find container \"797640cdd4ae7840571db45489f22bb45adc77d1af9e57dda0c5fce75d929d17\": not found" Jan 30 13:57:53.912676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348712164.mount: Deactivated successfully. Jan 30 13:57:53.913109 containerd[1581]: time="2025-01-30T13:57:53.912655909Z" level=info msg="CreateContainer within sandbox \"1176f6445e9bd0a6118d05394d72fff85939c44760b519f417a1b26bcd0288b1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f9be0402a8778dbace942b561b439fd826e6a411340f8fb853fb5a93ea5050c1\"" Jan 30 13:57:53.914726 containerd[1581]: time="2025-01-30T13:57:53.914419577Z" level=info msg="StartContainer for \"f9be0402a8778dbace942b561b439fd826e6a411340f8fb853fb5a93ea5050c1\"" Jan 30 13:57:54.007097 containerd[1581]: time="2025-01-30T13:57:54.007036992Z" level=info msg="StartContainer for \"f9be0402a8778dbace942b561b439fd826e6a411340f8fb853fb5a93ea5050c1\" returns successfully" Jan 30 13:57:54.798298 systemd[1]: Started sshd@24-146.190.136.39:22-147.75.109.163:60710.service - OpenSSH per-connection server daemon (147.75.109.163:60710). Jan 30 13:57:55.002180 kubelet[2727]: I0130 13:57:55.002097 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="154b3faf-3122-49e2-8769-6e33faef8fe5" path="/var/lib/kubelet/pods/154b3faf-3122-49e2-8769-6e33faef8fe5/volumes" Jan 30 13:57:55.057436 sshd[6932]: Accepted publickey for core from 147.75.109.163 port 60710 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:55.072343 sshd[6932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:55.098812 systemd-logind[1558]: New session 25 of user core. Jan 30 13:57:55.109737 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 13:57:55.129458 kubelet[2727]: E0130 13:57:55.128393 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:56.161073 sshd[6932]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:56.173576 systemd[1]: sshd@24-146.190.136.39:22-147.75.109.163:60710.service: Deactivated successfully. Jan 30 13:57:56.182450 systemd-logind[1558]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:57:56.184489 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:57:56.187526 systemd-logind[1558]: Removed session 25. Jan 30 13:57:56.221489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9be0402a8778dbace942b561b439fd826e6a411340f8fb853fb5a93ea5050c1-rootfs.mount: Deactivated successfully. Jan 30 13:57:56.227134 containerd[1581]: time="2025-01-30T13:57:56.226683492Z" level=info msg="shim disconnected" id=f9be0402a8778dbace942b561b439fd826e6a411340f8fb853fb5a93ea5050c1 namespace=k8s.io Jan 30 13:57:56.227134 containerd[1581]: time="2025-01-30T13:57:56.226778561Z" level=warning msg="cleaning up after shim disconnected" id=f9be0402a8778dbace942b561b439fd826e6a411340f8fb853fb5a93ea5050c1 namespace=k8s.io Jan 30 13:57:56.227134 containerd[1581]: time="2025-01-30T13:57:56.226796843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:56.546836 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 13:57:56.549387 systemd-journald[1139]: Under memory pressure, flushing caches. 
Jan 30 13:57:56.546852 systemd-resolved[1468]: Flushed all caches. Jan 30 13:57:57.182667 kubelet[2727]: E0130 13:57:57.182580 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:57.274483 containerd[1581]: time="2025-01-30T13:57:57.273940225Z" level=info msg="CreateContainer within sandbox \"1176f6445e9bd0a6118d05394d72fff85939c44760b519f417a1b26bcd0288b1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:57:57.373191 containerd[1581]: time="2025-01-30T13:57:57.373096989Z" level=info msg="CreateContainer within sandbox \"1176f6445e9bd0a6118d05394d72fff85939c44760b519f417a1b26bcd0288b1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9f87d79030b4900fda276d1c772878b1bda0da7e90766477625858249da5aab2\"" Jan 30 13:57:57.375216 containerd[1581]: time="2025-01-30T13:57:57.374493442Z" level=info msg="StartContainer for \"9f87d79030b4900fda276d1c772878b1bda0da7e90766477625858249da5aab2\"" Jan 30 13:57:57.515378 containerd[1581]: time="2025-01-30T13:57:57.515310426Z" level=info msg="StartContainer for \"9f87d79030b4900fda276d1c772878b1bda0da7e90766477625858249da5aab2\" returns successfully" Jan 30 13:57:58.180193 kubelet[2727]: E0130 13:57:58.178924 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:58.595483 systemd-journald[1139]: Under memory pressure, flushing caches. Jan 30 13:57:58.593892 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 13:57:58.593901 systemd-resolved[1468]: Flushed all caches. Jan 30 13:57:59.017759 kubelet[2727]: E0130 13:57:59.017688 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:59.351522 kubelet[2727]: E0130 13:57:59.350407 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:00.643754 systemd-journald[1139]: Under memory pressure, flushing caches. Jan 30 13:58:00.641297 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 13:58:00.641310 systemd-resolved[1468]: Flushed all caches. 
Jan 30 13:58:00.773911 containerd[1581]: time="2025-01-30T13:58:00.773756934Z" level=info msg="StopPodSandbox for \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\"" Jan 30 13:58:00.775627 containerd[1581]: time="2025-01-30T13:58:00.774322649Z" level=info msg="TearDown network for sandbox \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\" successfully" Jan 30 13:58:00.775627 containerd[1581]: time="2025-01-30T13:58:00.774365033Z" level=info msg="StopPodSandbox for \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\" returns successfully" Jan 30 13:58:00.818276 containerd[1581]: time="2025-01-30T13:58:00.818204050Z" level=info msg="RemovePodSandbox for \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\"" Jan 30 13:58:00.818459 containerd[1581]: time="2025-01-30T13:58:00.818295594Z" level=info msg="Forcibly stopping sandbox \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\"" Jan 30 13:58:00.818459 containerd[1581]: time="2025-01-30T13:58:00.818416495Z" level=info msg="TearDown network for sandbox \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\" successfully" Jan 30 13:58:00.899608 containerd[1581]: time="2025-01-30T13:58:00.899144854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:58:00.901457 containerd[1581]: time="2025-01-30T13:58:00.900267697Z" level=info msg="RemovePodSandbox \"3e726a9aebcbef8d484a4ecf282166bd28481249335a0519587e729fe5ca389e\" returns successfully" Jan 30 13:58:00.901457 containerd[1581]: time="2025-01-30T13:58:00.901121007Z" level=info msg="StopPodSandbox for \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\"" Jan 30 13:58:00.902540 containerd[1581]: time="2025-01-30T13:58:00.901777448Z" level=info msg="TearDown network for sandbox \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" successfully" Jan 30 13:58:00.902540 containerd[1581]: time="2025-01-30T13:58:00.901824593Z" level=info msg="StopPodSandbox for \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" returns successfully" Jan 30 13:58:00.904438 containerd[1581]: time="2025-01-30T13:58:00.903279070Z" level=info msg="RemovePodSandbox for \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\"" Jan 30 13:58:00.904438 containerd[1581]: time="2025-01-30T13:58:00.903320761Z" level=info msg="Forcibly stopping sandbox \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\"" Jan 30 13:58:00.904438 containerd[1581]: time="2025-01-30T13:58:00.903417420Z" level=info msg="TearDown network for sandbox \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" successfully" Jan 30 13:58:00.923753 containerd[1581]: time="2025-01-30T13:58:00.923507788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:58:00.923753 containerd[1581]: time="2025-01-30T13:58:00.923614250Z" level=info msg="RemovePodSandbox \"86cc9655ed3955b8f1a1c304657f573d6e424debb48a6eaf4fac5459685d1a3e\" returns successfully" Jan 30 13:58:00.925142 containerd[1581]: time="2025-01-30T13:58:00.924595984Z" level=info msg="StopPodSandbox for \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\"" Jan 30 13:58:01.177137 systemd[1]: Started sshd@25-146.190.136.39:22-147.75.109.163:36174.service - OpenSSH per-connection server daemon (147.75.109.163:36174). Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.154 [WARNING][7280] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.161 [INFO][7280] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.161 [INFO][7280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" iface="eth0" netns="" Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.162 [INFO][7280] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.162 [INFO][7280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.249 [INFO][7289] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.249 [INFO][7289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.250 [INFO][7289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.261 [WARNING][7289] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.261 [INFO][7289] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.267 [INFO][7289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:58:01.277957 containerd[1581]: 2025-01-30 13:58:01.273 [INFO][7280] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:58:01.277957 containerd[1581]: time="2025-01-30T13:58:01.277780971Z" level=info msg="TearDown network for sandbox \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\" successfully" Jan 30 13:58:01.277957 containerd[1581]: time="2025-01-30T13:58:01.277822709Z" level=info msg="StopPodSandbox for \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\" returns successfully" Jan 30 13:58:01.283089 containerd[1581]: time="2025-01-30T13:58:01.280431641Z" level=info msg="RemovePodSandbox for \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\"" Jan 30 13:58:01.283089 containerd[1581]: time="2025-01-30T13:58:01.280543039Z" level=info msg="Forcibly stopping sandbox \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\"" Jan 30 13:58:01.337374 sshd[7288]: Accepted publickey for core from 147.75.109.163 port 36174 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:01.344238 sshd[7288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:01.364072 systemd-logind[1558]: New session 26 of user core. Jan 30 13:58:01.372282 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.367 [WARNING][7308] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" WorkloadEndpoint="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.368 [INFO][7308] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.368 [INFO][7308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" iface="eth0" netns="" Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.368 [INFO][7308] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.368 [INFO][7308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.438 [INFO][7315] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.438 [INFO][7315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.439 [INFO][7315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.456 [WARNING][7315] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.458 [INFO][7315] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" HandleID="k8s-pod-network.a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Workload="ci--4081.3.0--8--baee985ae6-k8s-calico--kube--controllers--7d4b699786--9bmxf-eth0" Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.478 [INFO][7315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:01.498196 containerd[1581]: 2025-01-30 13:58:01.487 [INFO][7308] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8" Jan 30 13:58:01.498196 containerd[1581]: time="2025-01-30T13:58:01.496814273Z" level=info msg="TearDown network for sandbox \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\" successfully" Jan 30 13:58:01.596822 containerd[1581]: time="2025-01-30T13:58:01.596242343Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:58:01.596822 containerd[1581]: time="2025-01-30T13:58:01.596552999Z" level=info msg="RemovePodSandbox \"a40e7267aaa7793ce40af910c79f81558d8675378adb313901f6c61536418ed8\" returns successfully" Jan 30 13:58:02.213472 sshd[7288]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:02.219684 systemd[1]: sshd@25-146.190.136.39:22-147.75.109.163:36174.service: Deactivated successfully. Jan 30 13:58:02.227556 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:58:02.229457 systemd-logind[1558]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:58:02.232283 systemd-logind[1558]: Removed session 26. Jan 30 13:58:02.692658 systemd-journald[1139]: Under memory pressure, flushing caches. Jan 30 13:58:02.689570 systemd-resolved[1468]: Under memory pressure, flushing caches. Jan 30 13:58:02.689583 systemd-resolved[1468]: Flushed all caches.