Jan 30 13:56:54.075004 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:56:54.075046 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:56:54.075069 kernel: BIOS-provided physical RAM map: Jan 30 13:56:54.075084 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 13:56:54.075098 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 13:56:54.075114 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 13:56:54.075133 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 30 13:56:54.075150 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 30 13:56:54.075165 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 13:56:54.075185 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 13:56:54.075201 kernel: NX (Execute Disable) protection: active Jan 30 13:56:54.075217 kernel: APIC: Static calls initialized Jan 30 13:56:54.075236 kernel: SMBIOS 2.8 present. Jan 30 13:56:54.075253 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 30 13:56:54.075274 kernel: Hypervisor detected: KVM Jan 30 13:56:54.075296 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:56:54.075317 kernel: kvm-clock: using sched offset of 3769386011 cycles Jan 30 13:56:54.075336 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:56:54.075354 kernel: tsc: Detected 2294.608 MHz processor Jan 30 13:56:54.075373 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:56:54.075391 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:56:54.075410 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 30 13:56:54.075428 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 13:56:54.075446 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:56:54.075469 kernel: ACPI: Early table checksum verification disabled Jan 30 13:56:54.075487 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 30 13:56:54.075505 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:56:54.075524 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:56:54.075542 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:56:54.075560 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 30 13:56:54.075578 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:56:54.075596 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:56:54.075629 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:56:54.075657 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:56:54.075678 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 30 13:56:54.075700 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 30 13:56:54.075721 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 30 13:56:54.075746 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 30 13:56:54.075761 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 30 13:56:54.075776 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 30 13:56:54.075803 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 30 13:56:54.075820 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:56:54.075845 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:56:54.075867 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 30 13:56:54.075904 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 30 13:56:54.075928 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jan 30 13:56:54.075948 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jan 30 13:56:54.075973 kernel: Zone ranges: Jan 30 13:56:54.075993 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:56:54.076012 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 30 13:56:54.076032 kernel: Normal empty Jan 30 13:56:54.076051 kernel: Movable zone start for each node Jan 30 13:56:54.076072 kernel: Early memory node ranges Jan 30 13:56:54.076097 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 13:56:54.076117 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 30 13:56:54.076131 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 30 13:56:54.076153 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:56:54.076174 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 13:56:54.076198 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 30 13:56:54.076217 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 13:56:54.076237 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:56:54.076257 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:56:54.076277 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:56:54.076297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:56:54.076316 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:56:54.076340 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:56:54.076359 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:56:54.076379 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:56:54.076398 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:56:54.076418 kernel: TSC deadline timer available Jan 30 13:56:54.076438 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:56:54.076457 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 13:56:54.076477 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 30 13:56:54.076499 kernel: Booting paravirtualized kernel on KVM Jan 30 13:56:54.076523 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:56:54.076542 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:56:54.076562 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 30 13:56:54.076581 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:56:54.076600 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:56:54.076619 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 13:56:54.076640 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:56:54.076660 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:56:54.076683 kernel: random: crng init done Jan 30 13:56:54.076702 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:56:54.076722 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:56:54.076741 kernel: Fallback order for Node 0: 0 Jan 30 13:56:54.076761 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jan 30 13:56:54.076781 kernel: Policy zone: DMA32 Jan 30 13:56:54.076800 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:56:54.076820 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Jan 30 13:56:54.076839 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:56:54.076862 kernel: Kernel/User page tables isolation: enabled Jan 30 13:56:54.076882 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:56:54.076911 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:56:54.076925 kernel: Dynamic Preempt: voluntary Jan 30 13:56:54.076938 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:56:54.076960 kernel: rcu: RCU event tracing is enabled. Jan 30 13:56:54.076975 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:56:54.076988 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:56:54.077001 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:56:54.077020 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:56:54.077037 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:56:54.077056 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:56:54.077077 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 13:56:54.077101 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 30 13:56:54.077127 kernel: Console: colour VGA+ 80x25 Jan 30 13:56:54.077147 kernel: printk: console [tty0] enabled Jan 30 13:56:54.077167 kernel: printk: console [ttyS0] enabled Jan 30 13:56:54.077186 kernel: ACPI: Core revision 20230628 Jan 30 13:56:54.077210 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 13:56:54.077230 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:56:54.077250 kernel: x2apic enabled Jan 30 13:56:54.077269 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:56:54.077289 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 13:56:54.077308 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 30 13:56:54.077329 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Jan 30 13:56:54.077348 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 13:56:54.077370 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 13:56:54.077405 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:56:54.077426 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:56:54.077447 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:56:54.077471 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:56:54.077491 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 30 13:56:54.077512 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:56:54.077533 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:56:54.077553 kernel: MDS: Mitigation: Clear CPU buffers Jan 30 13:56:54.077578 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:56:54.077607 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:56:54.077629 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:56:54.077650 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:56:54.077670 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:56:54.077691 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 30 13:56:54.077712 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:56:54.077740 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:56:54.077762 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:56:54.077786 kernel: landlock: Up and running. Jan 30 13:56:54.077807 kernel: SELinux: Initializing. Jan 30 13:56:54.077828 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:56:54.077849 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:56:54.077870 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 30 13:56:54.077917 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:56:54.077938 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:56:54.077960 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:56:54.077985 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 30 13:56:54.078005 kernel: signal: max sigframe size: 1776 Jan 30 13:56:54.078026 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:56:54.078048 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:56:54.078075 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:56:54.078094 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:56:54.078117 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:56:54.078138 kernel: .... node #0, CPUs: #1 Jan 30 13:56:54.078159 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:56:54.078194 kernel: smpboot: Max logical packages: 1 Jan 30 13:56:54.078213 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Jan 30 13:56:54.078228 kernel: devtmpfs: initialized Jan 30 13:56:54.078243 kernel: x86/mm: Memory block size: 128MB Jan 30 13:56:54.078259 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:56:54.078283 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:56:54.078304 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:56:54.078325 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:56:54.078346 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:56:54.078372 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:56:54.078393 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:56:54.078420 kernel: audit: type=2000 audit(1738245412.785:1): state=initialized audit_enabled=0 res=1 Jan 30 13:56:54.078435 kernel: cpuidle: using governor menu Jan 30 13:56:54.078453 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:56:54.078475 kernel: dca service started, version 1.12.1 Jan 30 13:56:54.078497 kernel: PCI: Using configuration type 1 for base access Jan 30 13:56:54.078518 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:56:54.078539 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:56:54.078565 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:56:54.078587 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:56:54.078600 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:56:54.078614 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:56:54.078628 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:56:54.078642 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:56:54.078656 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:56:54.078669 kernel: ACPI: Interpreter enabled Jan 30 13:56:54.078683 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:56:54.078697 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:56:54.078718 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:56:54.078732 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:56:54.078746 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 30 13:56:54.078762 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:56:54.079076 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:56:54.079264 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 13:56:54.079427 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 13:56:54.079459 kernel: acpiphp: Slot [3] registered Jan 30 13:56:54.079476 kernel: acpiphp: Slot [4] registered Jan 30 13:56:54.079491 kernel: acpiphp: Slot [5] registered Jan 30 13:56:54.079506 kernel: acpiphp: Slot [6] registered Jan 30 13:56:54.079519 kernel: acpiphp: Slot [7] registered Jan 30 13:56:54.079531 kernel: acpiphp: Slot [8] registered Jan 30 13:56:54.079544 kernel: acpiphp: Slot [9] registered Jan 30 13:56:54.079558 kernel: acpiphp: Slot [10] registered Jan 30 13:56:54.079571 kernel: acpiphp: Slot [11] registered Jan 30 13:56:54.079592 kernel: acpiphp: Slot [12] registered Jan 30 13:56:54.079605 kernel: acpiphp: Slot [13] registered Jan 30 13:56:54.079643 kernel: acpiphp: Slot [14] registered Jan 30 13:56:54.079667 kernel: acpiphp: Slot [15] registered Jan 30 13:56:54.079692 kernel: acpiphp: Slot [16] registered Jan 30 13:56:54.079713 kernel: acpiphp: Slot [17] registered Jan 30 13:56:54.079728 kernel: acpiphp: Slot [18] registered Jan 30 13:56:54.079740 kernel: acpiphp: Slot [19] registered Jan 30 13:56:54.079754 kernel: acpiphp: Slot [20] registered Jan 30 13:56:54.079774 kernel: acpiphp: Slot [21] registered Jan 30 13:56:54.079786 kernel: acpiphp: Slot [22] registered Jan 30 13:56:54.079800 kernel: acpiphp: Slot [23] registered Jan 30 13:56:54.079814 kernel: acpiphp: Slot [24] registered Jan 30 13:56:54.079828 kernel: acpiphp: Slot [25] registered Jan 30 13:56:54.079844 kernel: acpiphp: Slot [26] registered Jan 30 13:56:54.079858 kernel: acpiphp: Slot [27] registered Jan 30 13:56:54.079871 kernel: acpiphp: Slot [28] registered Jan 30 13:56:54.079905 kernel: acpiphp: Slot [29] registered Jan 30 13:56:54.079920 kernel: acpiphp: Slot [30] registered Jan 30 13:56:54.079939 kernel: acpiphp: Slot [31] registered Jan 30 13:56:54.079952 kernel: PCI host bridge to bus 0000:00 Jan 30 13:56:54.080186 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:56:54.080345 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 30 13:56:54.080492 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:56:54.080632 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 30 13:56:54.080840 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 30 13:56:54.081781 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:56:54.082018 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 13:56:54.082218 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 13:56:54.082387 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 30 13:56:54.082539 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 30 13:56:54.082690 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 13:56:54.082853 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 13:56:54.083044 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 13:56:54.083201 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 13:56:54.083414 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 30 13:56:54.083576 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 30 13:56:54.083826 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 13:56:54.084151 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 30 13:56:54.084371 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 30 13:56:54.084596 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 30 13:56:54.084785 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 30 13:56:54.084949 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 30 13:56:54.085105 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 30 13:56:54.085254 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 30 13:56:54.085408 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:56:54.085665 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:56:54.085880 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 30 13:56:54.086060 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 30 13:56:54.086217 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 30 13:56:54.086410 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:56:54.086578 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 30 13:56:54.086761 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 30 13:56:54.087068 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 30 13:56:54.087265 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 30 13:56:54.087431 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 30 13:56:54.087601 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 30 13:56:54.087787 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 30 13:56:54.088013 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 30 13:56:54.088189 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 13:56:54.088349 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 30 13:56:54.088541 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 30 13:56:54.088723 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 30 13:56:54.088882 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 30 13:56:54.089826 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 30 13:56:54.090002 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 30 13:56:54.090198 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 30 13:56:54.090366 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 30 13:56:54.090523 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 30 13:56:54.090544 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:56:54.090560 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:56:54.090577 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:56:54.090593 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:56:54.090616 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 13:56:54.090634 kernel: iommu: Default domain type: Translated Jan 30 13:56:54.090649 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:56:54.090663 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:56:54.090678 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:56:54.090692 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 13:56:54.090708 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 30 13:56:54.090880 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 30 13:56:54.091080 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 30 13:56:54.091250 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:56:54.091270 kernel: vgaarb: loaded Jan 30 13:56:54.091287 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 13:56:54.091303 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 13:56:54.091319 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:56:54.091334 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:56:54.091351 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:56:54.091366 kernel: pnp: PnP ACPI init Jan 30 13:56:54.091382 kernel: pnp: PnP ACPI: found 4 devices Jan 30 13:56:54.091403 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:56:54.091419 kernel: NET: Registered PF_INET protocol family Jan 30 13:56:54.091434 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:56:54.091449 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 13:56:54.091464 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:56:54.091480 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:56:54.091495 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:56:54.091511 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 13:56:54.091527 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:56:54.091555 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:56:54.091570 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:56:54.091587 kernel: NET: Registered PF_XDP protocol family Jan 30 13:56:54.091770 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:56:54.092019 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 
13:56:54.092173 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:56:54.092304 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 30 13:56:54.092433 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 30 13:56:54.092600 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 30 13:56:54.092760 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 13:56:54.092783 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 13:56:54.092965 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 42586 usecs Jan 30 13:56:54.092986 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:56:54.093003 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:56:54.093019 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 30 13:56:54.093033 kernel: Initialise system trusted keyrings Jan 30 13:56:54.093053 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 13:56:54.093068 kernel: Key type asymmetric registered Jan 30 13:56:54.093083 kernel: Asymmetric key parser 'x509' registered Jan 30 13:56:54.093097 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:56:54.093112 kernel: io scheduler mq-deadline registered Jan 30 13:56:54.093127 kernel: io scheduler kyber registered Jan 30 13:56:54.093143 kernel: io scheduler bfq registered Jan 30 13:56:54.093158 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:56:54.093174 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 30 13:56:54.093195 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 13:56:54.093211 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 13:56:54.093229 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:56:54.093245 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:56:54.093261 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:56:54.093277 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:56:54.093294 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:56:54.093506 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 13:56:54.093537 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:56:54.093707 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 13:56:54.096086 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T13:56:53 UTC (1738245413) Jan 30 13:56:54.096239 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 30 13:56:54.096266 kernel: intel_pstate: CPU model not supported Jan 30 13:56:54.096288 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:56:54.096309 kernel: Segment Routing with IPv6 Jan 30 13:56:54.096330 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:56:54.096351 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:56:54.096379 kernel: Key type dns_resolver registered Jan 30 13:56:54.096400 kernel: IPI shorthand broadcast: enabled Jan 30 13:56:54.096421 kernel: sched_clock: Marking stable (1149002768, 200390350)->(1541906629, -192513511) Jan 30 13:56:54.096442 kernel: registered taskstats version 1 Jan 30 13:56:54.096463 kernel: Loading compiled-in X.509 certificates Jan 30 13:56:54.096484 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:56:54.096505 kernel: Key type .fscrypt 
registered Jan 30 13:56:54.096526 kernel: Key type fscrypt-provisioning registered Jan 30 13:56:54.096547 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:56:54.096572 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:56:54.096593 kernel: ima: No architecture policies found Jan 30 13:56:54.096614 kernel: clk: Disabling unused clocks Jan 30 13:56:54.096635 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:56:54.096656 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:56:54.096707 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:56:54.096731 kernel: Run /init as init process Jan 30 13:56:54.096757 kernel: with arguments: Jan 30 13:56:54.096784 kernel: /init Jan 30 13:56:54.096811 kernel: with environment: Jan 30 13:56:54.096833 kernel: HOME=/ Jan 30 13:56:54.096854 kernel: TERM=linux Jan 30 13:56:54.096876 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:56:54.098006 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:56:54.098024 systemd[1]: Detected virtualization kvm. Jan 30 13:56:54.098035 systemd[1]: Detected architecture x86-64. Jan 30 13:56:54.098051 systemd[1]: Running in initrd. Jan 30 13:56:54.098062 systemd[1]: No hostname configured, using default hostname. Jan 30 13:56:54.098072 systemd[1]: Hostname set to . Jan 30 13:56:54.098082 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:56:54.098093 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:56:54.098103 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:56:54.098114 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:56:54.098126 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:56:54.098140 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:56:54.098150 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:56:54.098161 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:56:54.098173 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:56:54.098184 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:56:54.098194 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:56:54.098205 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:56:54.098218 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:56:54.098228 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:56:54.098239 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:56:54.098252 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:56:54.098262 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:56:54.098273 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 30 13:56:54.098286 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:56:54.098297 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:56:54.098308 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:56:54.098318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:56:54.098329 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:56:54.098339 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:56:54.098349 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:56:54.098360 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:56:54.098373 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:56:54.098386 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:56:54.098397 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:56:54.098407 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:56:54.098417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:56:54.098463 systemd-journald[182]: Collecting audit messages is disabled. Jan 30 13:56:54.098491 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:56:54.098502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:56:54.098513 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:56:54.098525 systemd-journald[182]: Journal started Jan 30 13:56:54.098551 systemd-journald[182]: Runtime Journal (/run/log/journal/7bbea259d09e462480e79221605532bb) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:56:54.085847 systemd-modules-load[183]: Inserted module 'overlay' Jan 30 13:56:54.106947 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:56:54.132219 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:56:54.165670 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:56:54.165715 kernel: Bridge firewalling registered Jan 30 13:56:54.152496 systemd-modules-load[183]: Inserted module 'br_netfilter' Jan 30 13:56:54.173283 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:56:54.174199 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:56:54.181277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:56:54.185310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:56:54.186625 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:56:54.196147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:56:54.201224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:56:54.206064 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:56:54.223360 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:56:54.226726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:56:54.237182 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:56:54.239235 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:56:54.245121 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:56:54.278177 dracut-cmdline[220]: dracut-dracut-053 Jan 30 13:56:54.285917 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:56:54.288757 systemd-resolved[218]: Positive Trust Anchors: Jan 30 13:56:54.288772 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:56:54.288868 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:56:54.299233 systemd-resolved[218]: Defaulting to hostname 'linux'. Jan 30 13:56:54.301445 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:56:54.302972 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:56:54.398932 kernel: SCSI subsystem initialized Jan 30 13:56:54.412933 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:56:54.427924 kernel: iscsi: registered transport (tcp) Jan 30 13:56:54.460197 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:56:54.460297 kernel: QLogic iSCSI HBA Driver Jan 30 13:56:54.530371 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:56:54.537232 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:56:54.571971 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:56:54.572100 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:56:54.573242 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:56:54.624991 kernel: raid6: avx2x4 gen() 23854 MB/s Jan 30 13:56:54.639962 kernel: raid6: avx2x2 gen() 22445 MB/s Jan 30 13:56:54.658365 kernel: raid6: avx2x1 gen() 16965 MB/s Jan 30 13:56:54.658505 kernel: raid6: using algorithm avx2x4 gen() 23854 MB/s Jan 30 13:56:54.677122 kernel: raid6: .... xor() 4769 MB/s, rmw enabled Jan 30 13:56:54.677273 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:56:54.704968 kernel: xor: automatically using best checksumming function avx Jan 30 13:56:54.964957 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:56:54.984278 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:56:54.992336 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 30 13:56:55.023247 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 30 13:56:55.032789 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:56:55.041325 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:56:55.074675 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 30 13:56:55.130981 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:56:55.139268 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:56:55.227478 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:56:55.236257 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:56:55.260131 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:56:55.268377 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:56:55.269929 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:56:55.272705 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:56:55.279924 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:56:55.318364 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:56:55.383952 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:56:55.386932 kernel: scsi host0: Virtio SCSI HBA Jan 30 13:56:55.390927 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 30 13:56:55.465320 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 30 13:56:55.465531 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:56:55.465557 kernel: ACPI: bus type USB registered Jan 30 13:56:55.465575 kernel: usbcore: registered new interface driver usbfs Jan 30 13:56:55.465610 kernel: AES CTR mode by8 optimization enabled Jan 30 13:56:55.465644 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:56:55.465664 kernel: GPT:9289727 != 125829119 Jan 30 13:56:55.465684 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:56:55.465704 kernel: GPT:9289727 != 125829119 Jan 30 13:56:55.465723 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:56:55.465743 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:56:55.465763 kernel: usbcore: registered new interface driver hub Jan 30 13:56:55.465782 kernel: usbcore: registered new device driver usb Jan 30 13:56:55.449059 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:56:55.472123 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 30 13:56:55.473481 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Jan 30 13:56:55.449202 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:56:55.463716 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:56:55.464461 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:56:55.464679 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:56:55.471100 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:56:55.484105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:56:55.498980 kernel: libata version 3.00 loaded. 
Jan 30 13:56:55.522483 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 13:56:55.549203 kernel: scsi host1: ata_piix Jan 30 13:56:55.549531 kernel: scsi host2: ata_piix Jan 30 13:56:55.549764 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 30 13:56:55.549789 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 30 13:56:55.573637 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:56:55.632095 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (447) Jan 30 13:56:55.632145 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (448) Jan 30 13:56:55.632173 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 30 13:56:55.632469 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 30 13:56:55.632660 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 30 13:56:55.632852 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 30 13:56:55.633091 kernel: hub 1-0:1.0: USB hub found Jan 30 13:56:55.633414 kernel: hub 1-0:1.0: 2 ports detected Jan 30 13:56:55.638441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:56:55.655266 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:56:55.670345 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:56:55.671295 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:56:55.681331 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:56:55.699312 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:56:55.704242 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:56:55.723166 disk-uuid[540]: Primary Header is updated. Jan 30 13:56:55.723166 disk-uuid[540]: Secondary Entries is updated. Jan 30 13:56:55.723166 disk-uuid[540]: Secondary Header is updated. Jan 30 13:56:55.730938 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:56:55.740931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:56:55.746148 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:56:55.755931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:56:56.752060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:56:56.752181 disk-uuid[541]: The operation has completed successfully. Jan 30 13:56:56.819980 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:56:56.820186 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:56:56.840235 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:56:56.856510 sh[562]: Success Jan 30 13:56:56.877941 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:56:56.973021 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:56:56.988113 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:56:56.989348 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 13:56:57.042682 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:56:57.042782 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:56:57.045231 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:56:57.049369 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:56:57.049460 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:56:57.074336 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:56:57.076148 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:56:57.082213 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:56:57.090104 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:56:57.112867 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:56:57.112961 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:56:57.112990 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:56:57.120918 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:56:57.139295 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:56:57.138724 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:56:57.155484 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:56:57.164430 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:56:57.260017 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:56:57.289443 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:56:57.330757 systemd-networkd[746]: lo: Link UP Jan 30 13:56:57.331819 systemd-networkd[746]: lo: Gained carrier Jan 30 13:56:57.336585 systemd-networkd[746]: Enumeration completed Jan 30 13:56:57.336762 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:56:57.337615 systemd[1]: Reached target network.target - Network. Jan 30 13:56:57.339589 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 13:56:57.339594 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 30 13:56:57.343219 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:56:57.343227 systemd-networkd[746]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:56:57.346423 systemd-networkd[746]: eth0: Link UP Jan 30 13:56:57.346432 systemd-networkd[746]: eth0: Gained carrier Jan 30 13:56:57.346454 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 13:56:57.356546 systemd-networkd[746]: eth1: Link UP Jan 30 13:56:57.356554 systemd-networkd[746]: eth1: Gained carrier Jan 30 13:56:57.356579 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 13:56:57.371138 systemd-networkd[746]: eth1: DHCPv4 address 10.124.0.2/20 acquired from 169.254.169.253 Jan 30 13:56:57.377043 systemd-networkd[746]: eth0: DHCPv4 address 64.227.111.225/20, gateway 64.227.96.1 acquired from 169.254.169.253 Jan 30 13:56:57.401035 ignition[669]: Ignition 2.19.0 Jan 30 13:56:57.401056 ignition[669]: Stage: fetch-offline Jan 30 13:56:57.401120 ignition[669]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:56:57.403845 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:56:57.401136 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:56:57.401332 ignition[669]: parsed url from cmdline: "" Jan 30 13:56:57.401339 ignition[669]: no config URL provided Jan 30 13:56:57.401349 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:56:57.401364 ignition[669]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:56:57.401373 ignition[669]: failed to fetch config: resource requires networking Jan 30 13:56:57.402123 ignition[669]: Ignition finished successfully Jan 30 13:56:57.414302 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 13:56:57.443595 ignition[756]: Ignition 2.19.0 Jan 30 13:56:57.443613 ignition[756]: Stage: fetch Jan 30 13:56:57.443937 ignition[756]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:56:57.443956 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:56:57.444125 ignition[756]: parsed url from cmdline: "" Jan 30 13:56:57.444131 ignition[756]: no config URL provided Jan 30 13:56:57.444140 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:56:57.444155 ignition[756]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:56:57.444186 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 30 13:56:57.461340 ignition[756]: GET result: OK Jan 30 13:56:57.462170 ignition[756]: parsing config with SHA512: 0c95b252965c1bf12d86b47b973a3239c1071ded128c6e6e9740625d6cb1d300f368d312f7c7ed01ef1662127a06522b7ba1a911a17e67873d58ed296b1269cc Jan 30 13:56:57.469816 unknown[756]: fetched base config from "system" Jan 30 13:56:57.469835 unknown[756]: fetched base config from "system" Jan 30 13:56:57.469844 unknown[756]: fetched user config from "digitalocean" Jan 30 13:56:57.474334 ignition[756]: fetch: fetch complete Jan 30 13:56:57.474371 ignition[756]: fetch: fetch passed Jan 30 13:56:57.474500 ignition[756]: Ignition finished successfully Jan 30 13:56:57.477707 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:56:57.490310 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:56:57.517821 ignition[762]: Ignition 2.19.0 Jan 30 13:56:57.517841 ignition[762]: Stage: kargs Jan 30 13:56:57.518166 ignition[762]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:56:57.518186 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:56:57.519805 ignition[762]: kargs: kargs passed Jan 30 13:56:57.524077 ignition[762]: Ignition finished successfully Jan 30 13:56:57.526353 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:56:57.534198 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 30 13:56:57.576633 ignition[768]: Ignition 2.19.0 Jan 30 13:56:57.576651 ignition[768]: Stage: disks Jan 30 13:56:57.577013 ignition[768]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:56:57.577033 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:56:57.580578 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:56:57.578610 ignition[768]: disks: disks passed Jan 30 13:56:57.582216 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:56:57.578696 ignition[768]: Ignition finished successfully Jan 30 13:56:57.588464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:56:57.589647 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:56:57.590836 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:56:57.592461 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:56:57.600292 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:56:57.632870 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:56:57.641408 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:56:57.648095 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:56:57.780919 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:56:57.781957 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:56:57.783682 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:56:57.797134 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:56:57.800534 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:56:57.803142 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 30 13:56:57.809163 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:56:57.811399 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:56:57.811449 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:56:57.831096 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (784) Jan 30 13:56:57.831141 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:56:57.831171 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:56:57.831200 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:56:57.829316 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:56:57.835434 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:56:57.843368 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:56:57.856777 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:56:57.940253 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:56:57.953110 coreos-metadata[786]: Jan 30 13:56:57.952 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:56:57.957559 coreos-metadata[787]: Jan 30 13:56:57.956 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:56:57.960499 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:56:57.969666 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:56:57.972496 coreos-metadata[786]: Jan 30 13:56:57.972 INFO Fetch successful Jan 30 13:56:57.974585 coreos-metadata[787]: Jan 30 13:56:57.974 INFO Fetch successful Jan 30 13:56:57.985329 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:56:57.988972 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 30 13:56:57.992697 coreos-metadata[787]: Jan 30 13:56:57.991 INFO wrote hostname ci-4081.3.0-2-c6825061e7 to /sysroot/etc/hostname Jan 30 13:56:57.992049 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 30 13:56:57.994331 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:56:58.133603 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:56:58.151126 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:56:58.154151 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:56:58.165522 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:56:58.166998 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:56:58.205409 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:56:58.207589 ignition[904]: INFO : Ignition 2.19.0 Jan 30 13:56:58.207589 ignition[904]: INFO : Stage: mount Jan 30 13:56:58.207589 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:56:58.207589 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:56:58.212072 ignition[904]: INFO : mount: mount passed Jan 30 13:56:58.212072 ignition[904]: INFO : Ignition finished successfully Jan 30 13:56:58.214012 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:56:58.220057 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:56:58.248246 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:56:58.264949 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (917) Jan 30 13:56:58.271171 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:56:58.271288 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:56:58.271303 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:56:58.278274 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:56:58.280568 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
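
Editor's note: the metadata hostname agent above fetches http://169.254.169.254/metadata/v1.json and writes the droplet hostname into /sysroot/etc/hostname. A short Python sketch of the same flow; the "hostname" key follows the DigitalOcean metadata format and the target path is taken from the log, with error handling kept minimal.

    import json
    import urllib.request

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    def fetch_hostname(url: str = METADATA_URL, timeout: float = 5.0) -> str:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            metadata = json.load(resp)
        return metadata["hostname"]  # e.g. ci-4081.3.0-2-c6825061e7 in the log above

    def write_hostname(hostname: str, path: str = "/sysroot/etc/hostname") -> None:
        with open(path, "w") as f:
            f.write(hostname + "\n")

    if __name__ == "__main__":
        name = fetch_hostname()
        write_hostname(name)
        print("wrote hostname", name)
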
Jan 30 13:56:58.312124 ignition[934]: INFO : Ignition 2.19.0 Jan 30 13:56:58.312124 ignition[934]: INFO : Stage: files Jan 30 13:56:58.313613 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:56:58.313613 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:56:58.315251 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:56:58.316065 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:56:58.316065 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:56:58.320424 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:56:58.321640 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:56:58.322481 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:56:58.321855 unknown[934]: wrote ssh authorized keys file for user: core Jan 30 13:56:58.324315 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:56:58.324315 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 13:56:58.368547 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:56:58.455982 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:56:58.455982 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:56:58.458406 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:56:58.458406 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:56:58.458406 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:56:58.458406 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:56:58.458406 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:56:58.458406 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:56:58.458406 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:56:58.458406 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:56:58.458406 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:56:58.458406 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:56:58.471097 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:56:58.471097 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:56:58.471097 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 13:56:58.957222 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:56:58.982075 systemd-networkd[746]: eth0: Gained IPv6LL Jan 30 13:56:59.253140 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:56:59.255135 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:56:59.257387 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:56:59.257387 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:56:59.257387 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:56:59.257387 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:56:59.262343 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:56:59.262343 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:56:59.262343 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:56:59.262343 ignition[934]: INFO : files: files passed Jan 30 13:56:59.262343 ignition[934]: INFO : Ignition finished successfully Jan 30 13:56:59.260211 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:56:59.272364 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:56:59.277195 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:56:59.279719 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:56:59.279836 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:56:59.309080 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:56:59.309080 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:56:59.312245 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:56:59.313697 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:56:59.315168 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:56:59.329280 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:56:59.366483 systemd-networkd[746]: eth1: Gained IPv6LL Jan 30 13:56:59.388479 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jan 30 13:56:59.388673 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:56:59.391459 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:56:59.392342 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:56:59.393801 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:56:59.399372 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:56:59.431376 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:56:59.439330 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:56:59.476252 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:56:59.477263 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:56:59.479704 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:56:59.481045 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:56:59.481301 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:56:59.483385 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:56:59.485334 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:56:59.486671 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:56:59.487946 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:56:59.489523 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:56:59.490884 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:56:59.492489 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:56:59.494006 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:56:59.495373 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:56:59.496738 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:56:59.497697 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:56:59.497970 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:56:59.499731 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:56:59.500503 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:56:59.501936 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:56:59.502147 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:56:59.504505 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:56:59.504779 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:56:59.506698 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:56:59.507093 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:56:59.508722 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:56:59.509037 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:56:59.509815 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:56:59.510064 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 30 13:56:59.527008 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:56:59.528540 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:56:59.528863 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:56:59.533327 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:56:59.534803 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:56:59.536055 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:56:59.538767 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:56:59.540377 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:56:59.548433 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:56:59.552715 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:56:59.575931 ignition[986]: INFO : Ignition 2.19.0 Jan 30 13:56:59.575931 ignition[986]: INFO : Stage: umount Jan 30 13:56:59.575931 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:56:59.575931 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:56:59.589608 ignition[986]: INFO : umount: umount passed Jan 30 13:56:59.589608 ignition[986]: INFO : Ignition finished successfully Jan 30 13:56:59.584983 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:56:59.588682 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:56:59.588875 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:56:59.592322 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:56:59.598287 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:56:59.599383 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:56:59.599536 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:56:59.617815 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:56:59.617971 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:56:59.619121 systemd[1]: Stopped target network.target - Network. Jan 30 13:56:59.620387 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:56:59.620530 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:56:59.621767 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:56:59.639188 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:56:59.642259 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:56:59.646044 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:56:59.647304 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:56:59.648472 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:56:59.648561 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:56:59.651484 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:56:59.651580 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:56:59.652642 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:56:59.652753 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:56:59.653836 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 30 13:56:59.653949 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:56:59.655559 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:56:59.657744 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:56:59.659499 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:56:59.659628 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:56:59.660592 systemd-networkd[746]: eth1: DHCPv6 lease lost Jan 30 13:56:59.663832 systemd-networkd[746]: eth0: DHCPv6 lease lost Jan 30 13:56:59.665419 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:56:59.665555 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:56:59.667392 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:56:59.667633 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:56:59.672013 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:56:59.672630 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:56:59.675727 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:56:59.675850 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:56:59.683169 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:56:59.686389 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:56:59.686549 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:56:59.687997 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:56:59.688106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:56:59.689151 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:56:59.689239 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:56:59.690512 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:56:59.690602 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:56:59.693395 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:56:59.716581 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:56:59.716880 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:56:59.720326 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:56:59.720443 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:56:59.723530 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:56:59.723613 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:56:59.724838 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:56:59.724963 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:56:59.728657 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:56:59.728776 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:56:59.730727 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:56:59.730840 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:56:59.740365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 30 13:56:59.741236 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:56:59.741378 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:56:59.742246 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:56:59.742322 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:56:59.747301 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:56:59.747521 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:56:59.762237 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:56:59.762433 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:56:59.764654 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:56:59.774424 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:56:59.789813 systemd[1]: Switching root. Jan 30 13:56:59.836165 systemd-journald[182]: Journal stopped Jan 30 13:57:02.103217 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Jan 30 13:57:02.103371 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:57:02.103408 kernel: SELinux: policy capability open_perms=1 Jan 30 13:57:02.103439 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:57:02.103470 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:57:02.103506 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:57:02.103533 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:57:02.103559 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:57:02.103596 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:57:02.103621 kernel: audit: type=1403 audit(1738245420.104:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:57:02.103660 systemd[1]: Successfully loaded SELinux policy in 58.595ms. Jan 30 13:57:02.103705 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.076ms. Jan 30 13:57:02.103741 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:57:02.103780 systemd[1]: Detected virtualization kvm. Jan 30 13:57:02.103826 systemd[1]: Detected architecture x86-64. Jan 30 13:57:02.103847 systemd[1]: Detected first boot. Jan 30 13:57:02.103880 systemd[1]: Hostname set to . Jan 30 13:57:02.103954 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:57:02.103988 zram_generator::config[1029]: No configuration found. Jan 30 13:57:02.104017 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:57:02.104047 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:57:02.104074 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:57:02.104109 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:57:02.104144 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:57:02.104183 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:57:02.104213 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
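
Editor's note: after switch-root, systemd detects the first boot and initializes the machine ID from the VM UUID. On a KVM guest that UUID is exposed through DMI; the sketch below derives a 32-character machine-id string from it. This is a simplification: the precedence and normalization systemd actually applies is more involved.

    def machine_id_from_vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> str:
        # The SMBIOS product UUID is a dashed hex string; a machine-id is the same
        # 128 bits written as 32 lowercase hex digits with the dashes removed.
        with open(path) as f:
            uuid = f.read().strip()
        return uuid.replace("-", "").lower()

    if __name__ == "__main__":
        print(machine_id_from_vm_uuid())
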
Jan 30 13:57:02.104241 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:57:02.104270 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:57:02.104298 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:57:02.104327 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:57:02.104355 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:57:02.104390 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:57:02.104439 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:57:02.104476 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:57:02.104508 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:57:02.104546 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:57:02.104583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:57:02.104625 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:57:02.104654 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:57:02.104685 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:57:02.104719 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:57:02.104758 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:57:02.104795 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:57:02.104831 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:57:02.104859 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:57:02.119256 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:57:02.119349 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:57:02.119380 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:57:02.119425 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:57:02.119457 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:57:02.119607 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:57:02.119638 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:57:02.119665 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:57:02.119692 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:57:02.119720 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:57:02.119764 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:57:02.119791 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:57:02.119823 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:57:02.119852 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:57:02.119879 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 30 13:57:02.119939 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:57:02.119971 systemd[1]: Reached target machines.target - Containers. Jan 30 13:57:02.119999 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:57:02.120027 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:57:02.120056 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:57:02.120087 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:57:02.120114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:57:02.120140 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:57:02.120168 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:57:02.120195 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:57:02.120222 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:57:02.120250 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:57:02.120278 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:57:02.120305 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:57:02.120338 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:57:02.120366 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:57:02.120402 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:57:02.120426 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:57:02.120452 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:57:02.120480 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:57:02.120508 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:57:02.120535 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:57:02.120562 systemd[1]: Stopped verity-setup.service. Jan 30 13:57:02.120594 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:57:02.120620 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:57:02.120647 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:57:02.120678 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:57:02.120705 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:57:02.120731 kernel: ACPI: bus type drm_connector registered Jan 30 13:57:02.120762 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:57:02.120789 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:57:02.120816 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:57:02.120843 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:57:02.120870 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
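
Editor's note: the modprobe@.service template instances started above each load a single kernel module (configfs, dm_mod, drm, efi_pstore, fuse, loop). A trivial sketch of the same effect, iterating modprobe over that list; it needs root and is shown only to make the template-unit pattern concrete.

    import subprocess

    MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

    def load_modules(modules=MODULES) -> None:
        for mod in modules:
            # Equivalent in spirit to "systemctl start modprobe@<mod>.service"
            subprocess.run(["modprobe", mod], check=False)

    if __name__ == "__main__":
        load_modules()
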
Jan 30 13:57:02.120925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:57:02.120959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:57:02.120987 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:57:02.121149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:57:02.121178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:57:02.121206 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:57:02.121234 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:57:02.121263 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:57:02.121394 kernel: fuse: init (API version 7.39) Jan 30 13:57:02.121423 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:57:02.121451 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:57:02.121482 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:57:02.121698 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:57:02.121736 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:57:02.121764 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:57:02.121791 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:57:02.122011 kernel: loop: module loaded Jan 30 13:57:02.122044 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:57:02.122073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:57:02.122148 systemd-journald[1105]: Collecting audit messages is disabled. Jan 30 13:57:02.122719 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:57:02.122754 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:57:02.122782 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:57:02.122818 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:57:02.122854 systemd-journald[1105]: Journal started Jan 30 13:57:02.123496 systemd-journald[1105]: Runtime Journal (/run/log/journal/7bbea259d09e462480e79221605532bb) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:57:01.465451 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:57:01.497517 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:57:01.498252 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:57:02.137388 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:57:02.137478 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:57:02.140871 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:57:02.146830 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:57:02.147171 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:57:02.149733 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 30 13:57:02.150678 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:57:02.157044 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:57:02.158752 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:57:02.163004 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:57:02.210052 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 13:57:02.225961 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:57:02.251194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:57:02.263994 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:57:02.273202 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:57:02.285707 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:57:02.285189 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:57:02.302625 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:57:02.303822 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:57:02.317751 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:57:02.335154 kernel: loop1: detected capacity change from 0 to 218376 Jan 30 13:57:02.326256 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:57:02.332728 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:57:02.368280 systemd-journald[1105]: Time spent on flushing to /var/log/journal/7bbea259d09e462480e79221605532bb is 95.507ms for 996 entries. Jan 30 13:57:02.368280 systemd-journald[1105]: System Journal (/var/log/journal/7bbea259d09e462480e79221605532bb) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:57:02.506433 systemd-journald[1105]: Received client request to flush runtime journal. Jan 30 13:57:02.506521 kernel: loop2: detected capacity change from 0 to 142488 Jan 30 13:57:02.506550 kernel: loop3: detected capacity change from 0 to 8 Jan 30 13:57:02.377438 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:57:02.381811 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:57:02.414413 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:57:02.469263 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:57:02.481189 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:57:02.514101 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:57:02.562986 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 13:57:02.596410 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 30 13:57:02.596444 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 30 13:57:02.629208 kernel: loop5: detected capacity change from 0 to 218376 Jan 30 13:57:02.638085 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 30 13:57:02.663952 kernel: loop6: detected capacity change from 0 to 142488 Jan 30 13:57:02.715958 kernel: loop7: detected capacity change from 0 to 8 Jan 30 13:57:02.719425 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 30 13:57:02.720369 (sd-merge)[1173]: Merged extensions into '/usr'. Jan 30 13:57:02.732641 systemd[1]: Reloading requested from client PID 1130 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:57:02.732676 systemd[1]: Reloading... Jan 30 13:57:02.892917 zram_generator::config[1200]: No configuration found. Jan 30 13:57:03.184613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:57:03.339739 ldconfig[1126]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:57:03.367188 systemd[1]: Reloading finished in 633 ms. Jan 30 13:57:03.416204 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:57:03.420204 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:57:03.431230 systemd[1]: Starting ensure-sysext.service... Jan 30 13:57:03.442194 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:57:03.468114 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:57:03.468142 systemd[1]: Reloading... Jan 30 13:57:03.529714 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:57:03.530413 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:57:03.534622 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:57:03.535154 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 30 13:57:03.535280 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Jan 30 13:57:03.546555 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:57:03.546572 systemd-tmpfiles[1244]: Skipping /boot Jan 30 13:57:03.581918 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:57:03.581937 systemd-tmpfiles[1244]: Skipping /boot Jan 30 13:57:03.652921 zram_generator::config[1271]: No configuration found. Jan 30 13:57:03.877934 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:57:03.974685 systemd[1]: Reloading finished in 505 ms. Jan 30 13:57:03.994316 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:57:04.014253 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:57:04.019562 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:57:04.024207 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:57:04.037385 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
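
Editor's note: sd-merge lists the extension images it found ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean') before overlaying them onto /usr. The sketch below covers only the discovery half, listing extension images and symlinks from the usual search directories; the directory list is assumed from the systemd-sysext documentation, and the actual overlayfs merge is not reproduced here.

    import os

    # Common systemd-sysext search paths; /etc/extensions holds the kubernetes.raw
    # symlink written by the Ignition files stage earlier in this log.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions(dirs=SEARCH_DIRS):
        found = []
        for d in dirs:
            if not os.path.isdir(d):
                continue
            for entry in sorted(os.listdir(d)):
                if entry.endswith(".raw") or os.path.isdir(os.path.join(d, entry)):
                    found.append(os.path.join(d, entry))
        return found

    if __name__ == "__main__":
        for image in discover_extensions():
            print("extension image:", image)
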
Jan 30 13:57:04.044168 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:57:04.058207 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:57:04.058656 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:57:04.067601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:57:04.081409 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:57:04.093907 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:57:04.094728 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:57:04.095006 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:57:04.101444 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:57:04.101840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:57:04.104069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:57:04.111384 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:57:04.112256 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:57:04.117518 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:57:04.117929 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:57:04.125458 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:57:04.126400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:57:04.126684 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:57:04.128059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:57:04.128295 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:57:04.149535 systemd[1]: Finished ensure-sysext.service. Jan 30 13:57:04.160995 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:57:04.162425 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:57:04.178731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:57:04.180061 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:57:04.181043 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:57:04.186754 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:57:04.188715 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 30 13:57:04.190468 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:57:04.190695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:57:04.197759 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:57:04.210680 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:57:04.232437 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:57:04.244340 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:57:04.255247 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:57:04.262557 augenrules[1351]: No rules Jan 30 13:57:04.263029 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:57:04.265971 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:57:04.269771 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:57:04.278599 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:57:04.290986 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:57:04.347290 systemd-udevd[1348]: Using default interface naming scheme 'v255'. Jan 30 13:57:04.409868 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:57:04.411319 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:57:04.415187 systemd-resolved[1319]: Positive Trust Anchors: Jan 30 13:57:04.415744 systemd-resolved[1319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:57:04.415890 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:57:04.420944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:57:04.423142 systemd-resolved[1319]: Using system hostname 'ci-4081.3.0-2-c6825061e7'. Jan 30 13:57:04.431175 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:57:04.433118 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:57:04.434962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:57:04.520612 systemd-networkd[1369]: lo: Link UP Jan 30 13:57:04.520623 systemd-networkd[1369]: lo: Gained carrier Jan 30 13:57:04.522433 systemd-networkd[1369]: Enumeration completed Jan 30 13:57:04.522581 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:57:04.523703 systemd[1]: Reached target network.target - Network. Jan 30 13:57:04.535189 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
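
Editor's note: systemd-resolved reports one positive trust anchor (the root-zone DS record) and a list of negative trust anchors, i.e. domains under which DNSSEC validation is skipped (private reverse zones, home.arpa, local, and so on). Below is a small sketch of the suffix check that list implies, using a subset of the anchors from the log; resolved's real matching also covers the full reverse-DNS subtrees and per-link settings.

    NEGATIVE_ANCHORS = {
        "home.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa", "ipv4only.arpa",
        "resolver.arpa", "corp", "home", "internal", "intranet", "lan",
        "local", "private", "test",
    }

    def dnssec_skipped(name: str, anchors=NEGATIVE_ANCHORS) -> bool:
        # A name is covered if it equals an anchor or is a subdomain of one.
        labels = name.rstrip(".").lower().split(".")
        return any(".".join(labels[i:]) in anchors for i in range(len(labels)))

    if __name__ == "__main__":
        for n in ("printer.lan", "ci-4081.internal", "example.com"):
            print(n, "->", "skip DNSSEC" if dnssec_skipped(n) else "validate")
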
Jan 30 13:57:04.610105 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 30 13:57:04.610964 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:57:04.611125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:57:04.619116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:57:04.624918 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1375) Jan 30 13:57:04.629154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:57:04.637208 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:57:04.639116 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:57:04.639167 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:57:04.639186 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:57:04.642131 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:57:04.667917 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 13:57:04.671188 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 13:57:04.676390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:57:04.676673 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:57:04.677859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:57:04.678189 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:57:04.682544 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:57:04.682832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:57:04.689741 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:57:04.689880 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:57:04.696920 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:57:04.719318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:57:04.725171 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 13:57:04.742510 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:57:04.727192 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:57:04.749292 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:57:04.767411 systemd-networkd[1369]: eth0: Configuring with /run/systemd/network/10-9e:0a:fd:9d:44:ad.network. Jan 30 13:57:04.769714 systemd-networkd[1369]: eth1: Configuring with /run/systemd/network/10-7a:aa:34:13:b2:62.network. 
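
Editor's note: systemd-networkd picks up per-interface configuration from /run/systemd/network/10-<mac>.network files, written here for eth0 and eth1 with the MAC addresses shown in the log. A hedged sketch of generating such a drop-in: the [Match]/[Network] keys are standard systemd.network options, while the DHCP=ipv4 value is an assumption based on the DHCPv4 leases earlier in this log.

    import os

    def write_network_unit(mac: str, directory: str = "/run/systemd/network") -> str:
        # Match the interface by MAC and enable DHCP, as the generated units here do.
        unit = (
            "[Match]\n"
            f"MACAddress={mac}\n"
            "\n"
            "[Network]\n"
            "DHCP=ipv4\n"
        )
        os.makedirs(directory, exist_ok=True)
        path = os.path.join(directory, f"10-{mac}.network")
        with open(path, "w") as f:
            f.write(unit)
        return path

    if __name__ == "__main__":
        print(write_network_unit("9e:0a:fd:9d:44:ad"))  # eth0's MAC from the log
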
Jan 30 13:57:04.771234 systemd-networkd[1369]: eth0: Link UP Jan 30 13:57:04.771245 systemd-networkd[1369]: eth0: Gained carrier Jan 30 13:57:04.779790 systemd-networkd[1369]: eth1: Link UP Jan 30 13:57:04.780010 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:57:04.779804 systemd-networkd[1369]: eth1: Gained carrier Jan 30 13:57:04.787529 systemd-timesyncd[1336]: Network configuration changed, trying to establish connection. Jan 30 13:57:04.876177 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:57:04.881927 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 13:57:04.883914 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 13:57:04.894234 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:57:04.891373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:57:04.896188 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 13:57:04.896270 kernel: [drm] features: -context_init Jan 30 13:57:04.899225 kernel: [drm] number of scanouts: 1 Jan 30 13:57:04.899309 kernel: [drm] number of cap sets: 0 Jan 30 13:57:04.906007 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:57:04.915378 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 13:57:04.906211 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:57:04.918222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:57:04.934943 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 13:57:04.938325 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:57:04.989555 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 13:57:04.975717 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:57:04.976038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:57:05.003903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:57:05.075956 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:57:05.101995 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:57:05.116344 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:57:05.118303 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:57:05.138473 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:57:05.178431 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:57:05.179604 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:57:05.179795 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:57:05.180322 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:57:05.183487 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:57:05.184852 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:57:05.185301 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:57:05.185483 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 30 13:57:05.185623 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:57:05.185675 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:57:05.185789 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:57:05.187675 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:57:05.196383 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:57:05.202361 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:57:05.205424 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:57:05.207672 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:57:05.209798 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:57:05.212075 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:57:05.212647 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:57:05.212693 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:57:05.224072 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:57:05.227546 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:57:05.238233 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:57:05.243058 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:57:05.253852 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:57:05.265239 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:57:05.266061 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:57:05.272266 jq[1433]: false Jan 30 13:57:05.276146 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:57:05.288028 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:57:05.300257 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:57:05.313599 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:57:05.324429 coreos-metadata[1431]: Jan 30 13:57:05.324 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:57:05.325552 dbus-daemon[1432]: [system] SELinux support is enabled Jan 30 13:57:05.327108 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:57:05.328153 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:57:05.328747 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:57:05.339985 coreos-metadata[1431]: Jan 30 13:57:05.337 INFO Fetch successful Jan 30 13:57:05.340384 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:57:05.354043 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:57:05.355317 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 30 13:57:05.364702 jq[1444]: true Jan 30 13:57:05.364841 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:57:05.376504 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:57:05.376977 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:57:05.386711 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:57:05.387769 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:57:05.415833 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:57:05.417008 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:57:05.417729 extend-filesystems[1434]: Found loop4 Jan 30 13:57:05.425063 extend-filesystems[1434]: Found loop5 Jan 30 13:57:05.425063 extend-filesystems[1434]: Found loop6 Jan 30 13:57:05.425063 extend-filesystems[1434]: Found loop7 Jan 30 13:57:05.425063 extend-filesystems[1434]: Found vda Jan 30 13:57:05.425063 extend-filesystems[1434]: Found vda1 Jan 30 13:57:05.425063 extend-filesystems[1434]: Found vda2 Jan 30 13:57:05.425063 extend-filesystems[1434]: Found vda3 Jan 30 13:57:05.425063 extend-filesystems[1434]: Found usr Jan 30 13:57:05.425063 extend-filesystems[1434]: Found vda4 Jan 30 13:57:05.425063 extend-filesystems[1434]: Found vda6 Jan 30 13:57:05.425063 extend-filesystems[1434]: Found vda7 Jan 30 13:57:05.425063 extend-filesystems[1434]: Found vda9 Jan 30 13:57:05.425063 extend-filesystems[1434]: Checking size of /dev/vda9 Jan 30 13:57:05.422666 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:57:05.533724 update_engine[1443]: I20250130 13:57:05.461364 1443 main.cc:92] Flatcar Update Engine starting Jan 30 13:57:05.533724 update_engine[1443]: I20250130 13:57:05.480065 1443 update_check_scheduler.cc:74] Next update check in 4m49s Jan 30 13:57:05.534174 tar[1449]: linux-amd64/LICENSE Jan 30 13:57:05.534174 tar[1449]: linux-amd64/helm Jan 30 13:57:05.422838 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 30 13:57:05.540257 jq[1451]: true Jan 30 13:57:05.422871 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:57:05.465510 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:57:05.465795 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:57:05.490446 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:57:05.490904 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:57:05.499764 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:57:05.547648 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:57:05.553684 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 30 13:57:05.569095 extend-filesystems[1434]: Resized partition /dev/vda9 Jan 30 13:57:05.593604 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1378) Jan 30 13:57:05.593639 extend-filesystems[1489]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:57:05.617196 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 13:57:05.658035 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:57:05.673189 systemd-logind[1442]: New seat seat0. Jan 30 13:57:05.701260 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:57:05.701281 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:57:05.703261 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:57:05.755565 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:57:05.761453 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:57:05.788096 systemd[1]: Starting sshkeys.service... Jan 30 13:57:05.800922 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 13:57:05.848259 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:57:05.856169 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:57:05.868363 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:57:05.877919 extend-filesystems[1489]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:57:05.877919 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 13:57:05.877919 extend-filesystems[1489]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 13:57:05.883566 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Jan 30 13:57:05.883566 extend-filesystems[1434]: Found vdb Jan 30 13:57:05.880197 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:57:05.880691 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:57:05.936125 coreos-metadata[1503]: Jan 30 13:57:05.933 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:57:05.946462 coreos-metadata[1503]: Jan 30 13:57:05.946 INFO Fetch successful Jan 30 13:57:05.964841 unknown[1503]: wrote ssh authorized keys file for user: core Jan 30 13:57:06.037738 update-ssh-keys[1512]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:57:06.038028 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:57:06.045206 systemd[1]: Finished sshkeys.service. Jan 30 13:57:06.065494 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:57:06.152748 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:57:06.166146 containerd[1465]: time="2025-01-30T13:57:06.166033930Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:57:06.168144 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:57:06.179669 systemd[1]: Started sshd@0-64.227.111.225:22-147.75.109.163:46608.service - OpenSSH per-connection server daemon (147.75.109.163:46608). Jan 30 13:57:06.227143 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:57:06.229020 systemd[1]: Finished issuegen.service - Generate /run/issue. 
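extend-filesystems.service above grows the root filesystem in place: resize2fs is run against /dev/vda9 while it is mounted on /, taking it from 553472 to 15121403 4k blocks. The manual equivalent, assuming the underlying partition has already been enlarged, is roughly:

    # Confirm which block device backs / (the log shows /dev/vda9).
    findmnt -no SOURCE /
    # Grow the mounted ext4 filesystem to fill its partition; ext4 supports
    # online growth, so no unmount is required.
    resize2fs /dev/vda9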
Jan 30 13:57:06.241455 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:57:06.252310 containerd[1465]: time="2025-01-30T13:57:06.252248759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:57:06.255822 containerd[1465]: time="2025-01-30T13:57:06.255754310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:57:06.256013 containerd[1465]: time="2025-01-30T13:57:06.255988820Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:57:06.256587 containerd[1465]: time="2025-01-30T13:57:06.256093822Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:57:06.256587 containerd[1465]: time="2025-01-30T13:57:06.256329489Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:57:06.256587 containerd[1465]: time="2025-01-30T13:57:06.256366949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:57:06.256587 containerd[1465]: time="2025-01-30T13:57:06.256462936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:57:06.256587 containerd[1465]: time="2025-01-30T13:57:06.256485436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:57:06.257102 containerd[1465]: time="2025-01-30T13:57:06.257072543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:57:06.257197 containerd[1465]: time="2025-01-30T13:57:06.257182660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:57:06.257278 containerd[1465]: time="2025-01-30T13:57:06.257263113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:57:06.257707 containerd[1465]: time="2025-01-30T13:57:06.257337222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:57:06.257707 containerd[1465]: time="2025-01-30T13:57:06.257443787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:57:06.257707 containerd[1465]: time="2025-01-30T13:57:06.257672384Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:57:06.258182 containerd[1465]: time="2025-01-30T13:57:06.258149861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:57:06.258381 containerd[1465]: time="2025-01-30T13:57:06.258347060Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:57:06.258757 containerd[1465]: time="2025-01-30T13:57:06.258626983Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:57:06.258757 containerd[1465]: time="2025-01-30T13:57:06.258713333Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:57:06.274394 containerd[1465]: time="2025-01-30T13:57:06.274303407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:57:06.275959 containerd[1465]: time="2025-01-30T13:57:06.275066034Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:57:06.275959 containerd[1465]: time="2025-01-30T13:57:06.275130521Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:57:06.275959 containerd[1465]: time="2025-01-30T13:57:06.275171427Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:57:06.275959 containerd[1465]: time="2025-01-30T13:57:06.275726332Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:57:06.276332 containerd[1465]: time="2025-01-30T13:57:06.276309684Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278281750Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278476715Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278500190Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278518329Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278538885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278558133Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278575978Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278597919Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278619332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278637495Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278654601Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:57:06.279248 containerd[1465]: time="2025-01-30T13:57:06.278672089Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:57:06.280227 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:57:06.281199 containerd[1465]: time="2025-01-30T13:57:06.280729729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.281199 containerd[1465]: time="2025-01-30T13:57:06.280767500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.281199 containerd[1465]: time="2025-01-30T13:57:06.280800681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.281199 containerd[1465]: time="2025-01-30T13:57:06.280821848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.281199 containerd[1465]: time="2025-01-30T13:57:06.280912975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.281199 containerd[1465]: time="2025-01-30T13:57:06.280979481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.281199 containerd[1465]: time="2025-01-30T13:57:06.281003137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.281681919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.282216275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.282247821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.282277874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.282302268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.282438928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.282475062Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.282531285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.282552026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.282567408Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.284192922Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.284348975Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:57:06.284419 containerd[1465]: time="2025-01-30T13:57:06.284369542Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:57:06.285517 containerd[1465]: time="2025-01-30T13:57:06.284388011Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:57:06.285517 containerd[1465]: time="2025-01-30T13:57:06.284402854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.285517 containerd[1465]: time="2025-01-30T13:57:06.284679026Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:57:06.285517 containerd[1465]: time="2025-01-30T13:57:06.284692011Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:57:06.285517 containerd[1465]: time="2025-01-30T13:57:06.284702342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:57:06.288738 containerd[1465]: time="2025-01-30T13:57:06.286587652Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:57:06.288738 containerd[1465]: time="2025-01-30T13:57:06.287338690Z" level=info msg="Connect containerd service" Jan 30 13:57:06.288738 containerd[1465]: time="2025-01-30T13:57:06.287409564Z" level=info msg="using legacy CRI server" Jan 30 13:57:06.288738 containerd[1465]: time="2025-01-30T13:57:06.287420623Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:57:06.288738 containerd[1465]: time="2025-01-30T13:57:06.288093611Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:57:06.290518 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:57:06.293326 containerd[1465]: time="2025-01-30T13:57:06.291290089Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:57:06.293326 containerd[1465]: time="2025-01-30T13:57:06.291593082Z" level=info msg="Start subscribing containerd event" Jan 30 13:57:06.293326 containerd[1465]: time="2025-01-30T13:57:06.291657174Z" level=info msg="Start recovering state" Jan 30 13:57:06.293326 containerd[1465]: time="2025-01-30T13:57:06.291729386Z" level=info msg="Start event monitor" Jan 30 13:57:06.293326 containerd[1465]: time="2025-01-30T13:57:06.291750592Z" level=info msg="Start snapshots syncer" Jan 30 13:57:06.293326 containerd[1465]: time="2025-01-30T13:57:06.291760440Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:57:06.293326 containerd[1465]: time="2025-01-30T13:57:06.291768953Z" level=info msg="Start streaming server" Jan 30 13:57:06.294380 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:57:06.295561 containerd[1465]: time="2025-01-30T13:57:06.294845116Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:57:06.295561 containerd[1465]: time="2025-01-30T13:57:06.294938723Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:57:06.295561 containerd[1465]: time="2025-01-30T13:57:06.295012930Z" level=info msg="containerd successfully booted in 0.130073s" Jan 30 13:57:06.299941 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:57:06.302311 systemd[1]: Started containerd.service - containerd container runtime. 
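The containerd error about /etc/cni/net.d is expected at this stage: no pod network add-on has installed a CNI configuration yet, so the CRI plugin starts without one. For illustration only, a minimal bridge-based conflist — the file name, network name, and 10.88.0.0/16 subnet are assumptions, not values from this host — would look roughly like:

    # Hypothetical minimal CNI config; a real cluster normally gets this from
    # its network add-on rather than by hand.
    mkdir -p /etc/cni/net.d
    cat <<'EOF' > /etc/cni/net.d/10-containerd-net.conflist
    {
      "cniVersion": "0.4.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF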
Jan 30 13:57:06.363943 sshd[1526]: Accepted publickey for core from 147.75.109.163 port 46608 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:06.366869 sshd[1526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:06.383013 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:57:06.392487 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:57:06.405326 systemd-logind[1442]: New session 1 of user core. Jan 30 13:57:06.426278 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:57:06.440006 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:57:06.450815 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:57:06.470041 systemd-networkd[1369]: eth0: Gained IPv6LL Jan 30 13:57:06.471331 systemd-timesyncd[1336]: Network configuration changed, trying to establish connection. Jan 30 13:57:06.479992 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:57:06.482317 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:57:06.500065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:57:06.514500 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:57:06.598137 systemd-networkd[1369]: eth1: Gained IPv6LL Jan 30 13:57:06.599425 systemd-timesyncd[1336]: Network configuration changed, trying to establish connection. Jan 30 13:57:06.615476 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:57:06.684578 systemd[1540]: Queued start job for default target default.target. Jan 30 13:57:06.692557 systemd[1540]: Created slice app.slice - User Application Slice. Jan 30 13:57:06.692619 systemd[1540]: Reached target paths.target - Paths. Jan 30 13:57:06.692637 systemd[1540]: Reached target timers.target - Timers. Jan 30 13:57:06.696375 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:57:06.730639 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:57:06.730818 systemd[1540]: Reached target sockets.target - Sockets. Jan 30 13:57:06.730847 systemd[1540]: Reached target basic.target - Basic System. Jan 30 13:57:06.731074 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:57:06.734821 systemd[1540]: Reached target default.target - Main User Target. Jan 30 13:57:06.734971 systemd[1540]: Startup finished in 274ms. Jan 30 13:57:06.740191 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:57:06.825392 systemd[1]: Started sshd@1-64.227.111.225:22-147.75.109.163:57736.service - OpenSSH per-connection server daemon (147.75.109.163:57736). Jan 30 13:57:06.956692 sshd[1562]: Accepted publickey for core from 147.75.109.163 port 57736 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:06.962342 tar[1449]: linux-amd64/README.md Jan 30 13:57:06.963734 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:06.984619 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:57:06.991025 systemd-logind[1442]: New session 2 of user core. Jan 30 13:57:06.997228 systemd[1]: Started session-2.scope - Session 2 of User core. 
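Each "Accepted publickey" entry above logs the SHA256 fingerprint of the client key that matched /home/core/.ssh/authorized_keys, the file populated earlier by the metadata agents. To see which stored key that fingerprint corresponds to, one option is:

    # Print the fingerprint of every key in core's authorized_keys and compare
    # with the SHA256:... value shown in the sshd log entry.
    ssh-keygen -lf /home/core/.ssh/authorized_keys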
Jan 30 13:57:07.067862 sshd[1562]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:07.077448 systemd[1]: sshd@1-64.227.111.225:22-147.75.109.163:57736.service: Deactivated successfully. Jan 30 13:57:07.080326 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:57:07.083280 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:57:07.090423 systemd[1]: Started sshd@2-64.227.111.225:22-147.75.109.163:57746.service - OpenSSH per-connection server daemon (147.75.109.163:57746). Jan 30 13:57:07.094593 systemd-logind[1442]: Removed session 2. Jan 30 13:57:07.132183 sshd[1572]: Accepted publickey for core from 147.75.109.163 port 57746 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:07.133483 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:07.141383 systemd-logind[1442]: New session 3 of user core. Jan 30 13:57:07.149214 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:57:07.216657 sshd[1572]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:07.221398 systemd[1]: sshd@2-64.227.111.225:22-147.75.109.163:57746.service: Deactivated successfully. Jan 30 13:57:07.224044 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:57:07.226746 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:57:07.230357 systemd-logind[1442]: Removed session 3. Jan 30 13:57:08.032430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:57:08.035589 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:57:08.037506 systemd[1]: Startup finished in 1.323s (kernel) + 6.344s (initrd) + 7.989s (userspace) = 15.657s. Jan 30 13:57:08.042988 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:57:08.990755 kubelet[1583]: E0130 13:57:08.990671 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:57:08.994359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:57:08.994566 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:57:08.994996 systemd[1]: kubelet.service: Consumed 1.367s CPU time. Jan 30 13:57:17.231345 systemd[1]: Started sshd@3-64.227.111.225:22-147.75.109.163:34548.service - OpenSSH per-connection server daemon (147.75.109.163:34548). Jan 30 13:57:17.277876 sshd[1595]: Accepted publickey for core from 147.75.109.163 port 34548 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:17.280219 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:17.290476 systemd-logind[1442]: New session 4 of user core. Jan 30 13:57:17.297223 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:57:17.361419 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:17.374680 systemd[1]: sshd@3-64.227.111.225:22-147.75.109.163:34548.service: Deactivated successfully. Jan 30 13:57:17.377397 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:57:17.380160 systemd-logind[1442]: Session 4 logged out. 
Waiting for processes to exit. Jan 30 13:57:17.386424 systemd[1]: Started sshd@4-64.227.111.225:22-147.75.109.163:34560.service - OpenSSH per-connection server daemon (147.75.109.163:34560). Jan 30 13:57:17.388876 systemd-logind[1442]: Removed session 4. Jan 30 13:57:17.429344 sshd[1602]: Accepted publickey for core from 147.75.109.163 port 34560 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:17.431254 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:17.441454 systemd-logind[1442]: New session 5 of user core. Jan 30 13:57:17.447262 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:57:17.504948 sshd[1602]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:17.518282 systemd[1]: sshd@4-64.227.111.225:22-147.75.109.163:34560.service: Deactivated successfully. Jan 30 13:57:17.521347 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:57:17.525261 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:57:17.530356 systemd[1]: Started sshd@5-64.227.111.225:22-147.75.109.163:34564.service - OpenSSH per-connection server daemon (147.75.109.163:34564). Jan 30 13:57:17.533404 systemd-logind[1442]: Removed session 5. Jan 30 13:57:17.575623 sshd[1609]: Accepted publickey for core from 147.75.109.163 port 34564 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:17.577851 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:17.588015 systemd-logind[1442]: New session 6 of user core. Jan 30 13:57:17.596232 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:57:17.663574 sshd[1609]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:17.677431 systemd[1]: sshd@5-64.227.111.225:22-147.75.109.163:34564.service: Deactivated successfully. Jan 30 13:57:17.680652 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:57:17.684261 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:57:17.690398 systemd[1]: Started sshd@6-64.227.111.225:22-147.75.109.163:34568.service - OpenSSH per-connection server daemon (147.75.109.163:34568). Jan 30 13:57:17.693095 systemd-logind[1442]: Removed session 6. Jan 30 13:57:17.733844 sshd[1616]: Accepted publickey for core from 147.75.109.163 port 34568 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:17.735993 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:17.743269 systemd-logind[1442]: New session 7 of user core. Jan 30 13:57:17.756235 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:57:17.830152 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:57:17.831157 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:57:17.850080 sudo[1619]: pam_unix(sudo:session): session closed for user root Jan 30 13:57:17.855204 sshd[1616]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:17.872446 systemd[1]: sshd@6-64.227.111.225:22-147.75.109.163:34568.service: Deactivated successfully. Jan 30 13:57:17.875262 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:57:17.877608 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. 
Jan 30 13:57:17.884538 systemd[1]: Started sshd@7-64.227.111.225:22-147.75.109.163:34580.service - OpenSSH per-connection server daemon (147.75.109.163:34580). Jan 30 13:57:17.886639 systemd-logind[1442]: Removed session 7. Jan 30 13:57:17.936463 sshd[1624]: Accepted publickey for core from 147.75.109.163 port 34580 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:17.938536 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:17.948044 systemd-logind[1442]: New session 8 of user core. Jan 30 13:57:17.954242 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:57:18.018839 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:57:18.019990 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:57:18.025564 sudo[1628]: pam_unix(sudo:session): session closed for user root Jan 30 13:57:18.034033 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:57:18.035135 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:57:18.056464 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:57:18.060275 auditctl[1631]: No rules Jan 30 13:57:18.060764 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:57:18.061018 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:57:18.068548 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:57:18.110387 augenrules[1649]: No rules Jan 30 13:57:18.111504 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:57:18.113405 sudo[1627]: pam_unix(sudo:session): session closed for user root Jan 30 13:57:18.117249 sshd[1624]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:18.130076 systemd[1]: sshd@7-64.227.111.225:22-147.75.109.163:34580.service: Deactivated successfully. Jan 30 13:57:18.132539 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:57:18.135261 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:57:18.141479 systemd[1]: Started sshd@8-64.227.111.225:22-147.75.109.163:34584.service - OpenSSH per-connection server daemon (147.75.109.163:34584). Jan 30 13:57:18.144660 systemd-logind[1442]: Removed session 8. Jan 30 13:57:18.187062 sshd[1657]: Accepted publickey for core from 147.75.109.163 port 34584 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:18.189613 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:18.198645 systemd-logind[1442]: New session 9 of user core. Jan 30 13:57:18.201167 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:57:18.265918 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:57:18.266315 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:57:18.815368 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:57:18.817167 (dockerd)[1675]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:57:19.014561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 30 13:57:19.026643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:57:19.207113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:57:19.221525 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:57:19.322988 kubelet[1688]: E0130 13:57:19.321559 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:57:19.325708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:57:19.325866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:57:19.430726 dockerd[1675]: time="2025-01-30T13:57:19.430636224Z" level=info msg="Starting up" Jan 30 13:57:19.614451 dockerd[1675]: time="2025-01-30T13:57:19.614219611Z" level=info msg="Loading containers: start." Jan 30 13:57:19.765059 kernel: Initializing XFRM netlink socket Jan 30 13:57:19.800848 systemd-timesyncd[1336]: Network configuration changed, trying to establish connection. Jan 30 13:57:19.868080 systemd-networkd[1369]: docker0: Link UP Jan 30 13:57:19.900827 dockerd[1675]: time="2025-01-30T13:57:19.900414187Z" level=info msg="Loading containers: done." Jan 30 13:57:19.927781 dockerd[1675]: time="2025-01-30T13:57:19.927710909Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:57:19.928118 dockerd[1675]: time="2025-01-30T13:57:19.927883171Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:57:19.928118 dockerd[1675]: time="2025-01-30T13:57:19.928083665Z" level=info msg="Daemon has completed initialization" Jan 30 13:57:20.640340 systemd-resolved[1319]: Clock change detected. Flushing caches. Jan 30 13:57:20.640480 systemd-timesyncd[1336]: Contacted time server 142.202.190.19:123 (2.flatcar.pool.ntp.org). Jan 30 13:57:20.640557 systemd-timesyncd[1336]: Initial clock synchronization to Thu 2025-01-30 13:57:20.640122 UTC. Jan 30 13:57:20.669604 dockerd[1675]: time="2025-01-30T13:57:20.669348112Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:57:20.669985 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:57:21.636321 containerd[1465]: time="2025-01-30T13:57:21.636274972Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 13:57:22.283124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184075385.mount: Deactivated successfully. 
Jan 30 13:57:23.819635 containerd[1465]: time="2025-01-30T13:57:23.819550937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:23.821673 containerd[1465]: time="2025-01-30T13:57:23.821612725Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 30 13:57:23.825308 containerd[1465]: time="2025-01-30T13:57:23.825202034Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:23.832490 containerd[1465]: time="2025-01-30T13:57:23.832404346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:23.835044 containerd[1465]: time="2025-01-30T13:57:23.834674643Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 2.198344273s" Jan 30 13:57:23.835044 containerd[1465]: time="2025-01-30T13:57:23.834753163Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 13:57:23.835978 containerd[1465]: time="2025-01-30T13:57:23.835643254Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 13:57:25.477890 containerd[1465]: time="2025-01-30T13:57:25.477808363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:25.480177 containerd[1465]: time="2025-01-30T13:57:25.480095602Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 30 13:57:25.483512 containerd[1465]: time="2025-01-30T13:57:25.483453206Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:25.498848 containerd[1465]: time="2025-01-30T13:57:25.498746385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:25.502000 containerd[1465]: time="2025-01-30T13:57:25.501298814Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.665609098s" Jan 30 13:57:25.502000 containerd[1465]: time="2025-01-30T13:57:25.501396162Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 13:57:25.502987 
containerd[1465]: time="2025-01-30T13:57:25.502925730Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 13:57:26.891094 containerd[1465]: time="2025-01-30T13:57:26.891019704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:26.896844 containerd[1465]: time="2025-01-30T13:57:26.896278524Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 30 13:57:26.899961 containerd[1465]: time="2025-01-30T13:57:26.899861127Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:26.906080 containerd[1465]: time="2025-01-30T13:57:26.905970776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:26.908556 containerd[1465]: time="2025-01-30T13:57:26.908261315Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.405080981s" Jan 30 13:57:26.908556 containerd[1465]: time="2025-01-30T13:57:26.908320913Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 13:57:26.909701 containerd[1465]: time="2025-01-30T13:57:26.909361324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:57:26.913806 systemd-resolved[1319]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 30 13:57:28.072034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3471141496.mount: Deactivated successfully. 
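The PullImage / ImageCreate entries above come from containerd's CRI plugin fetching the v1.32.1 control-plane images. The same pulls can be reproduced by hand against the socket shown in the containerd configuration earlier in the log — a sketch, assuming crictl is installed and using the default endpoint:

    # Pull one of the images via the CRI API, the same path the kubelet uses.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-proxy:v1.32.1
    # Or with containerd's own client, in the k8s.io namespace:
    ctr -n k8s.io images pull registry.k8s.io/pause:3.10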
Jan 30 13:57:28.651762 containerd[1465]: time="2025-01-30T13:57:28.651689082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:28.654495 containerd[1465]: time="2025-01-30T13:57:28.654424468Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 13:57:28.657677 containerd[1465]: time="2025-01-30T13:57:28.657585179Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:28.662494 containerd[1465]: time="2025-01-30T13:57:28.662427318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:28.663713 containerd[1465]: time="2025-01-30T13:57:28.663493551Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.754083744s" Jan 30 13:57:28.663713 containerd[1465]: time="2025-01-30T13:57:28.663555587Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 13:57:28.664667 containerd[1465]: time="2025-01-30T13:57:28.664330438Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 13:57:29.314343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365342941.mount: Deactivated successfully. Jan 30 13:57:29.985276 systemd-resolved[1319]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 30 13:57:30.180212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:57:30.187318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:57:30.402208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:57:30.414738 (kubelet)[1955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:57:30.495899 kubelet[1955]: E0130 13:57:30.495820 1955 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:57:30.499222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:57:30.499426 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
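The kubelet keeps crash-looping for the reason it states: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written during kubeadm init/join; the hand-written sketch below is only an illustration of the file's shape, with values chosen as assumptions to match the systemd cgroup driver and static-pod path that appear elsewhere in this log:

    # Hypothetical minimal KubeletConfiguration; kubeadm normally generates this.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF
    systemctl restart kubelet.service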
Jan 30 13:57:31.074784 containerd[1465]: time="2025-01-30T13:57:31.074706195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:31.077121 containerd[1465]: time="2025-01-30T13:57:31.077040770Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 30 13:57:31.080063 containerd[1465]: time="2025-01-30T13:57:31.079981508Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:31.087604 containerd[1465]: time="2025-01-30T13:57:31.087524103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:31.089305 containerd[1465]: time="2025-01-30T13:57:31.088897123Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.424522512s" Jan 30 13:57:31.089305 containerd[1465]: time="2025-01-30T13:57:31.088955809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 13:57:31.089856 containerd[1465]: time="2025-01-30T13:57:31.089675013Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:57:31.657015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056397679.mount: Deactivated successfully. 
Jan 30 13:57:31.675288 containerd[1465]: time="2025-01-30T13:57:31.675189862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:31.678278 containerd[1465]: time="2025-01-30T13:57:31.678164682Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:57:31.682423 containerd[1465]: time="2025-01-30T13:57:31.682321380Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:31.688391 containerd[1465]: time="2025-01-30T13:57:31.688287876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:31.690173 containerd[1465]: time="2025-01-30T13:57:31.689986334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 600.275749ms" Jan 30 13:57:31.690173 containerd[1465]: time="2025-01-30T13:57:31.690042364Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:57:31.691155 containerd[1465]: time="2025-01-30T13:57:31.690800316Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 13:57:32.354793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811095772.mount: Deactivated successfully. Jan 30 13:57:34.369592 containerd[1465]: time="2025-01-30T13:57:34.369528488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:34.372685 containerd[1465]: time="2025-01-30T13:57:34.372605283Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 30 13:57:34.375582 containerd[1465]: time="2025-01-30T13:57:34.375517645Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:34.384971 containerd[1465]: time="2025-01-30T13:57:34.383855504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:34.386478 containerd[1465]: time="2025-01-30T13:57:34.386416400Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.695577784s" Jan 30 13:57:34.386478 containerd[1465]: time="2025-01-30T13:57:34.386477230Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 13:57:36.662125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:57:36.669467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:57:36.729716 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-9.scope)... Jan 30 13:57:36.729745 systemd[1]: Reloading... Jan 30 13:57:36.888008 zram_generator::config[2098]: No configuration found. Jan 30 13:57:37.087987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:57:37.211495 systemd[1]: Reloading finished in 481 ms. Jan 30 13:57:37.268787 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:57:37.269187 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:57:37.269795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:57:37.275579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:57:37.421140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:57:37.434781 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:57:37.501871 kubelet[2152]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:57:37.502310 kubelet[2152]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:57:37.502359 kubelet[2152]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
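The deprecation warnings above concern flags that now have config-file equivalents: --container-runtime-endpoint and --volume-plugin-dir map to KubeletConfiguration fields, while the sandbox image that --pod-infra-container-image used to set is taken from the CRI runtime's own configuration (the SandboxImage value in containerd's CRI section earlier in the log). A sketch of the two fields, extending the hypothetical config file above with illustrative values rather than ones taken from this host:

    # Config-file equivalents of the two deprecated kubelet flags.
    cat <<'EOF' >> /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /var/lib/kubelet/volumeplugins
    EOF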
Jan 30 13:57:37.502545 kubelet[2152]: I0130 13:57:37.502513 2152 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:57:37.809970 kubelet[2152]: I0130 13:57:37.808280 2152 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:57:37.809970 kubelet[2152]: I0130 13:57:37.808334 2152 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:57:37.809970 kubelet[2152]: I0130 13:57:37.808970 2152 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:57:37.846803 kubelet[2152]: I0130 13:57:37.846609 2152 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:57:37.850912 kubelet[2152]: E0130 13:57:37.850858 2152 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.227.111.225:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.227.111.225:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:57:37.861601 kubelet[2152]: E0130 13:57:37.861540 2152 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:57:37.861601 kubelet[2152]: I0130 13:57:37.861595 2152 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:57:37.868426 kubelet[2152]: I0130 13:57:37.868365 2152 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:57:37.868699 kubelet[2152]: I0130 13:57:37.868645 2152 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:57:37.868963 kubelet[2152]: I0130 13:57:37.868693 2152 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-2-c6825061e7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:57:37.869166 kubelet[2152]: I0130 13:57:37.868964 2152 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:57:37.869166 kubelet[2152]: I0130 13:57:37.868992 2152 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:57:37.869274 kubelet[2152]: I0130 13:57:37.869193 2152 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:57:37.880736 kubelet[2152]: I0130 13:57:37.880648 2152 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:57:37.880736 kubelet[2152]: I0130 13:57:37.880699 2152 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:57:37.880736 kubelet[2152]: I0130 13:57:37.880726 2152 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:57:37.880736 kubelet[2152]: I0130 13:57:37.880740 2152 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:57:37.893910 kubelet[2152]: I0130 13:57:37.893733 2152 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:57:37.897618 kubelet[2152]: W0130 13:57:37.897523 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.111.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-2-c6825061e7&limit=500&resourceVersion=0": dial tcp 64.227.111.225:6443: connect: connection refused Jan 30 13:57:37.897787 kubelet[2152]: E0130 13:57:37.897640 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://64.227.111.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-2-c6825061e7&limit=500&resourceVersion=0\": dial tcp 64.227.111.225:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:57:37.897857 kubelet[2152]: W0130 13:57:37.897781 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.111.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.111.225:6443: connect: connection refused Jan 30 13:57:37.897982 kubelet[2152]: E0130 13:57:37.897879 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.111.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.111.225:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:57:37.899730 kubelet[2152]: I0130 13:57:37.899685 2152 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:57:37.902427 kubelet[2152]: W0130 13:57:37.902370 2152 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:57:37.903256 kubelet[2152]: I0130 13:57:37.903227 2152 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:57:37.903333 kubelet[2152]: I0130 13:57:37.903272 2152 server.go:1287] "Started kubelet" Jan 30 13:57:37.905545 kubelet[2152]: I0130 13:57:37.905215 2152 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:57:37.918004 kubelet[2152]: I0130 13:57:37.917207 2152 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:57:37.918004 kubelet[2152]: I0130 13:57:37.917830 2152 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:57:37.918917 kubelet[2152]: I0130 13:57:37.918871 2152 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:57:37.935977 kubelet[2152]: E0130 13:57:37.926514 2152 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.111.225:6443/api/v1/namespaces/default/events\": dial tcp 64.227.111.225:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-2-c6825061e7.181f7d08cac25d77 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-2-c6825061e7,UID:ci-4081.3.0-2-c6825061e7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-2-c6825061e7,},FirstTimestamp:2025-01-30 13:57:37.903246711 +0000 UTC m=+0.462783070,LastTimestamp:2025-01-30 13:57:37.903246711 +0000 UTC m=+0.462783070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-2-c6825061e7,}" Jan 30 13:57:37.936653 kubelet[2152]: I0130 13:57:37.936616 2152 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:57:37.937037 kubelet[2152]: I0130 13:57:37.937006 2152 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:57:37.940430 kubelet[2152]: E0130 13:57:37.940381 2152 kubelet_node_status.go:467] "Error getting the current node 
from lister" err="node \"ci-4081.3.0-2-c6825061e7\" not found" Jan 30 13:57:37.940671 kubelet[2152]: I0130 13:57:37.940653 2152 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:57:37.941105 kubelet[2152]: I0130 13:57:37.941081 2152 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:57:37.941282 kubelet[2152]: I0130 13:57:37.941267 2152 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:57:37.941980 kubelet[2152]: W0130 13:57:37.941898 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.111.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.111.225:6443: connect: connection refused Jan 30 13:57:37.942231 kubelet[2152]: E0130 13:57:37.942202 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.111.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.111.225:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:57:37.942615 kubelet[2152]: I0130 13:57:37.942589 2152 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:57:37.942851 kubelet[2152]: I0130 13:57:37.942820 2152 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:57:37.943311 kubelet[2152]: E0130 13:57:37.943285 2152 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:57:37.947271 kubelet[2152]: I0130 13:57:37.946793 2152 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:57:37.948132 kubelet[2152]: E0130 13:57:37.948092 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.111.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-2-c6825061e7?timeout=10s\": dial tcp 64.227.111.225:6443: connect: connection refused" interval="200ms" Jan 30 13:57:37.979095 kubelet[2152]: I0130 13:57:37.979033 2152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:57:37.979378 kubelet[2152]: I0130 13:57:37.979204 2152 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:57:37.979575 kubelet[2152]: I0130 13:57:37.979555 2152 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:57:37.979633 kubelet[2152]: I0130 13:57:37.979587 2152 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:57:37.982622 kubelet[2152]: I0130 13:57:37.982575 2152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:57:37.982878 kubelet[2152]: I0130 13:57:37.982864 2152 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:57:37.983144 kubelet[2152]: I0130 13:57:37.983129 2152 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 13:57:37.983311 kubelet[2152]: I0130 13:57:37.983300 2152 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:57:37.983641 kubelet[2152]: E0130 13:57:37.983618 2152 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:57:37.984912 kubelet[2152]: W0130 13:57:37.984881 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.111.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.111.225:6443: connect: connection refused Jan 30 13:57:37.985149 kubelet[2152]: E0130 13:57:37.985122 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.111.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.111.225:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:57:38.040924 kubelet[2152]: E0130 13:57:38.040861 2152 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-2-c6825061e7\" not found" Jan 30 13:57:38.086057 kubelet[2152]: E0130 13:57:38.084477 2152 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:57:38.141842 kubelet[2152]: E0130 13:57:38.141783 2152 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-2-c6825061e7\" not found" Jan 30 13:57:38.166874 kubelet[2152]: E0130 13:57:38.166818 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.111.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-2-c6825061e7?timeout=10s\": dial tcp 64.227.111.225:6443: connect: connection refused" interval="400ms" Jan 30 13:57:38.242068 kubelet[2152]: E0130 13:57:38.241950 2152 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-2-c6825061e7\" not found" Jan 30 13:57:38.285697 kubelet[2152]: E0130 13:57:38.285623 2152 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:57:38.343220 kubelet[2152]: E0130 13:57:38.343048 2152 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-2-c6825061e7\" not found" Jan 30 13:57:38.443377 kubelet[2152]: E0130 13:57:38.443306 2152 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-2-c6825061e7\" not found" Jan 30 13:57:38.544138 kubelet[2152]: E0130 13:57:38.544071 2152 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-2-c6825061e7\" not found" Jan 30 13:57:38.563862 kubelet[2152]: I0130 13:57:38.563434 2152 policy_none.go:49] "None policy: Start" Jan 30 13:57:38.563862 kubelet[2152]: I0130 13:57:38.563497 2152 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:57:38.563862 kubelet[2152]: I0130 13:57:38.563522 2152 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:57:38.567868 kubelet[2152]: E0130 13:57:38.567795 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.111.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-2-c6825061e7?timeout=10s\": dial tcp 64.227.111.225:6443: connect: 
connection refused" interval="800ms" Jan 30 13:57:38.580364 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:57:38.597281 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:57:38.603509 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:57:38.616356 kubelet[2152]: I0130 13:57:38.615524 2152 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:57:38.616356 kubelet[2152]: I0130 13:57:38.615783 2152 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:57:38.616356 kubelet[2152]: I0130 13:57:38.615800 2152 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:57:38.616356 kubelet[2152]: I0130 13:57:38.616271 2152 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:57:38.618556 kubelet[2152]: E0130 13:57:38.618526 2152 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 13:57:38.618700 kubelet[2152]: E0130 13:57:38.618582 2152 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-2-c6825061e7\" not found" Jan 30 13:57:38.698139 systemd[1]: Created slice kubepods-burstable-pod9776841e1349230ff0bdd89b96696657.slice - libcontainer container kubepods-burstable-pod9776841e1349230ff0bdd89b96696657.slice. Jan 30 13:57:38.710197 kubelet[2152]: E0130 13:57:38.710135 2152 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-2-c6825061e7\" not found" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.715026 systemd[1]: Created slice kubepods-burstable-pod44f92ca8e283fe6c62252a595f33f6ab.slice - libcontainer container kubepods-burstable-pod44f92ca8e283fe6c62252a595f33f6ab.slice. Jan 30 13:57:38.717369 kubelet[2152]: I0130 13:57:38.717326 2152 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.718114 kubelet[2152]: E0130 13:57:38.718055 2152 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://64.227.111.225:6443/api/v1/nodes\": dial tcp 64.227.111.225:6443: connect: connection refused" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.724985 kubelet[2152]: E0130 13:57:38.724704 2152 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-2-c6825061e7\" not found" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.728749 systemd[1]: Created slice kubepods-burstable-pod67dd4853cd0fe722e13ea91ee1600125.slice - libcontainer container kubepods-burstable-pod67dd4853cd0fe722e13ea91ee1600125.slice. 
Jan 30 13:57:38.731819 kubelet[2152]: E0130 13:57:38.731774 2152 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-2-c6825061e7\" not found" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.745365 kubelet[2152]: I0130 13:57:38.745141 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9776841e1349230ff0bdd89b96696657-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-2-c6825061e7\" (UID: \"9776841e1349230ff0bdd89b96696657\") " pod="kube-system/kube-apiserver-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.745365 kubelet[2152]: I0130 13:57:38.745218 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44f92ca8e283fe6c62252a595f33f6ab-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" (UID: \"44f92ca8e283fe6c62252a595f33f6ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.745365 kubelet[2152]: I0130 13:57:38.745248 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/44f92ca8e283fe6c62252a595f33f6ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" (UID: \"44f92ca8e283fe6c62252a595f33f6ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.745365 kubelet[2152]: I0130 13:57:38.745273 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44f92ca8e283fe6c62252a595f33f6ab-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" (UID: \"44f92ca8e283fe6c62252a595f33f6ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.745365 kubelet[2152]: I0130 13:57:38.745300 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44f92ca8e283fe6c62252a595f33f6ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" (UID: \"44f92ca8e283fe6c62252a595f33f6ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.745803 kubelet[2152]: I0130 13:57:38.745332 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9776841e1349230ff0bdd89b96696657-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-2-c6825061e7\" (UID: \"9776841e1349230ff0bdd89b96696657\") " pod="kube-system/kube-apiserver-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.745803 kubelet[2152]: I0130 13:57:38.745382 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44f92ca8e283fe6c62252a595f33f6ab-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" (UID: \"44f92ca8e283fe6c62252a595f33f6ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.745803 kubelet[2152]: I0130 13:57:38.745432 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67dd4853cd0fe722e13ea91ee1600125-kubeconfig\") pod 
\"kube-scheduler-ci-4081.3.0-2-c6825061e7\" (UID: \"67dd4853cd0fe722e13ea91ee1600125\") " pod="kube-system/kube-scheduler-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.745803 kubelet[2152]: I0130 13:57:38.745463 2152 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9776841e1349230ff0bdd89b96696657-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-2-c6825061e7\" (UID: \"9776841e1349230ff0bdd89b96696657\") " pod="kube-system/kube-apiserver-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.900279 kubelet[2152]: W0130 13:57:38.900065 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.111.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.111.225:6443: connect: connection refused Jan 30 13:57:38.900279 kubelet[2152]: E0130 13:57:38.900175 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.111.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.111.225:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:57:38.919685 kubelet[2152]: I0130 13:57:38.919479 2152 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:38.920159 kubelet[2152]: E0130 13:57:38.919922 2152 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://64.227.111.225:6443/api/v1/nodes\": dial tcp 64.227.111.225:6443: connect: connection refused" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:39.011184 kubelet[2152]: E0130 13:57:39.011135 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:39.015773 containerd[1465]: time="2025-01-30T13:57:39.015326821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-2-c6825061e7,Uid:9776841e1349230ff0bdd89b96696657,Namespace:kube-system,Attempt:0,}" Jan 30 13:57:39.019350 systemd-resolved[1319]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Jan 30 13:57:39.025827 kubelet[2152]: E0130 13:57:39.025748 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:39.026486 containerd[1465]: time="2025-01-30T13:57:39.026434156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-2-c6825061e7,Uid:44f92ca8e283fe6c62252a595f33f6ab,Namespace:kube-system,Attempt:0,}" Jan 30 13:57:39.033142 kubelet[2152]: E0130 13:57:39.033056 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:39.046520 containerd[1465]: time="2025-01-30T13:57:39.046340334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-2-c6825061e7,Uid:67dd4853cd0fe722e13ea91ee1600125,Namespace:kube-system,Attempt:0,}" Jan 30 13:57:39.255891 kubelet[2152]: W0130 13:57:39.254628 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.111.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-2-c6825061e7&limit=500&resourceVersion=0": dial tcp 64.227.111.225:6443: connect: connection refused Jan 30 13:57:39.255891 kubelet[2152]: E0130 13:57:39.254759 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.227.111.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-2-c6825061e7&limit=500&resourceVersion=0\": dial tcp 64.227.111.225:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:57:39.267709 kubelet[2152]: W0130 13:57:39.267655 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.111.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.111.225:6443: connect: connection refused Jan 30 13:57:39.267885 kubelet[2152]: E0130 13:57:39.267729 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.111.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.111.225:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:57:39.321991 kubelet[2152]: I0130 13:57:39.321545 2152 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:39.321991 kubelet[2152]: E0130 13:57:39.321912 2152 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://64.227.111.225:6443/api/v1/nodes\": dial tcp 64.227.111.225:6443: connect: connection refused" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:39.368964 kubelet[2152]: E0130 13:57:39.368838 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.111.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-2-c6825061e7?timeout=10s\": dial tcp 64.227.111.225:6443: connect: connection refused" interval="1.6s" Jan 30 13:57:39.508321 kubelet[2152]: W0130 13:57:39.508093 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.111.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.111.225:6443: connect: connection 
refused Jan 30 13:57:39.508321 kubelet[2152]: E0130 13:57:39.508167 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.111.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.111.225:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:57:39.589746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176311523.mount: Deactivated successfully. Jan 30 13:57:39.613749 containerd[1465]: time="2025-01-30T13:57:39.613661105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:57:39.616139 containerd[1465]: time="2025-01-30T13:57:39.615912069Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:57:39.619216 containerd[1465]: time="2025-01-30T13:57:39.618986419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:57:39.622018 containerd[1465]: time="2025-01-30T13:57:39.621417139Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:57:39.625457 containerd[1465]: time="2025-01-30T13:57:39.625370987Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:57:39.629402 containerd[1465]: time="2025-01-30T13:57:39.629330834Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:57:39.632639 containerd[1465]: time="2025-01-30T13:57:39.632136355Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:57:39.638133 containerd[1465]: time="2025-01-30T13:57:39.638076209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:57:39.639208 containerd[1465]: time="2025-01-30T13:57:39.639170343Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 612.633317ms" Jan 30 13:57:39.643396 containerd[1465]: time="2025-01-30T13:57:39.643321490Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.862282ms" Jan 30 13:57:39.680222 containerd[1465]: time="2025-01-30T13:57:39.680152330Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 664.695364ms" Jan 30 13:57:39.889705 kubelet[2152]: E0130 13:57:39.889535 2152 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.227.111.225:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.227.111.225:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:57:39.965808 containerd[1465]: time="2025-01-30T13:57:39.964898639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:39.965808 containerd[1465]: time="2025-01-30T13:57:39.964981557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:39.965808 containerd[1465]: time="2025-01-30T13:57:39.965003273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:39.965808 containerd[1465]: time="2025-01-30T13:57:39.965119764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:39.975647 containerd[1465]: time="2025-01-30T13:57:39.975149041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:39.975647 containerd[1465]: time="2025-01-30T13:57:39.975255766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:39.977028 containerd[1465]: time="2025-01-30T13:57:39.976817104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:39.977513 containerd[1465]: time="2025-01-30T13:57:39.977427675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:39.991419 containerd[1465]: time="2025-01-30T13:57:39.990906065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:39.991419 containerd[1465]: time="2025-01-30T13:57:39.991177376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:39.991419 containerd[1465]: time="2025-01-30T13:57:39.991211541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:39.992257 containerd[1465]: time="2025-01-30T13:57:39.992119270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:40.022300 systemd[1]: Started cri-containerd-e43c5dc8bf9a951b06a7f6bdeba8981eed7baf876f41d6d9c54f4c84e4136201.scope - libcontainer container e43c5dc8bf9a951b06a7f6bdeba8981eed7baf876f41d6d9c54f4c84e4136201. 
Jan 30 13:57:40.030398 systemd[1]: Started cri-containerd-50775356c5d82ea86f10f3beab5d39f29655287b96409cb2cdc4f57b853f28cc.scope - libcontainer container 50775356c5d82ea86f10f3beab5d39f29655287b96409cb2cdc4f57b853f28cc. Jan 30 13:57:40.067343 systemd[1]: Started cri-containerd-0e44ddec3e1f26dd3699f81714dcf16e35211b0fdcd6cdda72c85eb05033f564.scope - libcontainer container 0e44ddec3e1f26dd3699f81714dcf16e35211b0fdcd6cdda72c85eb05033f564. Jan 30 13:57:40.148083 kubelet[2152]: I0130 13:57:40.147907 2152 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:40.148874 kubelet[2152]: E0130 13:57:40.148620 2152 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://64.227.111.225:6443/api/v1/nodes\": dial tcp 64.227.111.225:6443: connect: connection refused" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:40.161544 containerd[1465]: time="2025-01-30T13:57:40.161367743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-2-c6825061e7,Uid:9776841e1349230ff0bdd89b96696657,Namespace:kube-system,Attempt:0,} returns sandbox id \"50775356c5d82ea86f10f3beab5d39f29655287b96409cb2cdc4f57b853f28cc\"" Jan 30 13:57:40.171981 containerd[1465]: time="2025-01-30T13:57:40.171264876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-2-c6825061e7,Uid:67dd4853cd0fe722e13ea91ee1600125,Namespace:kube-system,Attempt:0,} returns sandbox id \"e43c5dc8bf9a951b06a7f6bdeba8981eed7baf876f41d6d9c54f4c84e4136201\"" Jan 30 13:57:40.173567 kubelet[2152]: E0130 13:57:40.173289 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:40.173904 kubelet[2152]: E0130 13:57:40.173730 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:40.181923 containerd[1465]: time="2025-01-30T13:57:40.181807367Z" level=info msg="CreateContainer within sandbox \"e43c5dc8bf9a951b06a7f6bdeba8981eed7baf876f41d6d9c54f4c84e4136201\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:57:40.183803 containerd[1465]: time="2025-01-30T13:57:40.183423649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-2-c6825061e7,Uid:44f92ca8e283fe6c62252a595f33f6ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e44ddec3e1f26dd3699f81714dcf16e35211b0fdcd6cdda72c85eb05033f564\"" Jan 30 13:57:40.184907 containerd[1465]: time="2025-01-30T13:57:40.184853317Z" level=info msg="CreateContainer within sandbox \"50775356c5d82ea86f10f3beab5d39f29655287b96409cb2cdc4f57b853f28cc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:57:40.185771 kubelet[2152]: E0130 13:57:40.185339 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:40.188864 containerd[1465]: time="2025-01-30T13:57:40.188819222Z" level=info msg="CreateContainer within sandbox \"0e44ddec3e1f26dd3699f81714dcf16e35211b0fdcd6cdda72c85eb05033f564\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:57:40.231564 containerd[1465]: time="2025-01-30T13:57:40.231481944Z" level=info msg="CreateContainer 
within sandbox \"0e44ddec3e1f26dd3699f81714dcf16e35211b0fdcd6cdda72c85eb05033f564\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"240eed5cd439c04626089025db7ced38501341fe7fcbca1b95b2bdea2aeb8d34\"" Jan 30 13:57:40.232669 containerd[1465]: time="2025-01-30T13:57:40.232627718Z" level=info msg="StartContainer for \"240eed5cd439c04626089025db7ced38501341fe7fcbca1b95b2bdea2aeb8d34\"" Jan 30 13:57:40.240457 containerd[1465]: time="2025-01-30T13:57:40.240288703Z" level=info msg="CreateContainer within sandbox \"e43c5dc8bf9a951b06a7f6bdeba8981eed7baf876f41d6d9c54f4c84e4136201\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0a51337d7e2cb5d405d419e4faa92e201d9b6f253ab083b18ad6d78043317f54\"" Jan 30 13:57:40.241333 containerd[1465]: time="2025-01-30T13:57:40.241286608Z" level=info msg="StartContainer for \"0a51337d7e2cb5d405d419e4faa92e201d9b6f253ab083b18ad6d78043317f54\"" Jan 30 13:57:40.248409 containerd[1465]: time="2025-01-30T13:57:40.248345979Z" level=info msg="CreateContainer within sandbox \"50775356c5d82ea86f10f3beab5d39f29655287b96409cb2cdc4f57b853f28cc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12d3dd01be4098112a6a66cd28784a0ccca9a7bd1f7c0b9ec6a080473c65c156\"" Jan 30 13:57:40.249274 containerd[1465]: time="2025-01-30T13:57:40.249244364Z" level=info msg="StartContainer for \"12d3dd01be4098112a6a66cd28784a0ccca9a7bd1f7c0b9ec6a080473c65c156\"" Jan 30 13:57:40.301251 systemd[1]: Started cri-containerd-240eed5cd439c04626089025db7ced38501341fe7fcbca1b95b2bdea2aeb8d34.scope - libcontainer container 240eed5cd439c04626089025db7ced38501341fe7fcbca1b95b2bdea2aeb8d34. Jan 30 13:57:40.321802 systemd[1]: Started cri-containerd-0a51337d7e2cb5d405d419e4faa92e201d9b6f253ab083b18ad6d78043317f54.scope - libcontainer container 0a51337d7e2cb5d405d419e4faa92e201d9b6f253ab083b18ad6d78043317f54. Jan 30 13:57:40.334632 systemd[1]: Started cri-containerd-12d3dd01be4098112a6a66cd28784a0ccca9a7bd1f7c0b9ec6a080473c65c156.scope - libcontainer container 12d3dd01be4098112a6a66cd28784a0ccca9a7bd1f7c0b9ec6a080473c65c156. 
Jan 30 13:57:40.442781 containerd[1465]: time="2025-01-30T13:57:40.442483597Z" level=info msg="StartContainer for \"240eed5cd439c04626089025db7ced38501341fe7fcbca1b95b2bdea2aeb8d34\" returns successfully" Jan 30 13:57:40.451790 containerd[1465]: time="2025-01-30T13:57:40.451522983Z" level=info msg="StartContainer for \"0a51337d7e2cb5d405d419e4faa92e201d9b6f253ab083b18ad6d78043317f54\" returns successfully" Jan 30 13:57:40.460762 containerd[1465]: time="2025-01-30T13:57:40.460586889Z" level=info msg="StartContainer for \"12d3dd01be4098112a6a66cd28784a0ccca9a7bd1f7c0b9ec6a080473c65c156\" returns successfully" Jan 30 13:57:40.840688 kubelet[2152]: W0130 13:57:40.840509 2152 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.111.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.111.225:6443: connect: connection refused Jan 30 13:57:40.840688 kubelet[2152]: E0130 13:57:40.840621 2152 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.111.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.111.225:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:57:40.969889 kubelet[2152]: E0130 13:57:40.969823 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.111.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-2-c6825061e7?timeout=10s\": dial tcp 64.227.111.225:6443: connect: connection refused" interval="3.2s" Jan 30 13:57:40.999488 kubelet[2152]: E0130 13:57:40.999448 2152 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-2-c6825061e7\" not found" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:40.999709 kubelet[2152]: E0130 13:57:40.999617 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:41.004345 kubelet[2152]: E0130 13:57:41.003722 2152 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-2-c6825061e7\" not found" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:41.004345 kubelet[2152]: E0130 13:57:41.003970 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:41.009930 kubelet[2152]: E0130 13:57:41.008409 2152 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-2-c6825061e7\" not found" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:41.009930 kubelet[2152]: E0130 13:57:41.008614 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:41.750123 kubelet[2152]: I0130 13:57:41.750059 2152 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:42.012263 kubelet[2152]: E0130 13:57:42.012142 2152 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-2-c6825061e7\" not found" 
node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:42.012738 kubelet[2152]: E0130 13:57:42.012341 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:42.012738 kubelet[2152]: E0130 13:57:42.012682 2152 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.0-2-c6825061e7\" not found" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:42.012860 kubelet[2152]: E0130 13:57:42.012814 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:43.384641 kubelet[2152]: I0130 13:57:43.383327 2152 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:43.445440 kubelet[2152]: I0130 13:57:43.445286 2152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:43.454746 kubelet[2152]: E0130 13:57:43.454675 2152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.0-2-c6825061e7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:43.454746 kubelet[2152]: I0130 13:57:43.454722 2152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:43.457748 kubelet[2152]: E0130 13:57:43.457698 2152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:43.457748 kubelet[2152]: I0130 13:57:43.457745 2152 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:43.461801 kubelet[2152]: E0130 13:57:43.460770 2152 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.0-2-c6825061e7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:43.889283 kubelet[2152]: I0130 13:57:43.889220 2152 apiserver.go:52] "Watching apiserver" Jan 30 13:57:43.941781 kubelet[2152]: I0130 13:57:43.941694 2152 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:57:45.916579 systemd[1]: Reloading requested from client PID 2429 ('systemctl') (unit session-9.scope)... Jan 30 13:57:45.916602 systemd[1]: Reloading... Jan 30 13:57:46.021973 zram_generator::config[2464]: No configuration found. Jan 30 13:57:46.257065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:57:46.457909 systemd[1]: Reloading finished in 540 ms. Jan 30 13:57:46.527089 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:57:46.543914 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:57:46.544227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:57:46.553537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:57:46.750629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:57:46.766047 (kubelet)[2519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:57:46.897193 kubelet[2519]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:57:46.897193 kubelet[2519]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:57:46.897193 kubelet[2519]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:57:46.897193 kubelet[2519]: I0130 13:57:46.896188 2519 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:57:46.908423 kubelet[2519]: I0130 13:57:46.907030 2519 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:57:46.908423 kubelet[2519]: I0130 13:57:46.907064 2519 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:57:46.908423 kubelet[2519]: I0130 13:57:46.907441 2519 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:57:46.915256 kubelet[2519]: I0130 13:57:46.915211 2519 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:57:46.918349 kubelet[2519]: I0130 13:57:46.918301 2519 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:57:46.924503 kubelet[2519]: E0130 13:57:46.923359 2519 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:57:46.924503 kubelet[2519]: I0130 13:57:46.923402 2519 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:57:46.927645 kubelet[2519]: I0130 13:57:46.927581 2519 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:57:46.927931 kubelet[2519]: I0130 13:57:46.927887 2519 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:57:46.929756 kubelet[2519]: I0130 13:57:46.927915 2519 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-2-c6825061e7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:57:46.929756 kubelet[2519]: I0130 13:57:46.929754 2519 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:57:46.930065 kubelet[2519]: I0130 13:57:46.929768 2519 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:57:46.930065 kubelet[2519]: I0130 13:57:46.929827 2519 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:57:46.930173 kubelet[2519]: I0130 13:57:46.930075 2519 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:57:46.930173 kubelet[2519]: I0130 13:57:46.930090 2519 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:57:46.931355 kubelet[2519]: I0130 13:57:46.930997 2519 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:57:46.931355 kubelet[2519]: I0130 13:57:46.931037 2519 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:57:46.936042 kubelet[2519]: I0130 13:57:46.935478 2519 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:57:46.936164 kubelet[2519]: I0130 13:57:46.936115 2519 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:57:46.936751 kubelet[2519]: I0130 13:57:46.936732 2519 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:57:46.936810 kubelet[2519]: I0130 13:57:46.936774 2519 server.go:1287] "Started kubelet" Jan 30 13:57:46.943131 kubelet[2519]: I0130 13:57:46.941727 2519 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:57:46.948914 kubelet[2519]: I0130 13:57:46.948420 2519 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:57:46.950770 kubelet[2519]: I0130 13:57:46.950742 2519 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:57:46.955644 kubelet[2519]: I0130 13:57:46.953610 2519 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:57:46.955644 kubelet[2519]: I0130 13:57:46.953864 2519 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:57:46.955644 kubelet[2519]: I0130 13:57:46.954215 2519 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:57:46.957915 kubelet[2519]: I0130 13:57:46.956610 2519 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:57:46.957915 kubelet[2519]: E0130 13:57:46.956860 2519 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4081.3.0-2-c6825061e7\" not found" Jan 30 13:57:46.968177 kubelet[2519]: I0130 13:57:46.965970 2519 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:57:46.968177 kubelet[2519]: I0130 13:57:46.967342 2519 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:57:46.988514 kubelet[2519]: I0130 13:57:46.987223 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:57:46.991345 kubelet[2519]: I0130 13:57:46.990717 2519 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:57:46.991345 kubelet[2519]: I0130 13:57:46.990776 2519 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:57:46.991345 kubelet[2519]: I0130 13:57:46.990801 2519 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 13:57:46.991345 kubelet[2519]: I0130 13:57:46.990808 2519 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:57:46.991345 kubelet[2519]: E0130 13:57:46.990877 2519 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:57:47.015048 kubelet[2519]: E0130 13:57:47.013255 2519 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:57:47.020369 kubelet[2519]: I0130 13:57:47.020336 2519 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:57:47.021543 kubelet[2519]: I0130 13:57:47.020358 2519 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:57:47.025016 kubelet[2519]: I0130 13:57:47.022201 2519 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:57:47.092689 kubelet[2519]: E0130 13:57:47.092643 2519 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:57:47.114171 kubelet[2519]: I0130 13:57:47.114139 2519 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:57:47.114171 kubelet[2519]: I0130 13:57:47.114160 2519 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:57:47.114424 kubelet[2519]: I0130 13:57:47.114188 2519 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:57:47.114424 kubelet[2519]: I0130 13:57:47.114392 2519 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:57:47.114424 kubelet[2519]: I0130 13:57:47.114404 2519 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:57:47.114424 kubelet[2519]: I0130 13:57:47.114425 2519 policy_none.go:49] "None policy: Start" Jan 30 13:57:47.114643 kubelet[2519]: I0130 13:57:47.114435 2519 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:57:47.114643 kubelet[2519]: I0130 13:57:47.114446 2519 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:57:47.114643 kubelet[2519]: I0130 13:57:47.114583 2519 state_mem.go:75] "Updated machine memory state" Jan 30 13:57:47.122078 kubelet[2519]: I0130 13:57:47.121565 2519 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:57:47.122078 kubelet[2519]: I0130 13:57:47.121756 2519 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:57:47.122078 kubelet[2519]: I0130 13:57:47.121768 2519 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:57:47.124288 kubelet[2519]: I0130 13:57:47.123396 2519 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:57:47.132337 kubelet[2519]: E0130 13:57:47.131821 2519 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 13:57:47.229705 kubelet[2519]: I0130 13:57:47.229379 2519 kubelet_node_status.go:76] "Attempting to register node" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.244214 kubelet[2519]: I0130 13:57:47.244164 2519 kubelet_node_status.go:125] "Node was previously registered" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.244435 kubelet[2519]: I0130 13:57:47.244261 2519 kubelet_node_status.go:79] "Successfully registered node" node="ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.296325 kubelet[2519]: I0130 13:57:47.294794 2519 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.297492 kubelet[2519]: I0130 13:57:47.296882 2519 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.297492 kubelet[2519]: I0130 13:57:47.297304 2519 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.315496 kubelet[2519]: W0130 13:57:47.315457 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:57:47.317121 kubelet[2519]: W0130 13:57:47.316775 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:57:47.317803 kubelet[2519]: W0130 13:57:47.317490 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:57:47.370654 kubelet[2519]: I0130 13:57:47.370559 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44f92ca8e283fe6c62252a595f33f6ab-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" (UID: \"44f92ca8e283fe6c62252a595f33f6ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.370654 kubelet[2519]: I0130 13:57:47.370653 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44f92ca8e283fe6c62252a595f33f6ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" (UID: \"44f92ca8e283fe6c62252a595f33f6ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.370929 kubelet[2519]: I0130 13:57:47.370688 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67dd4853cd0fe722e13ea91ee1600125-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-2-c6825061e7\" (UID: \"67dd4853cd0fe722e13ea91ee1600125\") " pod="kube-system/kube-scheduler-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.370929 kubelet[2519]: I0130 13:57:47.370720 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9776841e1349230ff0bdd89b96696657-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-2-c6825061e7\" (UID: \"9776841e1349230ff0bdd89b96696657\") " pod="kube-system/kube-apiserver-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.370929 kubelet[2519]: I0130 13:57:47.370748 2519 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/44f92ca8e283fe6c62252a595f33f6ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" (UID: \"44f92ca8e283fe6c62252a595f33f6ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.370929 kubelet[2519]: I0130 13:57:47.370775 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44f92ca8e283fe6c62252a595f33f6ab-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" (UID: \"44f92ca8e283fe6c62252a595f33f6ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.370929 kubelet[2519]: I0130 13:57:47.370801 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/44f92ca8e283fe6c62252a595f33f6ab-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" (UID: \"44f92ca8e283fe6c62252a595f33f6ab\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.371318 kubelet[2519]: I0130 13:57:47.370823 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9776841e1349230ff0bdd89b96696657-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-2-c6825061e7\" (UID: \"9776841e1349230ff0bdd89b96696657\") " pod="kube-system/kube-apiserver-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.371318 kubelet[2519]: I0130 13:57:47.370848 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9776841e1349230ff0bdd89b96696657-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-2-c6825061e7\" (UID: \"9776841e1349230ff0bdd89b96696657\") " pod="kube-system/kube-apiserver-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:47.617457 kubelet[2519]: E0130 13:57:47.617281 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:47.619120 kubelet[2519]: E0130 13:57:47.618368 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:47.620835 kubelet[2519]: E0130 13:57:47.620041 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:47.947971 kubelet[2519]: I0130 13:57:47.947057 2519 apiserver.go:52] "Watching apiserver" Jan 30 13:57:47.967366 kubelet[2519]: I0130 13:57:47.967318 2519 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:57:48.053977 kubelet[2519]: I0130 13:57:48.052563 2519 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:48.053977 kubelet[2519]: E0130 13:57:48.052720 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:48.053977 kubelet[2519]: E0130 
13:57:48.053296 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:48.084511 kubelet[2519]: I0130 13:57:48.084444 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-2-c6825061e7" podStartSLOduration=1.084422812 podStartE2EDuration="1.084422812s" podCreationTimestamp="2025-01-30 13:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:57:48.047477351 +0000 UTC m=+1.273927497" watchObservedRunningTime="2025-01-30 13:57:48.084422812 +0000 UTC m=+1.310872944" Jan 30 13:57:48.087170 kubelet[2519]: W0130 13:57:48.087126 2519 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:57:48.087354 kubelet[2519]: E0130 13:57:48.087201 2519 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.0-2-c6825061e7\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" Jan 30 13:57:48.087410 kubelet[2519]: E0130 13:57:48.087380 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:48.160322 kubelet[2519]: I0130 13:57:48.160140 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-2-c6825061e7" podStartSLOduration=1.160117623 podStartE2EDuration="1.160117623s" podCreationTimestamp="2025-01-30 13:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:57:48.088202311 +0000 UTC m=+1.314652444" watchObservedRunningTime="2025-01-30 13:57:48.160117623 +0000 UTC m=+1.386567887" Jan 30 13:57:48.203738 kubelet[2519]: I0130 13:57:48.202672 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-2-c6825061e7" podStartSLOduration=1.202634177 podStartE2EDuration="1.202634177s" podCreationTimestamp="2025-01-30 13:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:57:48.160505846 +0000 UTC m=+1.386955982" watchObservedRunningTime="2025-01-30 13:57:48.202634177 +0000 UTC m=+1.429084310" Jan 30 13:57:49.056121 kubelet[2519]: E0130 13:57:49.056088 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:49.058352 kubelet[2519]: E0130 13:57:49.056654 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:49.058352 kubelet[2519]: E0130 13:57:49.056869 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:51.051019 update_engine[1443]: I20250130 13:57:51.050344 1443 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:57:51.112972 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2584) Jan 30 13:57:51.721407 kubelet[2519]: I0130 13:57:51.721339 2519 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:57:51.722281 containerd[1465]: time="2025-01-30T13:57:51.722144944Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:57:51.724067 kubelet[2519]: I0130 13:57:51.722624 2519 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:57:52.522264 systemd[1]: Created slice kubepods-besteffort-pod4d1024c9_0923_45f0_8670_5238baffdb48.slice - libcontainer container kubepods-besteffort-pod4d1024c9_0923_45f0_8670_5238baffdb48.slice. Jan 30 13:57:52.604875 kubelet[2519]: I0130 13:57:52.604814 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4d1024c9-0923-45f0-8670-5238baffdb48-kube-proxy\") pod \"kube-proxy-fvdgd\" (UID: \"4d1024c9-0923-45f0-8670-5238baffdb48\") " pod="kube-system/kube-proxy-fvdgd" Jan 30 13:57:52.604875 kubelet[2519]: I0130 13:57:52.604884 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6qxm\" (UniqueName: \"kubernetes.io/projected/4d1024c9-0923-45f0-8670-5238baffdb48-kube-api-access-z6qxm\") pod \"kube-proxy-fvdgd\" (UID: \"4d1024c9-0923-45f0-8670-5238baffdb48\") " pod="kube-system/kube-proxy-fvdgd" Jan 30 13:57:52.605164 kubelet[2519]: I0130 13:57:52.604917 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d1024c9-0923-45f0-8670-5238baffdb48-lib-modules\") pod \"kube-proxy-fvdgd\" (UID: \"4d1024c9-0923-45f0-8670-5238baffdb48\") " pod="kube-system/kube-proxy-fvdgd" Jan 30 13:57:52.605164 kubelet[2519]: I0130 13:57:52.604950 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d1024c9-0923-45f0-8670-5238baffdb48-xtables-lock\") pod \"kube-proxy-fvdgd\" (UID: \"4d1024c9-0923-45f0-8670-5238baffdb48\") " pod="kube-system/kube-proxy-fvdgd" Jan 30 13:57:52.757062 kubelet[2519]: E0130 13:57:52.756683 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:52.832242 kubelet[2519]: E0130 13:57:52.831682 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:52.835082 containerd[1465]: time="2025-01-30T13:57:52.835040068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fvdgd,Uid:4d1024c9-0923-45f0-8670-5238baffdb48,Namespace:kube-system,Attempt:0,}" Jan 30 13:57:52.844998 systemd[1]: Created slice kubepods-besteffort-podc790e57c_2f9b_4bee_9e28_f1629d31f585.slice - libcontainer container kubepods-besteffort-podc790e57c_2f9b_4bee_9e28_f1629d31f585.slice. 
Jan 30 13:57:52.847358 kubelet[2519]: W0130 13:57:52.847326 2519 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-2-c6825061e7" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.0-2-c6825061e7' and this object Jan 30 13:57:52.849426 kubelet[2519]: E0130 13:57:52.849367 2519 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.0-2-c6825061e7\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.0-2-c6825061e7' and this object" logger="UnhandledError" Jan 30 13:57:52.886386 containerd[1465]: time="2025-01-30T13:57:52.886249143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:52.886386 containerd[1465]: time="2025-01-30T13:57:52.886329969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:52.886859 containerd[1465]: time="2025-01-30T13:57:52.886755161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:52.887888 containerd[1465]: time="2025-01-30T13:57:52.887725054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:52.907345 kubelet[2519]: I0130 13:57:52.906449 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c790e57c-2f9b-4bee-9e28-f1629d31f585-var-lib-calico\") pod \"tigera-operator-7d68577dc5-9fdgq\" (UID: \"c790e57c-2f9b-4bee-9e28-f1629d31f585\") " pod="tigera-operator/tigera-operator-7d68577dc5-9fdgq" Jan 30 13:57:52.907345 kubelet[2519]: I0130 13:57:52.906514 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78ch6\" (UniqueName: \"kubernetes.io/projected/c790e57c-2f9b-4bee-9e28-f1629d31f585-kube-api-access-78ch6\") pod \"tigera-operator-7d68577dc5-9fdgq\" (UID: \"c790e57c-2f9b-4bee-9e28-f1629d31f585\") " pod="tigera-operator/tigera-operator-7d68577dc5-9fdgq" Jan 30 13:57:52.913201 systemd[1]: run-containerd-runc-k8s.io-3af31c4981621024362a4dbbb65bbdd51232ef2434564d5dfc8aecc5fead12a4-runc.5yc0TT.mount: Deactivated successfully. Jan 30 13:57:52.927400 systemd[1]: Started cri-containerd-3af31c4981621024362a4dbbb65bbdd51232ef2434564d5dfc8aecc5fead12a4.scope - libcontainer container 3af31c4981621024362a4dbbb65bbdd51232ef2434564d5dfc8aecc5fead12a4. 
Jan 30 13:57:52.959877 containerd[1465]: time="2025-01-30T13:57:52.959826072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fvdgd,Uid:4d1024c9-0923-45f0-8670-5238baffdb48,Namespace:kube-system,Attempt:0,} returns sandbox id \"3af31c4981621024362a4dbbb65bbdd51232ef2434564d5dfc8aecc5fead12a4\"" Jan 30 13:57:52.961011 kubelet[2519]: E0130 13:57:52.960977 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:52.965148 containerd[1465]: time="2025-01-30T13:57:52.965006662Z" level=info msg="CreateContainer within sandbox \"3af31c4981621024362a4dbbb65bbdd51232ef2434564d5dfc8aecc5fead12a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:57:53.002473 containerd[1465]: time="2025-01-30T13:57:53.002386634Z" level=info msg="CreateContainer within sandbox \"3af31c4981621024362a4dbbb65bbdd51232ef2434564d5dfc8aecc5fead12a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9be6fa202943624274be0b5f164511bacea0a64ac3e9511f38f7255ce7212ef5\"" Jan 30 13:57:53.005695 containerd[1465]: time="2025-01-30T13:57:53.005643073Z" level=info msg="StartContainer for \"9be6fa202943624274be0b5f164511bacea0a64ac3e9511f38f7255ce7212ef5\"" Jan 30 13:57:53.065248 systemd[1]: Started cri-containerd-9be6fa202943624274be0b5f164511bacea0a64ac3e9511f38f7255ce7212ef5.scope - libcontainer container 9be6fa202943624274be0b5f164511bacea0a64ac3e9511f38f7255ce7212ef5. Jan 30 13:57:53.081384 kubelet[2519]: E0130 13:57:53.081144 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:53.142212 containerd[1465]: time="2025-01-30T13:57:53.142041876Z" level=info msg="StartContainer for \"9be6fa202943624274be0b5f164511bacea0a64ac3e9511f38f7255ce7212ef5\" returns successfully" Jan 30 13:57:53.235792 sudo[1660]: pam_unix(sudo:session): session closed for user root Jan 30 13:57:53.240919 sshd[1657]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:53.246239 systemd[1]: sshd@8-64.227.111.225:22-147.75.109.163:34584.service: Deactivated successfully. Jan 30 13:57:53.248798 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:57:53.249173 systemd[1]: session-9.scope: Consumed 4.993s CPU time, 149.4M memory peak, 0B memory swap peak. Jan 30 13:57:53.251591 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:57:53.253545 systemd-logind[1442]: Removed session 9. 
Jan 30 13:57:54.054060 containerd[1465]: time="2025-01-30T13:57:54.053185181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-9fdgq,Uid:c790e57c-2f9b-4bee-9e28-f1629d31f585,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:57:54.093007 kubelet[2519]: E0130 13:57:54.092598 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:54.122042 kubelet[2519]: I0130 13:57:54.120611 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fvdgd" podStartSLOduration=2.120585777 podStartE2EDuration="2.120585777s" podCreationTimestamp="2025-01-30 13:57:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:57:54.118372991 +0000 UTC m=+7.344823144" watchObservedRunningTime="2025-01-30 13:57:54.120585777 +0000 UTC m=+7.347035914" Jan 30 13:57:54.126405 containerd[1465]: time="2025-01-30T13:57:54.126248872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:54.126405 containerd[1465]: time="2025-01-30T13:57:54.126325985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:54.126405 containerd[1465]: time="2025-01-30T13:57:54.126337754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:54.127113 containerd[1465]: time="2025-01-30T13:57:54.126484084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:54.171898 systemd[1]: Started cri-containerd-e1f6b7b1d01b283b590da2db17bde62ca1964dcc316a4e25ba86684d4d495f73.scope - libcontainer container e1f6b7b1d01b283b590da2db17bde62ca1964dcc316a4e25ba86684d4d495f73. Jan 30 13:57:54.250771 containerd[1465]: time="2025-01-30T13:57:54.250714382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-9fdgq,Uid:c790e57c-2f9b-4bee-9e28-f1629d31f585,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e1f6b7b1d01b283b590da2db17bde62ca1964dcc316a4e25ba86684d4d495f73\"" Jan 30 13:57:54.254930 containerd[1465]: time="2025-01-30T13:57:54.254656954Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:57:54.723283 systemd[1]: run-containerd-runc-k8s.io-e1f6b7b1d01b283b590da2db17bde62ca1964dcc316a4e25ba86684d4d495f73-runc.BZ5XX0.mount: Deactivated successfully. 
Jan 30 13:57:54.775978 kubelet[2519]: E0130 13:57:54.774972 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:55.096914 kubelet[2519]: E0130 13:57:55.096777 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:55.097406 kubelet[2519]: E0130 13:57:55.097198 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:55.262604 kubelet[2519]: E0130 13:57:55.262105 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:55.766881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3605632864.mount: Deactivated successfully. Jan 30 13:57:56.102184 kubelet[2519]: E0130 13:57:56.100392 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:56.299992 containerd[1465]: time="2025-01-30T13:57:56.299850593Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:56.304259 containerd[1465]: time="2025-01-30T13:57:56.304164778Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:57:56.306548 containerd[1465]: time="2025-01-30T13:57:56.306468403Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:56.312874 containerd[1465]: time="2025-01-30T13:57:56.312805902Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:56.313706 containerd[1465]: time="2025-01-30T13:57:56.313667630Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.058969513s" Jan 30 13:57:56.313820 containerd[1465]: time="2025-01-30T13:57:56.313711604Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:57:56.321881 containerd[1465]: time="2025-01-30T13:57:56.321827368Z" level=info msg="CreateContainer within sandbox \"e1f6b7b1d01b283b590da2db17bde62ca1964dcc316a4e25ba86684d4d495f73\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:57:56.346762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726839681.mount: Deactivated successfully. 
Jan 30 13:57:56.355003 containerd[1465]: time="2025-01-30T13:57:56.354286443Z" level=info msg="CreateContainer within sandbox \"e1f6b7b1d01b283b590da2db17bde62ca1964dcc316a4e25ba86684d4d495f73\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"37321cbad8d1c6b02b2a998897a17769c925adeb1cfa69c4c87c880450e6cd48\"" Jan 30 13:57:56.357049 containerd[1465]: time="2025-01-30T13:57:56.355342171Z" level=info msg="StartContainer for \"37321cbad8d1c6b02b2a998897a17769c925adeb1cfa69c4c87c880450e6cd48\"" Jan 30 13:57:56.404265 systemd[1]: Started cri-containerd-37321cbad8d1c6b02b2a998897a17769c925adeb1cfa69c4c87c880450e6cd48.scope - libcontainer container 37321cbad8d1c6b02b2a998897a17769c925adeb1cfa69c4c87c880450e6cd48. Jan 30 13:57:56.441761 containerd[1465]: time="2025-01-30T13:57:56.441594282Z" level=info msg="StartContainer for \"37321cbad8d1c6b02b2a998897a17769c925adeb1cfa69c4c87c880450e6cd48\" returns successfully" Jan 30 13:57:57.106934 kubelet[2519]: E0130 13:57:57.106860 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:59.536030 kubelet[2519]: I0130 13:57:59.535921 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-9fdgq" podStartSLOduration=5.468871556 podStartE2EDuration="7.535896432s" podCreationTimestamp="2025-01-30 13:57:52 +0000 UTC" firstStartedPulling="2025-01-30 13:57:54.252728259 +0000 UTC m=+7.479178373" lastFinishedPulling="2025-01-30 13:57:56.31975312 +0000 UTC m=+9.546203249" observedRunningTime="2025-01-30 13:57:57.129746632 +0000 UTC m=+10.356196787" watchObservedRunningTime="2025-01-30 13:57:59.535896432 +0000 UTC m=+12.762346566" Jan 30 13:57:59.567184 systemd[1]: Created slice kubepods-besteffort-podf2e9c190_653c_400d_9976_d2d3df54819a.slice - libcontainer container kubepods-besteffort-podf2e9c190_653c_400d_9976_d2d3df54819a.slice. Jan 30 13:57:59.649532 kubelet[2519]: I0130 13:57:59.649447 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f2e9c190-653c-400d-9976-d2d3df54819a-typha-certs\") pod \"calico-typha-54bd668b45-9jkf7\" (UID: \"f2e9c190-653c-400d-9976-d2d3df54819a\") " pod="calico-system/calico-typha-54bd668b45-9jkf7" Jan 30 13:57:59.649532 kubelet[2519]: I0130 13:57:59.649534 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2e9c190-653c-400d-9976-d2d3df54819a-tigera-ca-bundle\") pod \"calico-typha-54bd668b45-9jkf7\" (UID: \"f2e9c190-653c-400d-9976-d2d3df54819a\") " pod="calico-system/calico-typha-54bd668b45-9jkf7" Jan 30 13:57:59.649764 kubelet[2519]: I0130 13:57:59.649563 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sqww\" (UniqueName: \"kubernetes.io/projected/f2e9c190-653c-400d-9976-d2d3df54819a-kube-api-access-5sqww\") pod \"calico-typha-54bd668b45-9jkf7\" (UID: \"f2e9c190-653c-400d-9976-d2d3df54819a\") " pod="calico-system/calico-typha-54bd668b45-9jkf7" Jan 30 13:57:59.703252 systemd[1]: Created slice kubepods-besteffort-podd9f0b632_6cac_472a_9be8_cb4fb39b0ef9.slice - libcontainer container kubepods-besteffort-podd9f0b632_6cac_472a_9be8_cb4fb39b0ef9.slice. 
Jan 30 13:57:59.755571 kubelet[2519]: I0130 13:57:59.755292 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-lib-modules\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.755571 kubelet[2519]: I0130 13:57:59.755366 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-cni-log-dir\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.755571 kubelet[2519]: I0130 13:57:59.755402 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-policysync\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.755571 kubelet[2519]: I0130 13:57:59.755537 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-var-run-calico\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.755571 kubelet[2519]: I0130 13:57:59.755635 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8q94\" (UniqueName: \"kubernetes.io/projected/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-kube-api-access-q8q94\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.756099 kubelet[2519]: I0130 13:57:59.755663 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-xtables-lock\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.756099 kubelet[2519]: I0130 13:57:59.755720 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-var-lib-calico\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.756099 kubelet[2519]: I0130 13:57:59.755754 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-node-certs\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.756099 kubelet[2519]: I0130 13:57:59.755781 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-cni-bin-dir\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.756099 kubelet[2519]: I0130 13:57:59.755838 2519 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-tigera-ca-bundle\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.756394 kubelet[2519]: I0130 13:57:59.755871 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-cni-net-dir\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.756394 kubelet[2519]: I0130 13:57:59.755898 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d9f0b632-6cac-472a-9be8-cb4fb39b0ef9-flexvol-driver-host\") pod \"calico-node-sw4gr\" (UID: \"d9f0b632-6cac-472a-9be8-cb4fb39b0ef9\") " pod="calico-system/calico-node-sw4gr" Jan 30 13:57:59.875099 kubelet[2519]: E0130 13:57:59.873327 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.875099 kubelet[2519]: W0130 13:57:59.873377 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.875099 kubelet[2519]: E0130 13:57:59.873414 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.876834 kubelet[2519]: E0130 13:57:59.876792 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:57:59.877994 containerd[1465]: time="2025-01-30T13:57:59.877565469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54bd668b45-9jkf7,Uid:f2e9c190-653c-400d-9976-d2d3df54819a,Namespace:calico-system,Attempt:0,}" Jan 30 13:57:59.899491 kubelet[2519]: E0130 13:57:59.899426 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rg6b9" podUID="e4fd20cc-1ebf-4c36-acf8-aae4903f42f0" Jan 30 13:57:59.914980 kubelet[2519]: E0130 13:57:59.911716 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.914980 kubelet[2519]: W0130 13:57:59.911753 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.914980 kubelet[2519]: E0130 13:57:59.911780 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:59.951014 kubelet[2519]: E0130 13:57:59.950932 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.951383 kubelet[2519]: W0130 13:57:59.951353 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.951578 kubelet[2519]: E0130 13:57:59.951555 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.952765 kubelet[2519]: E0130 13:57:59.952687 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.952765 kubelet[2519]: W0130 13:57:59.952714 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.952765 kubelet[2519]: E0130 13:57:59.952736 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.953634 kubelet[2519]: E0130 13:57:59.953585 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.953901 kubelet[2519]: W0130 13:57:59.953702 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.953901 kubelet[2519]: E0130 13:57:59.953723 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.960512 kubelet[2519]: E0130 13:57:59.960282 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.960512 kubelet[2519]: W0130 13:57:59.960313 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.960512 kubelet[2519]: E0130 13:57:59.960339 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.961081 kubelet[2519]: E0130 13:57:59.960948 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.961081 kubelet[2519]: W0130 13:57:59.960966 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.961081 kubelet[2519]: E0130 13:57:59.960984 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:59.961707 kubelet[2519]: E0130 13:57:59.961691 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.961909 kubelet[2519]: W0130 13:57:59.961785 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.961909 kubelet[2519]: E0130 13:57:59.961804 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.962686 kubelet[2519]: E0130 13:57:59.962549 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.962686 kubelet[2519]: W0130 13:57:59.962565 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.962686 kubelet[2519]: E0130 13:57:59.962580 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.964241 kubelet[2519]: E0130 13:57:59.964220 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.964551 kubelet[2519]: W0130 13:57:59.964385 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.964551 kubelet[2519]: E0130 13:57:59.964407 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.965738 kubelet[2519]: E0130 13:57:59.965605 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.965738 kubelet[2519]: W0130 13:57:59.965621 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.965738 kubelet[2519]: E0130 13:57:59.965635 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.970053 kubelet[2519]: E0130 13:57:59.965989 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.970053 kubelet[2519]: W0130 13:57:59.966002 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.970053 kubelet[2519]: E0130 13:57:59.966017 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:59.970053 kubelet[2519]: E0130 13:57:59.966329 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.970053 kubelet[2519]: W0130 13:57:59.966341 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.970053 kubelet[2519]: E0130 13:57:59.966352 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.970053 kubelet[2519]: E0130 13:57:59.966816 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.970053 kubelet[2519]: W0130 13:57:59.966830 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.970053 kubelet[2519]: E0130 13:57:59.966842 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.970053 kubelet[2519]: E0130 13:57:59.968253 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.971457 kubelet[2519]: W0130 13:57:59.968269 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.971457 kubelet[2519]: E0130 13:57:59.968281 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.971457 kubelet[2519]: E0130 13:57:59.968507 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.971457 kubelet[2519]: W0130 13:57:59.968516 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.971457 kubelet[2519]: E0130 13:57:59.968526 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.971457 kubelet[2519]: E0130 13:57:59.968814 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.971457 kubelet[2519]: W0130 13:57:59.968824 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.971457 kubelet[2519]: E0130 13:57:59.968834 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:59.972620 kubelet[2519]: E0130 13:57:59.971982 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.972620 kubelet[2519]: W0130 13:57:59.972003 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.972620 kubelet[2519]: E0130 13:57:59.972024 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.973413 kubelet[2519]: E0130 13:57:59.972999 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.973413 kubelet[2519]: W0130 13:57:59.973018 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.973413 kubelet[2519]: E0130 13:57:59.973041 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.973413 kubelet[2519]: I0130 13:57:59.973155 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e4fd20cc-1ebf-4c36-acf8-aae4903f42f0-varrun\") pod \"csi-node-driver-rg6b9\" (UID: \"e4fd20cc-1ebf-4c36-acf8-aae4903f42f0\") " pod="calico-system/csi-node-driver-rg6b9" Jan 30 13:57:59.973999 kubelet[2519]: E0130 13:57:59.973561 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.973999 kubelet[2519]: W0130 13:57:59.973574 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.974226 kubelet[2519]: E0130 13:57:59.974095 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.974428 kubelet[2519]: I0130 13:57:59.974130 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e4fd20cc-1ebf-4c36-acf8-aae4903f42f0-socket-dir\") pod \"csi-node-driver-rg6b9\" (UID: \"e4fd20cc-1ebf-4c36-acf8-aae4903f42f0\") " pod="calico-system/csi-node-driver-rg6b9" Jan 30 13:57:59.976521 kubelet[2519]: E0130 13:57:59.974520 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.976521 kubelet[2519]: W0130 13:57:59.974530 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.976521 kubelet[2519]: E0130 13:57:59.974550 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:59.976521 kubelet[2519]: E0130 13:57:59.975589 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.976521 kubelet[2519]: W0130 13:57:59.975626 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.976521 kubelet[2519]: E0130 13:57:59.976131 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.976521 kubelet[2519]: W0130 13:57:59.976145 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.978233 kubelet[2519]: E0130 13:57:59.978214 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.979757 kubelet[2519]: W0130 13:57:59.978790 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.981777 kubelet[2519]: E0130 13:57:59.981754 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.982198 kubelet[2519]: W0130 13:57:59.981892 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.982198 kubelet[2519]: E0130 13:57:59.981921 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.984013 kubelet[2519]: E0130 13:57:59.983592 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.984013 kubelet[2519]: E0130 13:57:59.983644 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.984013 kubelet[2519]: E0130 13:57:59.983668 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:59.984013 kubelet[2519]: I0130 13:57:59.983707 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e4fd20cc-1ebf-4c36-acf8-aae4903f42f0-kubelet-dir\") pod \"csi-node-driver-rg6b9\" (UID: \"e4fd20cc-1ebf-4c36-acf8-aae4903f42f0\") " pod="calico-system/csi-node-driver-rg6b9" Jan 30 13:57:59.984481 kubelet[2519]: E0130 13:57:59.984463 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.984851 kubelet[2519]: W0130 13:57:59.984561 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.984851 kubelet[2519]: E0130 13:57:59.984589 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.985184 kubelet[2519]: E0130 13:57:59.985052 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.985184 kubelet[2519]: W0130 13:57:59.985067 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.985184 kubelet[2519]: E0130 13:57:59.985086 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.988264 kubelet[2519]: E0130 13:57:59.987194 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.988264 kubelet[2519]: W0130 13:57:59.987211 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.988264 kubelet[2519]: E0130 13:57:59.987661 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.988264 kubelet[2519]: E0130 13:57:59.987779 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.988264 kubelet[2519]: W0130 13:57:59.987789 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.988264 kubelet[2519]: E0130 13:57:59.987902 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:59.988264 kubelet[2519]: I0130 13:57:59.987959 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e4fd20cc-1ebf-4c36-acf8-aae4903f42f0-registration-dir\") pod \"csi-node-driver-rg6b9\" (UID: \"e4fd20cc-1ebf-4c36-acf8-aae4903f42f0\") " pod="calico-system/csi-node-driver-rg6b9" Jan 30 13:57:59.988488 containerd[1465]: time="2025-01-30T13:57:59.986630633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:59.988488 containerd[1465]: time="2025-01-30T13:57:59.986738974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:59.988488 containerd[1465]: time="2025-01-30T13:57:59.986761087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:59.988488 containerd[1465]: time="2025-01-30T13:57:59.986892747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:59.988977 kubelet[2519]: E0130 13:57:59.988698 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.988977 kubelet[2519]: W0130 13:57:59.988719 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.988977 kubelet[2519]: E0130 13:57:59.988741 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.994377 kubelet[2519]: E0130 13:57:59.991225 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.994377 kubelet[2519]: W0130 13:57:59.991239 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.994377 kubelet[2519]: E0130 13:57:59.991318 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:57:59.994377 kubelet[2519]: E0130 13:57:59.993713 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.994377 kubelet[2519]: W0130 13:57:59.993731 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.994377 kubelet[2519]: E0130 13:57:59.993749 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:57:59.996052 kubelet[2519]: E0130 13:57:59.995989 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:57:59.997378 kubelet[2519]: W0130 13:57:59.997253 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:57:59.997378 kubelet[2519]: E0130 13:57:59.997291 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.001527 kubelet[2519]: E0130 13:58:00.001379 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.001527 kubelet[2519]: W0130 13:58:00.001423 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.001527 kubelet[2519]: E0130 13:58:00.001457 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.011720 kubelet[2519]: E0130 13:58:00.010613 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:00.012434 containerd[1465]: time="2025-01-30T13:58:00.012123142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sw4gr,Uid:d9f0b632-6cac-472a-9be8-cb4fb39b0ef9,Namespace:calico-system,Attempt:0,}" Jan 30 13:58:00.057347 systemd[1]: Started cri-containerd-165d43bd1a5206c96f48121fa395be9977a1137a4e6ac3428caa66375e0ff749.scope - libcontainer container 165d43bd1a5206c96f48121fa395be9977a1137a4e6ac3428caa66375e0ff749. Jan 30 13:58:00.093983 kubelet[2519]: E0130 13:58:00.092082 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.093983 kubelet[2519]: W0130 13:58:00.092130 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.093983 kubelet[2519]: E0130 13:58:00.092201 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.095148 kubelet[2519]: E0130 13:58:00.094499 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.095148 kubelet[2519]: W0130 13:58:00.094534 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.095148 kubelet[2519]: E0130 13:58:00.094591 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:58:00.095148 kubelet[2519]: I0130 13:58:00.095015 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7skl\" (UniqueName: \"kubernetes.io/projected/e4fd20cc-1ebf-4c36-acf8-aae4903f42f0-kube-api-access-z7skl\") pod \"csi-node-driver-rg6b9\" (UID: \"e4fd20cc-1ebf-4c36-acf8-aae4903f42f0\") " pod="calico-system/csi-node-driver-rg6b9" Jan 30 13:58:00.096209 kubelet[2519]: E0130 13:58:00.096165 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.096209 kubelet[2519]: W0130 13:58:00.096192 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.096209 kubelet[2519]: E0130 13:58:00.096230 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.097699 kubelet[2519]: E0130 13:58:00.097389 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.097699 kubelet[2519]: W0130 13:58:00.097413 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.097699 kubelet[2519]: E0130 13:58:00.097466 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.100257 kubelet[2519]: E0130 13:58:00.098848 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.100257 kubelet[2519]: W0130 13:58:00.098879 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.100257 kubelet[2519]: E0130 13:58:00.099035 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:58:00.101766 kubelet[2519]: E0130 13:58:00.101329 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.101766 kubelet[2519]: W0130 13:58:00.101366 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.101962 kubelet[2519]: E0130 13:58:00.101879 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.101962 kubelet[2519]: W0130 13:58:00.101898 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.104287 kubelet[2519]: E0130 13:58:00.103185 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.104287 kubelet[2519]: W0130 13:58:00.103209 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.104287 kubelet[2519]: E0130 13:58:00.104151 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.104287 kubelet[2519]: W0130 13:58:00.104171 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.104287 kubelet[2519]: E0130 13:58:00.104195 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.105918 kubelet[2519]: E0130 13:58:00.105238 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.105918 kubelet[2519]: E0130 13:58:00.105280 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.105918 kubelet[2519]: E0130 13:58:00.105336 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.105918 kubelet[2519]: E0130 13:58:00.105437 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.105918 kubelet[2519]: W0130 13:58:00.105448 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.105918 kubelet[2519]: E0130 13:58:00.105463 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:58:00.106853 kubelet[2519]: E0130 13:58:00.106821 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.106853 kubelet[2519]: W0130 13:58:00.106846 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.106853 kubelet[2519]: E0130 13:58:00.106868 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.108906 kubelet[2519]: E0130 13:58:00.108843 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.108906 kubelet[2519]: W0130 13:58:00.108874 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.109955 kubelet[2519]: E0130 13:58:00.109487 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.113370 kubelet[2519]: E0130 13:58:00.113099 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.113370 kubelet[2519]: W0130 13:58:00.113133 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.113370 kubelet[2519]: E0130 13:58:00.113197 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.116387 kubelet[2519]: E0130 13:58:00.116084 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.116387 kubelet[2519]: W0130 13:58:00.116118 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.117906 kubelet[2519]: E0130 13:58:00.117363 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.119107 kubelet[2519]: E0130 13:58:00.119075 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.120920 kubelet[2519]: W0130 13:58:00.120238 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.121173 kubelet[2519]: E0130 13:58:00.120788 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:58:00.124000 kubelet[2519]: E0130 13:58:00.123058 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.124287 kubelet[2519]: W0130 13:58:00.124253 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.127084 kubelet[2519]: E0130 13:58:00.124869 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.129968 kubelet[2519]: E0130 13:58:00.127378 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.129968 kubelet[2519]: W0130 13:58:00.127411 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.129968 kubelet[2519]: E0130 13:58:00.127481 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.130814 containerd[1465]: time="2025-01-30T13:58:00.123716925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:58:00.130814 containerd[1465]: time="2025-01-30T13:58:00.123916491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:58:00.130814 containerd[1465]: time="2025-01-30T13:58:00.124066111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:00.130814 containerd[1465]: time="2025-01-30T13:58:00.125228922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:00.131107 kubelet[2519]: E0130 13:58:00.130549 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.131107 kubelet[2519]: W0130 13:58:00.130575 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.131107 kubelet[2519]: E0130 13:58:00.130631 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.134156 kubelet[2519]: E0130 13:58:00.133690 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.134156 kubelet[2519]: W0130 13:58:00.133724 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.134156 kubelet[2519]: E0130 13:58:00.133801 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:58:00.136640 kubelet[2519]: E0130 13:58:00.134887 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.136640 kubelet[2519]: W0130 13:58:00.134915 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.136640 kubelet[2519]: E0130 13:58:00.134965 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.141986 kubelet[2519]: E0130 13:58:00.140652 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.141986 kubelet[2519]: W0130 13:58:00.140689 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.141986 kubelet[2519]: E0130 13:58:00.140759 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.144795 kubelet[2519]: E0130 13:58:00.143007 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.144795 kubelet[2519]: W0130 13:58:00.143036 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.144795 kubelet[2519]: E0130 13:58:00.143068 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.148967 kubelet[2519]: E0130 13:58:00.148282 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.148967 kubelet[2519]: W0130 13:58:00.148315 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.148967 kubelet[2519]: E0130 13:58:00.148348 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.173750 systemd[1]: Started cri-containerd-c008237cdf75f6a4b312c70628b8a132a4ae9183f6558e1b5638c765aa37b89e.scope - libcontainer container c008237cdf75f6a4b312c70628b8a132a4ae9183f6558e1b5638c765aa37b89e. 
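The repeated kubelet entries above come from FlexVolume plugin probing: the kubelet executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and parses its stdout as JSON, so a missing executable produces empty output and the "unexpected end of JSON input" error. Below is a minimal sketch of that call convention, assuming a hypothetical Go stub in place of the real uds binary (normally installed later by the flexvol-driver container built from the pod2daemon-flexvol image pulled further down); it is not Calico's actual driver.

// flexvol_stub.go - minimal sketch of the FlexVolume call convention the kubelet
// expects from the uds driver: the binary is invoked as `uds init` and must print
// a JSON status object on stdout. A missing binary means no output, which is what
// the "unexpected end of JSON input" lines above report.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and declare that no controller-side attach is needed.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Operations this stub does not implement are reported as unsupported.
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}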
Jan 30 13:58:00.215053 kubelet[2519]: E0130 13:58:00.214975 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.215053 kubelet[2519]: W0130 13:58:00.215015 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.215351 kubelet[2519]: E0130 13:58:00.215086 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.216081 kubelet[2519]: E0130 13:58:00.216047 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.216311 kubelet[2519]: W0130 13:58:00.216074 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.216311 kubelet[2519]: E0130 13:58:00.216117 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.227980 kubelet[2519]: E0130 13:58:00.227276 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.227980 kubelet[2519]: W0130 13:58:00.227323 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.227980 kubelet[2519]: E0130 13:58:00.227360 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.231036 kubelet[2519]: E0130 13:58:00.230458 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.231036 kubelet[2519]: W0130 13:58:00.230494 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.231036 kubelet[2519]: E0130 13:58:00.230527 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:00.232868 kubelet[2519]: E0130 13:58:00.232541 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.232868 kubelet[2519]: W0130 13:58:00.232639 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.232868 kubelet[2519]: E0130 13:58:00.232682 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:58:00.266159 containerd[1465]: time="2025-01-30T13:58:00.266104565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sw4gr,Uid:d9f0b632-6cac-472a-9be8-cb4fb39b0ef9,Namespace:calico-system,Attempt:0,} returns sandbox id \"c008237cdf75f6a4b312c70628b8a132a4ae9183f6558e1b5638c765aa37b89e\"" Jan 30 13:58:00.284888 kubelet[2519]: E0130 13:58:00.284650 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:00.299783 containerd[1465]: time="2025-01-30T13:58:00.299582373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54bd668b45-9jkf7,Uid:f2e9c190-653c-400d-9976-d2d3df54819a,Namespace:calico-system,Attempt:0,} returns sandbox id \"165d43bd1a5206c96f48121fa395be9977a1137a4e6ac3428caa66375e0ff749\"" Jan 30 13:58:00.306758 containerd[1465]: time="2025-01-30T13:58:00.306289450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:58:00.307376 kubelet[2519]: E0130 13:58:00.307165 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:00.307376 kubelet[2519]: E0130 13:58:00.307212 2519 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:58:00.307376 kubelet[2519]: W0130 13:58:00.307238 2519 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:58:00.307376 kubelet[2519]: E0130 13:58:00.307264 2519 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:58:01.759826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782598023.mount: Deactivated successfully. 
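The recurring dns.go "Nameserver limits exceeded" warnings reflect the kubelet's cap of three nameserver entries per applied resolv.conf line; anything beyond that is dropped and the resulting line is logged, here as 67.207.67.2 67.207.67.3 67.207.67.2. The sketch below only illustrates that truncation rule; it is not the kubelet's own implementation, and the fourth entry in the example list is hypothetical.

// nameserver_cap.go - illustrative sketch of the three-nameserver cap behind the
// "Nameserver limits exceeded" warnings above (not kubelet source).
package main

import "fmt"

const maxNameservers = 3

func capNameservers(servers []string) []string {
	if len(servers) > maxNameservers {
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical resolv.conf contents; the log above shows the applied line
	// ends up as "67.207.67.2 67.207.67.3 67.207.67.2".
	fromResolvConf := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "8.8.8.8"}
	fmt.Println(capNameservers(fromResolvConf))
}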
Jan 30 13:58:01.995011 containerd[1465]: time="2025-01-30T13:58:01.994827951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:02.004114 containerd[1465]: time="2025-01-30T13:58:02.003995296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:58:02.012675 containerd[1465]: time="2025-01-30T13:58:02.012485710Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:02.019981 kubelet[2519]: E0130 13:58:02.019469 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rg6b9" podUID="e4fd20cc-1ebf-4c36-acf8-aae4903f42f0" Jan 30 13:58:02.020592 containerd[1465]: time="2025-01-30T13:58:02.019535457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:02.021629 containerd[1465]: time="2025-01-30T13:58:02.021441785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.714674758s" Jan 30 13:58:02.021629 containerd[1465]: time="2025-01-30T13:58:02.021509517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:58:02.023808 containerd[1465]: time="2025-01-30T13:58:02.023745887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:58:02.033161 containerd[1465]: time="2025-01-30T13:58:02.033072456Z" level=info msg="CreateContainer within sandbox \"c008237cdf75f6a4b312c70628b8a132a4ae9183f6558e1b5638c765aa37b89e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:58:02.091253 containerd[1465]: time="2025-01-30T13:58:02.090678694Z" level=info msg="CreateContainer within sandbox \"c008237cdf75f6a4b312c70628b8a132a4ae9183f6558e1b5638c765aa37b89e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c7f414a95a08b9aab622efb534dc4046ffc387884c5f100a067e263bd6f5d24a\"" Jan 30 13:58:02.092095 containerd[1465]: time="2025-01-30T13:58:02.091824419Z" level=info msg="StartContainer for \"c7f414a95a08b9aab622efb534dc4046ffc387884c5f100a067e263bd6f5d24a\"" Jan 30 13:58:02.144418 systemd[1]: Started cri-containerd-c7f414a95a08b9aab622efb534dc4046ffc387884c5f100a067e263bd6f5d24a.scope - libcontainer container c7f414a95a08b9aab622efb534dc4046ffc387884c5f100a067e263bd6f5d24a. 
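The Pulled line above reports the pod2daemon-flexvol image fetch as taking 1.714674758s, which lines up with the gap between the PullImage request logged at 13:58:00.306289450Z and the Pulled event at 13:58:02.021441785Z. The sketch below repeats that subtraction using the two timestamps copied from the log; containerd measures between slightly different internal points, so the result is close to, not exactly, the reported value.

// pull_duration.go - recomputes the image pull duration from the two containerd
// timestamps quoted above.
package main

import (
	"fmt"
	"time"
)

func main() {
	started, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:58:00.306289450Z")
	finished, _ := time.Parse(time.RFC3339Nano, "2025-01-30T13:58:02.021441785Z")
	// Prints roughly 1.715152335s, close to the 1.714674758s reported in the log.
	fmt.Println(finished.Sub(started))
}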
Jan 30 13:58:02.205534 containerd[1465]: time="2025-01-30T13:58:02.205115129Z" level=info msg="StartContainer for \"c7f414a95a08b9aab622efb534dc4046ffc387884c5f100a067e263bd6f5d24a\" returns successfully" Jan 30 13:58:02.238294 systemd[1]: cri-containerd-c7f414a95a08b9aab622efb534dc4046ffc387884c5f100a067e263bd6f5d24a.scope: Deactivated successfully. Jan 30 13:58:02.282076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7f414a95a08b9aab622efb534dc4046ffc387884c5f100a067e263bd6f5d24a-rootfs.mount: Deactivated successfully. Jan 30 13:58:02.290581 containerd[1465]: time="2025-01-30T13:58:02.290340584Z" level=info msg="shim disconnected" id=c7f414a95a08b9aab622efb534dc4046ffc387884c5f100a067e263bd6f5d24a namespace=k8s.io Jan 30 13:58:02.290581 containerd[1465]: time="2025-01-30T13:58:02.290431397Z" level=warning msg="cleaning up after shim disconnected" id=c7f414a95a08b9aab622efb534dc4046ffc387884c5f100a067e263bd6f5d24a namespace=k8s.io Jan 30 13:58:02.290581 containerd[1465]: time="2025-01-30T13:58:02.290446646Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:58:03.152690 kubelet[2519]: E0130 13:58:03.151843 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:03.993981 kubelet[2519]: E0130 13:58:03.992792 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rg6b9" podUID="e4fd20cc-1ebf-4c36-acf8-aae4903f42f0" Jan 30 13:58:04.562920 containerd[1465]: time="2025-01-30T13:58:04.562760628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:04.566342 containerd[1465]: time="2025-01-30T13:58:04.565726607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 13:58:04.568525 containerd[1465]: time="2025-01-30T13:58:04.568347637Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:04.576290 containerd[1465]: time="2025-01-30T13:58:04.576130028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:04.578616 containerd[1465]: time="2025-01-30T13:58:04.577770356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.553964441s" Jan 30 13:58:04.578616 containerd[1465]: time="2025-01-30T13:58:04.577835185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:58:04.581365 containerd[1465]: time="2025-01-30T13:58:04.581075249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:58:04.612043 containerd[1465]: 
time="2025-01-30T13:58:04.612001333Z" level=info msg="CreateContainer within sandbox \"165d43bd1a5206c96f48121fa395be9977a1137a4e6ac3428caa66375e0ff749\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:58:04.651199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961114400.mount: Deactivated successfully. Jan 30 13:58:04.660301 containerd[1465]: time="2025-01-30T13:58:04.660254461Z" level=info msg="CreateContainer within sandbox \"165d43bd1a5206c96f48121fa395be9977a1137a4e6ac3428caa66375e0ff749\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7f57d1ec51cc382c9aa495c6ad8eb3806440a3730474c11009c51fb4fdc6beb0\"" Jan 30 13:58:04.663695 containerd[1465]: time="2025-01-30T13:58:04.663554724Z" level=info msg="StartContainer for \"7f57d1ec51cc382c9aa495c6ad8eb3806440a3730474c11009c51fb4fdc6beb0\"" Jan 30 13:58:04.732387 systemd[1]: Started cri-containerd-7f57d1ec51cc382c9aa495c6ad8eb3806440a3730474c11009c51fb4fdc6beb0.scope - libcontainer container 7f57d1ec51cc382c9aa495c6ad8eb3806440a3730474c11009c51fb4fdc6beb0. Jan 30 13:58:04.832586 containerd[1465]: time="2025-01-30T13:58:04.832437113Z" level=info msg="StartContainer for \"7f57d1ec51cc382c9aa495c6ad8eb3806440a3730474c11009c51fb4fdc6beb0\" returns successfully" Jan 30 13:58:05.156161 kubelet[2519]: E0130 13:58:05.156016 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:05.179914 kubelet[2519]: I0130 13:58:05.178574 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54bd668b45-9jkf7" podStartSLOduration=1.9062623589999999 podStartE2EDuration="6.178546953s" podCreationTimestamp="2025-01-30 13:57:59 +0000 UTC" firstStartedPulling="2025-01-30 13:58:00.30818033 +0000 UTC m=+13.534630455" lastFinishedPulling="2025-01-30 13:58:04.580464919 +0000 UTC m=+17.806915049" observedRunningTime="2025-01-30 13:58:05.17806165 +0000 UTC m=+18.404511805" watchObservedRunningTime="2025-01-30 13:58:05.178546953 +0000 UTC m=+18.404997088" Jan 30 13:58:05.992023 kubelet[2519]: E0130 13:58:05.991930 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rg6b9" podUID="e4fd20cc-1ebf-4c36-acf8-aae4903f42f0" Jan 30 13:58:06.158089 kubelet[2519]: I0130 13:58:06.157734 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:58:06.160178 kubelet[2519]: E0130 13:58:06.159131 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:07.991300 kubelet[2519]: E0130 13:58:07.991252 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rg6b9" podUID="e4fd20cc-1ebf-4c36-acf8-aae4903f42f0" Jan 30 13:58:08.338414 containerd[1465]: time="2025-01-30T13:58:08.338060358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 
13:58:08.353575 containerd[1465]: time="2025-01-30T13:58:08.353321018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:58:08.362524 containerd[1465]: time="2025-01-30T13:58:08.362316468Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:08.364611 containerd[1465]: time="2025-01-30T13:58:08.363687151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.782551093s" Jan 30 13:58:08.364611 containerd[1465]: time="2025-01-30T13:58:08.363741280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:58:08.364611 containerd[1465]: time="2025-01-30T13:58:08.364419602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:08.368443 containerd[1465]: time="2025-01-30T13:58:08.368394463Z" level=info msg="CreateContainer within sandbox \"c008237cdf75f6a4b312c70628b8a132a4ae9183f6558e1b5638c765aa37b89e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:58:08.427922 containerd[1465]: time="2025-01-30T13:58:08.427781634Z" level=info msg="CreateContainer within sandbox \"c008237cdf75f6a4b312c70628b8a132a4ae9183f6558e1b5638c765aa37b89e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"deb9d514671a21a6c36023ebd5191789cd7786d2e16b370a80b9c435dd320301\"" Jan 30 13:58:08.428968 containerd[1465]: time="2025-01-30T13:58:08.428719353Z" level=info msg="StartContainer for \"deb9d514671a21a6c36023ebd5191789cd7786d2e16b370a80b9c435dd320301\"" Jan 30 13:58:08.548669 systemd[1]: run-containerd-runc-k8s.io-deb9d514671a21a6c36023ebd5191789cd7786d2e16b370a80b9c435dd320301-runc.Mf8DF3.mount: Deactivated successfully. Jan 30 13:58:08.564271 systemd[1]: Started cri-containerd-deb9d514671a21a6c36023ebd5191789cd7786d2e16b370a80b9c435dd320301.scope - libcontainer container deb9d514671a21a6c36023ebd5191789cd7786d2e16b370a80b9c435dd320301. Jan 30 13:58:08.621205 containerd[1465]: time="2025-01-30T13:58:08.621036447Z" level=info msg="StartContainer for \"deb9d514671a21a6c36023ebd5191789cd7786d2e16b370a80b9c435dd320301\" returns successfully" Jan 30 13:58:09.210589 kubelet[2519]: E0130 13:58:09.210550 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:09.472260 systemd[1]: cri-containerd-deb9d514671a21a6c36023ebd5191789cd7786d2e16b370a80b9c435dd320301.scope: Deactivated successfully. Jan 30 13:58:09.519674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-deb9d514671a21a6c36023ebd5191789cd7786d2e16b370a80b9c435dd320301-rootfs.mount: Deactivated successfully. 
Jan 30 13:58:09.524692 containerd[1465]: time="2025-01-30T13:58:09.524623918Z" level=info msg="shim disconnected" id=deb9d514671a21a6c36023ebd5191789cd7786d2e16b370a80b9c435dd320301 namespace=k8s.io Jan 30 13:58:09.525437 containerd[1465]: time="2025-01-30T13:58:09.524690090Z" level=warning msg="cleaning up after shim disconnected" id=deb9d514671a21a6c36023ebd5191789cd7786d2e16b370a80b9c435dd320301 namespace=k8s.io Jan 30 13:58:09.525437 containerd[1465]: time="2025-01-30T13:58:09.524710401Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:58:09.547649 kubelet[2519]: I0130 13:58:09.547522 2519 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:58:09.599814 kubelet[2519]: I0130 13:58:09.599732 2519 status_manager.go:890] "Failed to get status for pod" podUID="be6e8263-d40b-423f-8220-c5dba67bce2a" pod="kube-system/coredns-668d6bf9bc-rvj9k" err="pods \"coredns-668d6bf9bc-rvj9k\" is forbidden: User \"system:node:ci-4081.3.0-2-c6825061e7\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.0-2-c6825061e7' and this object" Jan 30 13:58:09.600287 kubelet[2519]: W0130 13:58:09.600176 2519 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081.3.0-2-c6825061e7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-2-c6825061e7' and this object Jan 30 13:58:09.605277 systemd[1]: Created slice kubepods-burstable-podbe6e8263_d40b_423f_8220_c5dba67bce2a.slice - libcontainer container kubepods-burstable-podbe6e8263_d40b_423f_8220_c5dba67bce2a.slice. Jan 30 13:58:09.620235 kubelet[2519]: E0130 13:58:09.619716 2519 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081.3.0-2-c6825061e7\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081.3.0-2-c6825061e7' and this object" logger="UnhandledError" Jan 30 13:58:09.632252 systemd[1]: Created slice kubepods-besteffort-podd4596862_5cca_4d1a_98a1_719edf3cebdc.slice - libcontainer container kubepods-besteffort-podd4596862_5cca_4d1a_98a1_719edf3cebdc.slice. Jan 30 13:58:09.645016 systemd[1]: Created slice kubepods-burstable-pod8df787c1_03f8_4203_9d8d_3a85d1fa0a95.slice - libcontainer container kubepods-burstable-pod8df787c1_03f8_4203_9d8d_3a85d1fa0a95.slice. Jan 30 13:58:09.658916 systemd[1]: Created slice kubepods-besteffort-pod4252853d_be36_4c01_b117_ed9b5390c193.slice - libcontainer container kubepods-besteffort-pod4252853d_be36_4c01_b117_ed9b5390c193.slice. Jan 30 13:58:09.671112 systemd[1]: Created slice kubepods-besteffort-pod3232a66a_b80f_4c5f_91a6_ce83f301a87d.slice - libcontainer container kubepods-besteffort-pod3232a66a_b80f_4c5f_91a6_ce83f301a87d.slice. 
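The kubepods-*.slice units created above encode each pod's QoS class and UID: the UID from the status entries (for example be6e8263-d40b-423f-8220-c5dba67bce2a for coredns-668d6bf9bc-rvj9k) reappears in the slice name with its dashes replaced by underscores. The sketch below reproduces that naming as observed in these log lines, not as taken from kubelet source.

// pod_slice_name.go - derives the systemd slice names seen above from a pod's
// QoS class and UID (dashes in the UID become underscores).
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID of coredns-668d6bf9bc-rvj9k as logged above.
	fmt.Println(podSlice("burstable", "be6e8263-d40b-423f-8220-c5dba67bce2a"))
	// -> kubepods-burstable-podbe6e8263_d40b_423f_8220_c5dba67bce2a.slice
}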
Jan 30 13:58:09.699147 kubelet[2519]: I0130 13:58:09.699077 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8df787c1-03f8-4203-9d8d-3a85d1fa0a95-config-volume\") pod \"coredns-668d6bf9bc-grrxz\" (UID: \"8df787c1-03f8-4203-9d8d-3a85d1fa0a95\") " pod="kube-system/coredns-668d6bf9bc-grrxz" Jan 30 13:58:09.699147 kubelet[2519]: I0130 13:58:09.699158 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbr22\" (UniqueName: \"kubernetes.io/projected/4252853d-be36-4c01-b117-ed9b5390c193-kube-api-access-jbr22\") pod \"calico-apiserver-578cd5cfcf-lxt7l\" (UID: \"4252853d-be36-4c01-b117-ed9b5390c193\") " pod="calico-apiserver/calico-apiserver-578cd5cfcf-lxt7l" Jan 30 13:58:09.699424 kubelet[2519]: I0130 13:58:09.699192 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzlxb\" (UniqueName: \"kubernetes.io/projected/8df787c1-03f8-4203-9d8d-3a85d1fa0a95-kube-api-access-wzlxb\") pod \"coredns-668d6bf9bc-grrxz\" (UID: \"8df787c1-03f8-4203-9d8d-3a85d1fa0a95\") " pod="kube-system/coredns-668d6bf9bc-grrxz" Jan 30 13:58:09.699424 kubelet[2519]: I0130 13:58:09.699217 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3232a66a-b80f-4c5f-91a6-ce83f301a87d-calico-apiserver-certs\") pod \"calico-apiserver-578cd5cfcf-m89qp\" (UID: \"3232a66a-b80f-4c5f-91a6-ce83f301a87d\") " pod="calico-apiserver/calico-apiserver-578cd5cfcf-m89qp" Jan 30 13:58:09.699424 kubelet[2519]: I0130 13:58:09.699249 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4596862-5cca-4d1a-98a1-719edf3cebdc-tigera-ca-bundle\") pod \"calico-kube-controllers-cd989f4bc-5k58q\" (UID: \"d4596862-5cca-4d1a-98a1-719edf3cebdc\") " pod="calico-system/calico-kube-controllers-cd989f4bc-5k58q" Jan 30 13:58:09.699424 kubelet[2519]: I0130 13:58:09.699277 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qslpk\" (UniqueName: \"kubernetes.io/projected/d4596862-5cca-4d1a-98a1-719edf3cebdc-kube-api-access-qslpk\") pod \"calico-kube-controllers-cd989f4bc-5k58q\" (UID: \"d4596862-5cca-4d1a-98a1-719edf3cebdc\") " pod="calico-system/calico-kube-controllers-cd989f4bc-5k58q" Jan 30 13:58:09.699424 kubelet[2519]: I0130 13:58:09.699306 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8dk5\" (UniqueName: \"kubernetes.io/projected/3232a66a-b80f-4c5f-91a6-ce83f301a87d-kube-api-access-q8dk5\") pod \"calico-apiserver-578cd5cfcf-m89qp\" (UID: \"3232a66a-b80f-4c5f-91a6-ce83f301a87d\") " pod="calico-apiserver/calico-apiserver-578cd5cfcf-m89qp" Jan 30 13:58:09.699807 kubelet[2519]: I0130 13:58:09.699333 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be6e8263-d40b-423f-8220-c5dba67bce2a-config-volume\") pod \"coredns-668d6bf9bc-rvj9k\" (UID: \"be6e8263-d40b-423f-8220-c5dba67bce2a\") " pod="kube-system/coredns-668d6bf9bc-rvj9k" Jan 30 13:58:09.699807 kubelet[2519]: I0130 13:58:09.699403 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4252853d-be36-4c01-b117-ed9b5390c193-calico-apiserver-certs\") pod \"calico-apiserver-578cd5cfcf-lxt7l\" (UID: \"4252853d-be36-4c01-b117-ed9b5390c193\") " pod="calico-apiserver/calico-apiserver-578cd5cfcf-lxt7l" Jan 30 13:58:09.699807 kubelet[2519]: I0130 13:58:09.699434 2519 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqw56\" (UniqueName: \"kubernetes.io/projected/be6e8263-d40b-423f-8220-c5dba67bce2a-kube-api-access-fqw56\") pod \"coredns-668d6bf9bc-rvj9k\" (UID: \"be6e8263-d40b-423f-8220-c5dba67bce2a\") " pod="kube-system/coredns-668d6bf9bc-rvj9k" Jan 30 13:58:09.940398 containerd[1465]: time="2025-01-30T13:58:09.939838079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd989f4bc-5k58q,Uid:d4596862-5cca-4d1a-98a1-719edf3cebdc,Namespace:calico-system,Attempt:0,}" Jan 30 13:58:09.967445 containerd[1465]: time="2025-01-30T13:58:09.967077595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578cd5cfcf-lxt7l,Uid:4252853d-be36-4c01-b117-ed9b5390c193,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:58:09.993010 containerd[1465]: time="2025-01-30T13:58:09.992656028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578cd5cfcf-m89qp,Uid:3232a66a-b80f-4c5f-91a6-ce83f301a87d,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:58:10.012791 systemd[1]: Created slice kubepods-besteffort-pode4fd20cc_1ebf_4c36_acf8_aae4903f42f0.slice - libcontainer container kubepods-besteffort-pode4fd20cc_1ebf_4c36_acf8_aae4903f42f0.slice. Jan 30 13:58:10.018495 containerd[1465]: time="2025-01-30T13:58:10.018444159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rg6b9,Uid:e4fd20cc-1ebf-4c36-acf8-aae4903f42f0,Namespace:calico-system,Attempt:0,}" Jan 30 13:58:10.222681 kubelet[2519]: E0130 13:58:10.221555 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:10.231487 containerd[1465]: time="2025-01-30T13:58:10.230891239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:58:10.522243 containerd[1465]: time="2025-01-30T13:58:10.519722967Z" level=error msg="Failed to destroy network for sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.529499 containerd[1465]: time="2025-01-30T13:58:10.529282124Z" level=error msg="Failed to destroy network for sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.533044 containerd[1465]: time="2025-01-30T13:58:10.529873565Z" level=error msg="encountered an error cleaning up failed sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.533044 
containerd[1465]: time="2025-01-30T13:58:10.529974327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578cd5cfcf-lxt7l,Uid:4252853d-be36-4c01-b117-ed9b5390c193,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.533044 containerd[1465]: time="2025-01-30T13:58:10.530983309Z" level=error msg="encountered an error cleaning up failed sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.533044 containerd[1465]: time="2025-01-30T13:58:10.531077795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd989f4bc-5k58q,Uid:d4596862-5cca-4d1a-98a1-719edf3cebdc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.543683 containerd[1465]: time="2025-01-30T13:58:10.540106684Z" level=error msg="Failed to destroy network for sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.543683 containerd[1465]: time="2025-01-30T13:58:10.540446734Z" level=error msg="encountered an error cleaning up failed sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.543683 containerd[1465]: time="2025-01-30T13:58:10.540495421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578cd5cfcf-m89qp,Uid:3232a66a-b80f-4c5f-91a6-ce83f301a87d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.543683 containerd[1465]: time="2025-01-30T13:58:10.540613422Z" level=error msg="Failed to destroy network for sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.543683 containerd[1465]: time="2025-01-30T13:58:10.540968231Z" level=error msg="encountered an error cleaning up failed sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.543683 containerd[1465]: time="2025-01-30T13:58:10.541012297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rg6b9,Uid:e4fd20cc-1ebf-4c36-acf8-aae4903f42f0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.545459 kubelet[2519]: E0130 13:58:10.542236 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.545459 kubelet[2519]: E0130 13:58:10.542320 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rg6b9" Jan 30 13:58:10.545459 kubelet[2519]: E0130 13:58:10.542346 2519 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rg6b9" Jan 30 13:58:10.545712 kubelet[2519]: E0130 13:58:10.542413 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rg6b9_calico-system(e4fd20cc-1ebf-4c36-acf8-aae4903f42f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rg6b9_calico-system(e4fd20cc-1ebf-4c36-acf8-aae4903f42f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rg6b9" podUID="e4fd20cc-1ebf-4c36-acf8-aae4903f42f0" Jan 30 13:58:10.545712 kubelet[2519]: E0130 13:58:10.542874 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.545712 kubelet[2519]: E0130 13:58:10.542959 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd989f4bc-5k58q" Jan 30 13:58:10.547679 kubelet[2519]: E0130 13:58:10.542988 2519 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd989f4bc-5k58q" Jan 30 13:58:10.547679 kubelet[2519]: E0130 13:58:10.543044 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cd989f4bc-5k58q_calico-system(d4596862-5cca-4d1a-98a1-719edf3cebdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cd989f4bc-5k58q_calico-system(d4596862-5cca-4d1a-98a1-719edf3cebdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd989f4bc-5k58q" podUID="d4596862-5cca-4d1a-98a1-719edf3cebdc" Jan 30 13:58:10.547679 kubelet[2519]: E0130 13:58:10.543094 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.548178 kubelet[2519]: E0130 13:58:10.543119 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-578cd5cfcf-lxt7l" Jan 30 13:58:10.548178 kubelet[2519]: E0130 13:58:10.543155 2519 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-578cd5cfcf-lxt7l" Jan 30 13:58:10.548178 kubelet[2519]: E0130 13:58:10.543191 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-578cd5cfcf-lxt7l_calico-apiserver(4252853d-be36-4c01-b117-ed9b5390c193)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-578cd5cfcf-lxt7l_calico-apiserver(4252853d-be36-4c01-b117-ed9b5390c193)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-578cd5cfcf-lxt7l" podUID="4252853d-be36-4c01-b117-ed9b5390c193" Jan 30 13:58:10.548326 kubelet[2519]: E0130 13:58:10.543238 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:10.548326 kubelet[2519]: E0130 13:58:10.543260 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-578cd5cfcf-m89qp" Jan 30 13:58:10.548326 kubelet[2519]: E0130 13:58:10.543285 2519 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-578cd5cfcf-m89qp" Jan 30 13:58:10.548437 kubelet[2519]: E0130 13:58:10.543316 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-578cd5cfcf-m89qp_calico-apiserver(3232a66a-b80f-4c5f-91a6-ce83f301a87d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-578cd5cfcf-m89qp_calico-apiserver(3232a66a-b80f-4c5f-91a6-ce83f301a87d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-578cd5cfcf-m89qp" podUID="3232a66a-b80f-4c5f-91a6-ce83f301a87d" Jan 30 13:58:10.550789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7-shm.mount: Deactivated successfully. Jan 30 13:58:10.550989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b-shm.mount: Deactivated successfully. Jan 30 13:58:10.551096 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb-shm.mount: Deactivated successfully. 
Jan 30 13:58:10.551202 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6-shm.mount: Deactivated successfully. Jan 30 13:58:10.805774 kubelet[2519]: E0130 13:58:10.805335 2519 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:58:10.805774 kubelet[2519]: E0130 13:58:10.805381 2519 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:58:10.805774 kubelet[2519]: E0130 13:58:10.805475 2519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8df787c1-03f8-4203-9d8d-3a85d1fa0a95-config-volume podName:8df787c1-03f8-4203-9d8d-3a85d1fa0a95 nodeName:}" failed. No retries permitted until 2025-01-30 13:58:11.305447742 +0000 UTC m=+24.531897867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8df787c1-03f8-4203-9d8d-3a85d1fa0a95-config-volume") pod "coredns-668d6bf9bc-grrxz" (UID: "8df787c1-03f8-4203-9d8d-3a85d1fa0a95") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:58:10.805774 kubelet[2519]: E0130 13:58:10.805494 2519 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/be6e8263-d40b-423f-8220-c5dba67bce2a-config-volume podName:be6e8263-d40b-423f-8220-c5dba67bce2a nodeName:}" failed. No retries permitted until 2025-01-30 13:58:11.30548482 +0000 UTC m=+24.531934934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be6e8263-d40b-423f-8220-c5dba67bce2a-config-volume") pod "coredns-668d6bf9bc-rvj9k" (UID: "be6e8263-d40b-423f-8220-c5dba67bce2a") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:58:11.224315 kubelet[2519]: I0130 13:58:11.223628 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:11.228458 kubelet[2519]: I0130 13:58:11.227433 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:11.230814 containerd[1465]: time="2025-01-30T13:58:11.230399755Z" level=info msg="StopPodSandbox for \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\"" Jan 30 13:58:11.232584 containerd[1465]: time="2025-01-30T13:58:11.232511208Z" level=info msg="Ensure that sandbox 9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7 in task-service has been cleanup successfully" Jan 30 13:58:11.239383 kubelet[2519]: I0130 13:58:11.239347 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:11.241298 containerd[1465]: time="2025-01-30T13:58:11.241112478Z" level=info msg="StopPodSandbox for \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\"" Jan 30 13:58:11.242195 containerd[1465]: time="2025-01-30T13:58:11.242044584Z" level=info msg="Ensure that sandbox 5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b in task-service has been cleanup successfully" Jan 30 13:58:11.243477 containerd[1465]: time="2025-01-30T13:58:11.242896488Z" level=info msg="StopPodSandbox for 
\"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\"" Jan 30 13:58:11.243477 containerd[1465]: time="2025-01-30T13:58:11.243189113Z" level=info msg="Ensure that sandbox 9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb in task-service has been cleanup successfully" Jan 30 13:58:11.248292 kubelet[2519]: I0130 13:58:11.248080 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:11.250977 containerd[1465]: time="2025-01-30T13:58:11.250900525Z" level=info msg="StopPodSandbox for \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\"" Jan 30 13:58:11.255036 containerd[1465]: time="2025-01-30T13:58:11.254988526Z" level=info msg="Ensure that sandbox 2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6 in task-service has been cleanup successfully" Jan 30 13:58:11.317308 containerd[1465]: time="2025-01-30T13:58:11.317168379Z" level=error msg="StopPodSandbox for \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\" failed" error="failed to destroy network for sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.318159 kubelet[2519]: E0130 13:58:11.317643 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:11.318159 kubelet[2519]: E0130 13:58:11.317716 2519 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7"} Jan 30 13:58:11.318159 kubelet[2519]: E0130 13:58:11.318074 2519 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4fd20cc-1ebf-4c36-acf8-aae4903f42f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:58:11.318159 kubelet[2519]: E0130 13:58:11.318111 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4fd20cc-1ebf-4c36-acf8-aae4903f42f0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rg6b9" podUID="e4fd20cc-1ebf-4c36-acf8-aae4903f42f0" Jan 30 13:58:11.330290 containerd[1465]: time="2025-01-30T13:58:11.329329878Z" level=error msg="StopPodSandbox for \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\" 
failed" error="failed to destroy network for sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.330466 kubelet[2519]: E0130 13:58:11.330107 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:11.330466 kubelet[2519]: E0130 13:58:11.330160 2519 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6"} Jan 30 13:58:11.330466 kubelet[2519]: E0130 13:58:11.330207 2519 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4596862-5cca-4d1a-98a1-719edf3cebdc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:58:11.330466 kubelet[2519]: E0130 13:58:11.330232 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4596862-5cca-4d1a-98a1-719edf3cebdc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd989f4bc-5k58q" podUID="d4596862-5cca-4d1a-98a1-719edf3cebdc" Jan 30 13:58:11.337697 containerd[1465]: time="2025-01-30T13:58:11.337640974Z" level=error msg="StopPodSandbox for \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\" failed" error="failed to destroy network for sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.338307 kubelet[2519]: E0130 13:58:11.337920 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:11.338307 kubelet[2519]: E0130 13:58:11.337999 2519 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b"} Jan 30 13:58:11.338307 
kubelet[2519]: E0130 13:58:11.338035 2519 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4252853d-be36-4c01-b117-ed9b5390c193\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:58:11.338307 kubelet[2519]: E0130 13:58:11.338090 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4252853d-be36-4c01-b117-ed9b5390c193\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-578cd5cfcf-lxt7l" podUID="4252853d-be36-4c01-b117-ed9b5390c193" Jan 30 13:58:11.340520 containerd[1465]: time="2025-01-30T13:58:11.340410567Z" level=error msg="StopPodSandbox for \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\" failed" error="failed to destroy network for sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.340827 kubelet[2519]: E0130 13:58:11.340778 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:11.340890 kubelet[2519]: E0130 13:58:11.340837 2519 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb"} Jan 30 13:58:11.340925 kubelet[2519]: E0130 13:58:11.340881 2519 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3232a66a-b80f-4c5f-91a6-ce83f301a87d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:58:11.341046 kubelet[2519]: E0130 13:58:11.340921 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3232a66a-b80f-4c5f-91a6-ce83f301a87d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-578cd5cfcf-m89qp" podUID="3232a66a-b80f-4c5f-91a6-ce83f301a87d" Jan 30 13:58:11.417865 kubelet[2519]: E0130 13:58:11.417414 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:11.418223 containerd[1465]: time="2025-01-30T13:58:11.418041893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvj9k,Uid:be6e8263-d40b-423f-8220-c5dba67bce2a,Namespace:kube-system,Attempt:0,}" Jan 30 13:58:11.450496 kubelet[2519]: E0130 13:58:11.450453 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:11.451566 containerd[1465]: time="2025-01-30T13:58:11.451503409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grrxz,Uid:8df787c1-03f8-4203-9d8d-3a85d1fa0a95,Namespace:kube-system,Attempt:0,}" Jan 30 13:58:11.545528 containerd[1465]: time="2025-01-30T13:58:11.545353069Z" level=error msg="Failed to destroy network for sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.550552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe-shm.mount: Deactivated successfully. Jan 30 13:58:11.551164 containerd[1465]: time="2025-01-30T13:58:11.550632364Z" level=error msg="encountered an error cleaning up failed sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.551164 containerd[1465]: time="2025-01-30T13:58:11.550725112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvj9k,Uid:be6e8263-d40b-423f-8220-c5dba67bce2a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.552533 kubelet[2519]: E0130 13:58:11.551056 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.552533 kubelet[2519]: E0130 13:58:11.551990 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-rvj9k" Jan 30 13:58:11.552533 kubelet[2519]: E0130 13:58:11.552025 2519 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rvj9k" Jan 30 13:58:11.552740 kubelet[2519]: E0130 13:58:11.552095 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rvj9k_kube-system(be6e8263-d40b-423f-8220-c5dba67bce2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rvj9k_kube-system(be6e8263-d40b-423f-8220-c5dba67bce2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rvj9k" podUID="be6e8263-d40b-423f-8220-c5dba67bce2a" Jan 30 13:58:11.592344 containerd[1465]: time="2025-01-30T13:58:11.592239161Z" level=error msg="Failed to destroy network for sandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.593098 containerd[1465]: time="2025-01-30T13:58:11.592905771Z" level=error msg="encountered an error cleaning up failed sandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.593098 containerd[1465]: time="2025-01-30T13:58:11.593030916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grrxz,Uid:8df787c1-03f8-4203-9d8d-3a85d1fa0a95,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.598043 kubelet[2519]: E0130 13:58:11.595125 2519 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:11.598043 kubelet[2519]: E0130 13:58:11.595215 2519 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-grrxz" Jan 30 13:58:11.598043 kubelet[2519]: E0130 13:58:11.595249 2519 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-grrxz" Jan 30 13:58:11.598337 kubelet[2519]: E0130 13:58:11.595525 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-grrxz_kube-system(8df787c1-03f8-4203-9d8d-3a85d1fa0a95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-grrxz_kube-system(8df787c1-03f8-4203-9d8d-3a85d1fa0a95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-grrxz" podUID="8df787c1-03f8-4203-9d8d-3a85d1fa0a95" Jan 30 13:58:11.601840 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959-shm.mount: Deactivated successfully. Jan 30 13:58:12.256622 kubelet[2519]: I0130 13:58:12.253568 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:12.258220 containerd[1465]: time="2025-01-30T13:58:12.257720490Z" level=info msg="StopPodSandbox for \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\"" Jan 30 13:58:12.258220 containerd[1465]: time="2025-01-30T13:58:12.258014704Z" level=info msg="Ensure that sandbox 2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959 in task-service has been cleanup successfully" Jan 30 13:58:12.264705 kubelet[2519]: I0130 13:58:12.264431 2519 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:12.268578 containerd[1465]: time="2025-01-30T13:58:12.268401496Z" level=info msg="StopPodSandbox for \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\"" Jan 30 13:58:12.269217 containerd[1465]: time="2025-01-30T13:58:12.268852052Z" level=info msg="Ensure that sandbox 1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe in task-service has been cleanup successfully" Jan 30 13:58:12.348217 containerd[1465]: time="2025-01-30T13:58:12.348145760Z" level=error msg="StopPodSandbox for \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\" failed" error="failed to destroy network for sandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:12.348953 kubelet[2519]: E0130 13:58:12.348707 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:12.348953 kubelet[2519]: E0130 13:58:12.348789 2519 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959"} Jan 30 13:58:12.348953 kubelet[2519]: E0130 13:58:12.348845 2519 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8df787c1-03f8-4203-9d8d-3a85d1fa0a95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:58:12.348953 kubelet[2519]: E0130 13:58:12.348882 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8df787c1-03f8-4203-9d8d-3a85d1fa0a95\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-grrxz" podUID="8df787c1-03f8-4203-9d8d-3a85d1fa0a95" Jan 30 13:58:12.351641 containerd[1465]: time="2025-01-30T13:58:12.351176652Z" level=error msg="StopPodSandbox for \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\" failed" error="failed to destroy network for sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:58:12.351754 kubelet[2519]: E0130 13:58:12.351453 2519 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:12.351754 kubelet[2519]: E0130 13:58:12.351509 2519 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe"} Jan 30 13:58:12.351754 kubelet[2519]: E0130 13:58:12.351578 2519 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be6e8263-d40b-423f-8220-c5dba67bce2a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jan 30 13:58:12.351754 kubelet[2519]: E0130 13:58:12.351602 2519 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be6e8263-d40b-423f-8220-c5dba67bce2a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rvj9k" podUID="be6e8263-d40b-423f-8220-c5dba67bce2a" Jan 30 13:58:16.288204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2918981281.mount: Deactivated successfully. Jan 30 13:58:16.343599 containerd[1465]: time="2025-01-30T13:58:16.343520256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:16.348177 containerd[1465]: time="2025-01-30T13:58:16.347232641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:58:16.355917 containerd[1465]: time="2025-01-30T13:58:16.355847556Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:16.361016 containerd[1465]: time="2025-01-30T13:58:16.360950282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:16.362542 containerd[1465]: time="2025-01-30T13:58:16.362461812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.131510452s" Jan 30 13:58:16.362754 containerd[1465]: time="2025-01-30T13:58:16.362726835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:58:16.393065 containerd[1465]: time="2025-01-30T13:58:16.392994548Z" level=info msg="CreateContainer within sandbox \"c008237cdf75f6a4b312c70628b8a132a4ae9183f6558e1b5638c765aa37b89e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:58:16.483743 containerd[1465]: time="2025-01-30T13:58:16.483683040Z" level=info msg="CreateContainer within sandbox \"c008237cdf75f6a4b312c70628b8a132a4ae9183f6558e1b5638c765aa37b89e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4e386dd34fb01c6cd01df400ab609b34b84ca7ee54b51cd63c6f90720b810247\"" Jan 30 13:58:16.485624 containerd[1465]: time="2025-01-30T13:58:16.485567444Z" level=info msg="StartContainer for \"4e386dd34fb01c6cd01df400ab609b34b84ca7ee54b51cd63c6f90720b810247\"" Jan 30 13:58:16.622158 systemd[1]: Started cri-containerd-4e386dd34fb01c6cd01df400ab609b34b84ca7ee54b51cd63c6f90720b810247.scope - libcontainer container 4e386dd34fb01c6cd01df400ab609b34b84ca7ee54b51cd63c6f90720b810247. 
Jan 30 13:58:16.750321 containerd[1465]: time="2025-01-30T13:58:16.750245700Z" level=info msg="StartContainer for \"4e386dd34fb01c6cd01df400ab609b34b84ca7ee54b51cd63c6f90720b810247\" returns successfully" Jan 30 13:58:16.791625 kubelet[2519]: I0130 13:58:16.791593 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:58:16.801754 kubelet[2519]: E0130 13:58:16.801422 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:16.837529 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:58:16.838860 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 13:58:17.304886 kubelet[2519]: E0130 13:58:17.303528 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:17.306616 kubelet[2519]: E0130 13:58:17.306583 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:17.351114 kubelet[2519]: I0130 13:58:17.351040 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sw4gr" podStartSLOduration=2.279998194 podStartE2EDuration="18.340201311s" podCreationTimestamp="2025-01-30 13:57:59 +0000 UTC" firstStartedPulling="2025-01-30 13:58:00.303837314 +0000 UTC m=+13.530287424" lastFinishedPulling="2025-01-30 13:58:16.364040415 +0000 UTC m=+29.590490541" observedRunningTime="2025-01-30 13:58:17.340198971 +0000 UTC m=+30.566649125" watchObservedRunningTime="2025-01-30 13:58:17.340201311 +0000 UTC m=+30.566651444" Jan 30 13:58:18.305525 kubelet[2519]: I0130 13:58:18.305415 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:58:18.306876 kubelet[2519]: E0130 13:58:18.306758 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:18.862129 systemd[1]: run-containerd-runc-k8s.io-4e386dd34fb01c6cd01df400ab609b34b84ca7ee54b51cd63c6f90720b810247-runc.wbVJMb.mount: Deactivated successfully. 
Jan 30 13:58:19.036119 kernel: bpftool[3781]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:58:19.308384 kubelet[2519]: E0130 13:58:19.308326 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:19.396371 systemd-networkd[1369]: vxlan.calico: Link UP Jan 30 13:58:19.396381 systemd-networkd[1369]: vxlan.calico: Gained carrier Jan 30 13:58:20.544242 systemd-networkd[1369]: vxlan.calico: Gained IPv6LL Jan 30 13:58:22.994285 containerd[1465]: time="2025-01-30T13:58:22.993884853Z" level=info msg="StopPodSandbox for \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\"" Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.089 [INFO][3891] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.093 [INFO][3891] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" iface="eth0" netns="/var/run/netns/cni-5aa855a6-ff52-af9e-49fa-07dbf1c96075" Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.093 [INFO][3891] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" iface="eth0" netns="/var/run/netns/cni-5aa855a6-ff52-af9e-49fa-07dbf1c96075" Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.096 [INFO][3891] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" iface="eth0" netns="/var/run/netns/cni-5aa855a6-ff52-af9e-49fa-07dbf1c96075" Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.097 [INFO][3891] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.097 [INFO][3891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.446 [INFO][3897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" HandleID="k8s-pod-network.9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.449 [INFO][3897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.450 [INFO][3897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.465 [WARNING][3897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" HandleID="k8s-pod-network.9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.465 [INFO][3897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" HandleID="k8s-pod-network.9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.468 [INFO][3897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:23.473186 containerd[1465]: 2025-01-30 13:58:23.470 [INFO][3891] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:23.476205 containerd[1465]: time="2025-01-30T13:58:23.473983785Z" level=info msg="TearDown network for sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\" successfully" Jan 30 13:58:23.476205 containerd[1465]: time="2025-01-30T13:58:23.475112915Z" level=info msg="StopPodSandbox for \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\" returns successfully" Jan 30 13:58:23.478380 systemd[1]: run-netns-cni\x2d5aa855a6\x2dff52\x2daf9e\x2d49fa\x2d07dbf1c96075.mount: Deactivated successfully. Jan 30 13:58:23.517721 containerd[1465]: time="2025-01-30T13:58:23.517610345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578cd5cfcf-m89qp,Uid:3232a66a-b80f-4c5f-91a6-ce83f301a87d,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:58:23.780414 systemd-networkd[1369]: cali58b20c1e5ea: Link UP Jan 30 13:58:23.781651 systemd-networkd[1369]: cali58b20c1e5ea: Gained carrier Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.631 [INFO][3905] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0 calico-apiserver-578cd5cfcf- calico-apiserver 3232a66a-b80f-4c5f-91a6-ce83f301a87d 768 0 2025-01-30 13:57:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:578cd5cfcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-2-c6825061e7 calico-apiserver-578cd5cfcf-m89qp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali58b20c1e5ea [] []}} ContainerID="d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-m89qp" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.631 [INFO][3905] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-m89qp" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.688 [INFO][3915] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" HandleID="k8s-pod-network.d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.710 [INFO][3915] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" HandleID="k8s-pod-network.d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319c10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-2-c6825061e7", "pod":"calico-apiserver-578cd5cfcf-m89qp", "timestamp":"2025-01-30 13:58:23.688889159 +0000 UTC"}, Hostname:"ci-4081.3.0-2-c6825061e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.710 [INFO][3915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.711 [INFO][3915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.711 [INFO][3915] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-2-c6825061e7' Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.716 [INFO][3915] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.730 [INFO][3915] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.740 [INFO][3915] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.743 [INFO][3915] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.747 [INFO][3915] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.748 [INFO][3915] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.751 [INFO][3915] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210 Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.759 [INFO][3915] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.767 [INFO][3915] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.1/26] block=192.168.110.0/26 
handle="k8s-pod-network.d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.768 [INFO][3915] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.1/26] handle="k8s-pod-network.d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.768 [INFO][3915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:23.816613 containerd[1465]: 2025-01-30 13:58:23.768 [INFO][3915] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.1/26] IPv6=[] ContainerID="d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" HandleID="k8s-pod-network.d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:23.817700 containerd[1465]: 2025-01-30 13:58:23.772 [INFO][3905] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-m89qp" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0", GenerateName:"calico-apiserver-578cd5cfcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"3232a66a-b80f-4c5f-91a6-ce83f301a87d", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578cd5cfcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"", Pod:"calico-apiserver-578cd5cfcf-m89qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58b20c1e5ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:23.817700 containerd[1465]: 2025-01-30 13:58:23.772 [INFO][3905] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.1/32] ContainerID="d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-m89qp" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:23.817700 containerd[1465]: 2025-01-30 13:58:23.773 [INFO][3905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58b20c1e5ea ContainerID="d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-m89qp" 
WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:23.817700 containerd[1465]: 2025-01-30 13:58:23.782 [INFO][3905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-m89qp" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:23.817700 containerd[1465]: 2025-01-30 13:58:23.783 [INFO][3905] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-m89qp" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0", GenerateName:"calico-apiserver-578cd5cfcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"3232a66a-b80f-4c5f-91a6-ce83f301a87d", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578cd5cfcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210", Pod:"calico-apiserver-578cd5cfcf-m89qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58b20c1e5ea", MAC:"ae:f9:75:12:5b:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:23.817700 containerd[1465]: 2025-01-30 13:58:23.809 [INFO][3905] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-m89qp" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:23.862544 containerd[1465]: time="2025-01-30T13:58:23.861551285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:58:23.862544 containerd[1465]: time="2025-01-30T13:58:23.861631823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:58:23.862544 containerd[1465]: time="2025-01-30T13:58:23.861653695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:23.862544 containerd[1465]: time="2025-01-30T13:58:23.861863186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:23.905293 systemd[1]: Started cri-containerd-d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210.scope - libcontainer container d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210. Jan 30 13:58:23.980707 containerd[1465]: time="2025-01-30T13:58:23.980652262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578cd5cfcf-m89qp,Uid:3232a66a-b80f-4c5f-91a6-ce83f301a87d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210\"" Jan 30 13:58:23.993344 containerd[1465]: time="2025-01-30T13:58:23.992833799Z" level=info msg="StopPodSandbox for \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\"" Jan 30 13:58:23.995101 containerd[1465]: time="2025-01-30T13:58:23.993179237Z" level=info msg="StopPodSandbox for \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\"" Jan 30 13:58:24.063342 containerd[1465]: time="2025-01-30T13:58:24.061895743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.187 [INFO][4007] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.188 [INFO][4007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" iface="eth0" netns="/var/run/netns/cni-ed24a0ea-08e8-94cc-f7ad-0e97bd0117f3" Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.188 [INFO][4007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" iface="eth0" netns="/var/run/netns/cni-ed24a0ea-08e8-94cc-f7ad-0e97bd0117f3" Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.188 [INFO][4007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" iface="eth0" netns="/var/run/netns/cni-ed24a0ea-08e8-94cc-f7ad-0e97bd0117f3" Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.188 [INFO][4007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.188 [INFO][4007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.235 [INFO][4020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" HandleID="k8s-pod-network.2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.235 [INFO][4020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.235 [INFO][4020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.249 [WARNING][4020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" HandleID="k8s-pod-network.2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.250 [INFO][4020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" HandleID="k8s-pod-network.2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.258 [INFO][4020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:24.266379 containerd[1465]: 2025-01-30 13:58:24.264 [INFO][4007] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:24.268734 containerd[1465]: time="2025-01-30T13:58:24.268688807Z" level=info msg="TearDown network for sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\" successfully" Jan 30 13:58:24.268856 containerd[1465]: time="2025-01-30T13:58:24.268839487Z" level=info msg="StopPodSandbox for \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\" returns successfully" Jan 30 13:58:24.270196 containerd[1465]: time="2025-01-30T13:58:24.270158895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd989f4bc-5k58q,Uid:d4596862-5cca-4d1a-98a1-719edf3cebdc,Namespace:calico-system,Attempt:1,}" Jan 30 13:58:24.273321 systemd[1]: run-netns-cni\x2ded24a0ea\x2d08e8\x2d94cc\x2df7ad\x2d0e97bd0117f3.mount: Deactivated successfully. Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.190 [INFO][4011] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.190 [INFO][4011] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" iface="eth0" netns="/var/run/netns/cni-39b12605-b1c8-1ad0-3735-23e035db0841" Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.190 [INFO][4011] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" iface="eth0" netns="/var/run/netns/cni-39b12605-b1c8-1ad0-3735-23e035db0841" Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.194 [INFO][4011] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" iface="eth0" netns="/var/run/netns/cni-39b12605-b1c8-1ad0-3735-23e035db0841" Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.194 [INFO][4011] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.194 [INFO][4011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.243 [INFO][4021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" HandleID="k8s-pod-network.2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.243 [INFO][4021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.258 [INFO][4021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.277 [WARNING][4021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" HandleID="k8s-pod-network.2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.277 [INFO][4021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" HandleID="k8s-pod-network.2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.286 [INFO][4021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:24.292146 containerd[1465]: 2025-01-30 13:58:24.289 [INFO][4011] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:24.294053 containerd[1465]: time="2025-01-30T13:58:24.293126782Z" level=info msg="TearDown network for sandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\" successfully" Jan 30 13:58:24.294053 containerd[1465]: time="2025-01-30T13:58:24.293203012Z" level=info msg="StopPodSandbox for \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\" returns successfully" Jan 30 13:58:24.294269 kubelet[2519]: E0130 13:58:24.293581 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:24.295577 containerd[1465]: time="2025-01-30T13:58:24.294509344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grrxz,Uid:8df787c1-03f8-4203-9d8d-3a85d1fa0a95,Namespace:kube-system,Attempt:1,}" Jan 30 13:58:24.500840 systemd[1]: run-netns-cni\x2d39b12605\x2db1c8\x2d1ad0\x2d3735\x2d23e035db0841.mount: Deactivated successfully. 
Jan 30 13:58:24.580740 systemd-networkd[1369]: caliace25c4cea4: Link UP Jan 30 13:58:24.581666 systemd-networkd[1369]: caliace25c4cea4: Gained carrier Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.381 [INFO][4034] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0 calico-kube-controllers-cd989f4bc- calico-system d4596862-5cca-4d1a-98a1-719edf3cebdc 778 0 2025-01-30 13:58:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cd989f4bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-2-c6825061e7 calico-kube-controllers-cd989f4bc-5k58q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliace25c4cea4 [] []}} ContainerID="b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" Namespace="calico-system" Pod="calico-kube-controllers-cd989f4bc-5k58q" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.381 [INFO][4034] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" Namespace="calico-system" Pod="calico-kube-controllers-cd989f4bc-5k58q" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.457 [INFO][4060] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" HandleID="k8s-pod-network.b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.481 [INFO][4060] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" HandleID="k8s-pod-network.b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000512e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-2-c6825061e7", "pod":"calico-kube-controllers-cd989f4bc-5k58q", "timestamp":"2025-01-30 13:58:24.457114982 +0000 UTC"}, Hostname:"ci-4081.3.0-2-c6825061e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.482 [INFO][4060] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.482 [INFO][4060] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.482 [INFO][4060] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-2-c6825061e7' Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.511 [INFO][4060] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.523 [INFO][4060] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.537 [INFO][4060] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.543 [INFO][4060] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.549 [INFO][4060] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.549 [INFO][4060] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.555 [INFO][4060] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.562 [INFO][4060] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.571 [INFO][4060] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.2/26] block=192.168.110.0/26 handle="k8s-pod-network.b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.571 [INFO][4060] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.2/26] handle="k8s-pod-network.b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.571 [INFO][4060] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:58:24.625130 containerd[1465]: 2025-01-30 13:58:24.571 [INFO][4060] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.2/26] IPv6=[] ContainerID="b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" HandleID="k8s-pod-network.b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:24.629143 containerd[1465]: 2025-01-30 13:58:24.575 [INFO][4034] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" Namespace="calico-system" Pod="calico-kube-controllers-cd989f4bc-5k58q" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0", GenerateName:"calico-kube-controllers-cd989f4bc-", Namespace:"calico-system", SelfLink:"", UID:"d4596862-5cca-4d1a-98a1-719edf3cebdc", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd989f4bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"", Pod:"calico-kube-controllers-cd989f4bc-5k58q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliace25c4cea4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:24.629143 containerd[1465]: 2025-01-30 13:58:24.575 [INFO][4034] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.2/32] ContainerID="b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" Namespace="calico-system" Pod="calico-kube-controllers-cd989f4bc-5k58q" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:24.629143 containerd[1465]: 2025-01-30 13:58:24.575 [INFO][4034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliace25c4cea4 ContainerID="b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" Namespace="calico-system" Pod="calico-kube-controllers-cd989f4bc-5k58q" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:24.629143 containerd[1465]: 2025-01-30 13:58:24.581 [INFO][4034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" Namespace="calico-system" Pod="calico-kube-controllers-cd989f4bc-5k58q" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:24.629143 containerd[1465]: 
2025-01-30 13:58:24.582 [INFO][4034] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" Namespace="calico-system" Pod="calico-kube-controllers-cd989f4bc-5k58q" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0", GenerateName:"calico-kube-controllers-cd989f4bc-", Namespace:"calico-system", SelfLink:"", UID:"d4596862-5cca-4d1a-98a1-719edf3cebdc", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd989f4bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b", Pod:"calico-kube-controllers-cd989f4bc-5k58q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliace25c4cea4", MAC:"5e:7a:d0:18:91:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:24.629143 containerd[1465]: 2025-01-30 13:58:24.621 [INFO][4034] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b" Namespace="calico-system" Pod="calico-kube-controllers-cd989f4bc-5k58q" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:24.688302 containerd[1465]: time="2025-01-30T13:58:24.686757108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:58:24.688302 containerd[1465]: time="2025-01-30T13:58:24.686861245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:58:24.688302 containerd[1465]: time="2025-01-30T13:58:24.686917589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:24.692092 containerd[1465]: time="2025-01-30T13:58:24.689874240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:24.694674 systemd-networkd[1369]: calid354575a118: Link UP Jan 30 13:58:24.695922 systemd-networkd[1369]: calid354575a118: Gained carrier Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.417 [INFO][4047] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0 coredns-668d6bf9bc- kube-system 8df787c1-03f8-4203-9d8d-3a85d1fa0a95 779 0 2025-01-30 13:57:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-2-c6825061e7 coredns-668d6bf9bc-grrxz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid354575a118 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-grrxz" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.418 [INFO][4047] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-grrxz" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.490 [INFO][4065] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" HandleID="k8s-pod-network.142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.518 [INFO][4065] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" HandleID="k8s-pod-network.142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003054e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-2-c6825061e7", "pod":"coredns-668d6bf9bc-grrxz", "timestamp":"2025-01-30 13:58:24.490142947 +0000 UTC"}, Hostname:"ci-4081.3.0-2-c6825061e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.518 [INFO][4065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.571 [INFO][4065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.571 [INFO][4065] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-2-c6825061e7' Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.600 [INFO][4065] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.611 [INFO][4065] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.639 [INFO][4065] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.646 [INFO][4065] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.653 [INFO][4065] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.653 [INFO][4065] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.657 [INFO][4065] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7 Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.666 [INFO][4065] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.682 [INFO][4065] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.3/26] block=192.168.110.0/26 handle="k8s-pod-network.142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.682 [INFO][4065] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.3/26] handle="k8s-pod-network.142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.682 [INFO][4065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:58:24.737468 containerd[1465]: 2025-01-30 13:58:24.682 [INFO][4065] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.3/26] IPv6=[] ContainerID="142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" HandleID="k8s-pod-network.142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:24.738980 containerd[1465]: 2025-01-30 13:58:24.687 [INFO][4047] cni-plugin/k8s.go 386: Populated endpoint ContainerID="142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-grrxz" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8df787c1-03f8-4203-9d8d-3a85d1fa0a95", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"", Pod:"coredns-668d6bf9bc-grrxz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid354575a118", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:24.738980 containerd[1465]: 2025-01-30 13:58:24.687 [INFO][4047] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.3/32] ContainerID="142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-grrxz" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:24.738980 containerd[1465]: 2025-01-30 13:58:24.687 [INFO][4047] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid354575a118 ContainerID="142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-grrxz" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:24.738980 containerd[1465]: 2025-01-30 13:58:24.696 [INFO][4047] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-grrxz" 
WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:24.738980 containerd[1465]: 2025-01-30 13:58:24.697 [INFO][4047] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-grrxz" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8df787c1-03f8-4203-9d8d-3a85d1fa0a95", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7", Pod:"coredns-668d6bf9bc-grrxz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid354575a118", MAC:"3e:a6:90:19:d6:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:24.738980 containerd[1465]: 2025-01-30 13:58:24.732 [INFO][4047] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-grrxz" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:24.763326 systemd[1]: Started cri-containerd-b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b.scope - libcontainer container b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b. Jan 30 13:58:24.827928 containerd[1465]: time="2025-01-30T13:58:24.824989970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:58:24.827928 containerd[1465]: time="2025-01-30T13:58:24.825073290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:58:24.827928 containerd[1465]: time="2025-01-30T13:58:24.825089538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:24.827928 containerd[1465]: time="2025-01-30T13:58:24.825213834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:24.837089 systemd-networkd[1369]: cali58b20c1e5ea: Gained IPv6LL Jan 30 13:58:24.884280 systemd[1]: Started cri-containerd-142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7.scope - libcontainer container 142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7. Jan 30 13:58:24.937633 containerd[1465]: time="2025-01-30T13:58:24.937575097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd989f4bc-5k58q,Uid:d4596862-5cca-4d1a-98a1-719edf3cebdc,Namespace:calico-system,Attempt:1,} returns sandbox id \"b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b\"" Jan 30 13:58:24.969277 containerd[1465]: time="2025-01-30T13:58:24.969201347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grrxz,Uid:8df787c1-03f8-4203-9d8d-3a85d1fa0a95,Namespace:kube-system,Attempt:1,} returns sandbox id \"142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7\"" Jan 30 13:58:24.970657 kubelet[2519]: E0130 13:58:24.970584 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:24.996198 containerd[1465]: time="2025-01-30T13:58:24.995802962Z" level=info msg="StopPodSandbox for \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\"" Jan 30 13:58:24.999669 containerd[1465]: time="2025-01-30T13:58:24.996371688Z" level=info msg="CreateContainer within sandbox \"142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:58:25.055007 containerd[1465]: time="2025-01-30T13:58:25.054846887Z" level=info msg="CreateContainer within sandbox \"142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0f7d2a71d12b882ed13dd569acc86a45f5d52ac2b7c44b4a7667900f933b628\"" Jan 30 13:58:25.056918 containerd[1465]: time="2025-01-30T13:58:25.056769139Z" level=info msg="StartContainer for \"b0f7d2a71d12b882ed13dd569acc86a45f5d52ac2b7c44b4a7667900f933b628\"" Jan 30 13:58:25.118369 systemd[1]: Started cri-containerd-b0f7d2a71d12b882ed13dd569acc86a45f5d52ac2b7c44b4a7667900f933b628.scope - libcontainer container b0f7d2a71d12b882ed13dd569acc86a45f5d52ac2b7c44b4a7667900f933b628. Jan 30 13:58:25.195911 containerd[1465]: time="2025-01-30T13:58:25.195847277Z" level=info msg="StartContainer for \"b0f7d2a71d12b882ed13dd569acc86a45f5d52ac2b7c44b4a7667900f933b628\" returns successfully" Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.110 [INFO][4198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.110 [INFO][4198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" iface="eth0" netns="/var/run/netns/cni-e04d9a91-66be-d385-408a-e73e95120e60" Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.111 [INFO][4198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" iface="eth0" netns="/var/run/netns/cni-e04d9a91-66be-d385-408a-e73e95120e60" Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.113 [INFO][4198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" iface="eth0" netns="/var/run/netns/cni-e04d9a91-66be-d385-408a-e73e95120e60" Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.114 [INFO][4198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.114 [INFO][4198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.168 [INFO][4220] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" HandleID="k8s-pod-network.9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.168 [INFO][4220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.169 [INFO][4220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.180 [WARNING][4220] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" HandleID="k8s-pod-network.9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.181 [INFO][4220] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" HandleID="k8s-pod-network.9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.186 [INFO][4220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:25.200897 containerd[1465]: 2025-01-30 13:58:25.196 [INFO][4198] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:25.203830 containerd[1465]: time="2025-01-30T13:58:25.201160673Z" level=info msg="TearDown network for sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\" successfully" Jan 30 13:58:25.203830 containerd[1465]: time="2025-01-30T13:58:25.201223067Z" level=info msg="StopPodSandbox for \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\" returns successfully" Jan 30 13:58:25.203830 containerd[1465]: time="2025-01-30T13:58:25.202984962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rg6b9,Uid:e4fd20cc-1ebf-4c36-acf8-aae4903f42f0,Namespace:calico-system,Attempt:1,}" Jan 30 13:58:25.359582 kubelet[2519]: E0130 13:58:25.357896 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:25.496281 systemd[1]: run-netns-cni\x2de04d9a91\x2d66be\x2dd385\x2d408a\x2de73e95120e60.mount: Deactivated successfully. Jan 30 13:58:25.587140 systemd-networkd[1369]: cali029fcbdc52a: Link UP Jan 30 13:58:25.588062 systemd-networkd[1369]: cali029fcbdc52a: Gained carrier Jan 30 13:58:25.625295 kubelet[2519]: I0130 13:58:25.624152 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-grrxz" podStartSLOduration=33.624106703 podStartE2EDuration="33.624106703s" podCreationTimestamp="2025-01-30 13:57:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:58:25.422211389 +0000 UTC m=+38.648661541" watchObservedRunningTime="2025-01-30 13:58:25.624106703 +0000 UTC m=+38.850556831" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.316 [INFO][4240] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0 csi-node-driver- calico-system e4fd20cc-1ebf-4c36-acf8-aae4903f42f0 794 0 2025-01-30 13:57:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-2-c6825061e7 csi-node-driver-rg6b9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali029fcbdc52a [] []}} ContainerID="d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" Namespace="calico-system" Pod="csi-node-driver-rg6b9" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.316 [INFO][4240] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" Namespace="calico-system" Pod="csi-node-driver-rg6b9" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.386 [INFO][4254] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" HandleID="k8s-pod-network.d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" 
Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.507 [INFO][4254] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" HandleID="k8s-pod-network.d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-2-c6825061e7", "pod":"csi-node-driver-rg6b9", "timestamp":"2025-01-30 13:58:25.386041232 +0000 UTC"}, Hostname:"ci-4081.3.0-2-c6825061e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.507 [INFO][4254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.508 [INFO][4254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.508 [INFO][4254] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-2-c6825061e7' Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.512 [INFO][4254] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.518 [INFO][4254] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.528 [INFO][4254] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.533 [INFO][4254] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.541 [INFO][4254] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.541 [INFO][4254] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.548 [INFO][4254] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3 Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.563 [INFO][4254] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.576 [INFO][4254] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.4/26] block=192.168.110.0/26 handle="k8s-pod-network.d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.576 [INFO][4254] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.4/26] 
handle="k8s-pod-network.d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.576 [INFO][4254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:25.628970 containerd[1465]: 2025-01-30 13:58:25.576 [INFO][4254] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.4/26] IPv6=[] ContainerID="d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" HandleID="k8s-pod-network.d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:25.632514 containerd[1465]: 2025-01-30 13:58:25.581 [INFO][4240] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" Namespace="calico-system" Pod="csi-node-driver-rg6b9" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e4fd20cc-1ebf-4c36-acf8-aae4903f42f0", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"", Pod:"csi-node-driver-rg6b9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali029fcbdc52a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:25.632514 containerd[1465]: 2025-01-30 13:58:25.582 [INFO][4240] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.4/32] ContainerID="d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" Namespace="calico-system" Pod="csi-node-driver-rg6b9" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:25.632514 containerd[1465]: 2025-01-30 13:58:25.582 [INFO][4240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali029fcbdc52a ContainerID="d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" Namespace="calico-system" Pod="csi-node-driver-rg6b9" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:25.632514 containerd[1465]: 2025-01-30 13:58:25.588 [INFO][4240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" Namespace="calico-system" Pod="csi-node-driver-rg6b9" 
WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:25.632514 containerd[1465]: 2025-01-30 13:58:25.590 [INFO][4240] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" Namespace="calico-system" Pod="csi-node-driver-rg6b9" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e4fd20cc-1ebf-4c36-acf8-aae4903f42f0", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3", Pod:"csi-node-driver-rg6b9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali029fcbdc52a", MAC:"7a:23:9c:32:19:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:25.632514 containerd[1465]: 2025-01-30 13:58:25.622 [INFO][4240] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3" Namespace="calico-system" Pod="csi-node-driver-rg6b9" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:25.708727 containerd[1465]: time="2025-01-30T13:58:25.706394351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:58:25.709261 containerd[1465]: time="2025-01-30T13:58:25.708673941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:58:25.709261 containerd[1465]: time="2025-01-30T13:58:25.708711823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:25.709261 containerd[1465]: time="2025-01-30T13:58:25.708858020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:25.761758 systemd[1]: Started cri-containerd-d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3.scope - libcontainer container d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3. 
Jan 30 13:58:25.868407 containerd[1465]: time="2025-01-30T13:58:25.867791296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rg6b9,Uid:e4fd20cc-1ebf-4c36-acf8-aae4903f42f0,Namespace:calico-system,Attempt:1,} returns sandbox id \"d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3\"" Jan 30 13:58:25.922079 systemd-networkd[1369]: caliace25c4cea4: Gained IPv6LL Jan 30 13:58:25.993020 containerd[1465]: time="2025-01-30T13:58:25.992790615Z" level=info msg="StopPodSandbox for \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\"" Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.099 [INFO][4339] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.100 [INFO][4339] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" iface="eth0" netns="/var/run/netns/cni-ed025a3c-447a-0dc1-bbdc-7bb6983f114d" Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.100 [INFO][4339] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" iface="eth0" netns="/var/run/netns/cni-ed025a3c-447a-0dc1-bbdc-7bb6983f114d" Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.101 [INFO][4339] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" iface="eth0" netns="/var/run/netns/cni-ed025a3c-447a-0dc1-bbdc-7bb6983f114d" Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.101 [INFO][4339] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.101 [INFO][4339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.153 [INFO][4345] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" HandleID="k8s-pod-network.1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.153 [INFO][4345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.153 [INFO][4345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.162 [WARNING][4345] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" HandleID="k8s-pod-network.1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.163 [INFO][4345] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" HandleID="k8s-pod-network.1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.166 [INFO][4345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:26.172778 containerd[1465]: 2025-01-30 13:58:26.169 [INFO][4339] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:26.178226 containerd[1465]: time="2025-01-30T13:58:26.177772429Z" level=info msg="TearDown network for sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\" successfully" Jan 30 13:58:26.178226 containerd[1465]: time="2025-01-30T13:58:26.177866467Z" level=info msg="StopPodSandbox for \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\" returns successfully" Jan 30 13:58:26.180305 containerd[1465]: time="2025-01-30T13:58:26.178883814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvj9k,Uid:be6e8263-d40b-423f-8220-c5dba67bce2a,Namespace:kube-system,Attempt:1,}" Jan 30 13:58:26.180417 kubelet[2519]: E0130 13:58:26.178395 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:26.181368 systemd[1]: run-netns-cni\x2ded025a3c\x2d447a\x2d0dc1\x2dbbdc\x2d7bb6983f114d.mount: Deactivated successfully. 
Jan 30 13:58:26.364713 kubelet[2519]: E0130 13:58:26.364671 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:26.368673 systemd-networkd[1369]: calid354575a118: Gained IPv6LL Jan 30 13:58:26.596451 systemd-networkd[1369]: calicbe73c11030: Link UP Jan 30 13:58:26.598214 systemd-networkd[1369]: calicbe73c11030: Gained carrier Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.303 [INFO][4352] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0 coredns-668d6bf9bc- kube-system be6e8263-d40b-423f-8220-c5dba67bce2a 808 0 2025-01-30 13:57:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-2-c6825061e7 coredns-668d6bf9bc-rvj9k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicbe73c11030 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvj9k" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.303 [INFO][4352] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvj9k" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.387 [INFO][4365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" HandleID="k8s-pod-network.e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.513 [INFO][4365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" HandleID="k8s-pod-network.e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031bbf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-2-c6825061e7", "pod":"coredns-668d6bf9bc-rvj9k", "timestamp":"2025-01-30 13:58:26.387296922 +0000 UTC"}, Hostname:"ci-4081.3.0-2-c6825061e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.514 [INFO][4365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.514 [INFO][4365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.514 [INFO][4365] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-2-c6825061e7' Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.519 [INFO][4365] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.528 [INFO][4365] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.539 [INFO][4365] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.543 [INFO][4365] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.551 [INFO][4365] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.552 [INFO][4365] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.557 [INFO][4365] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7 Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.567 [INFO][4365] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.587 [INFO][4365] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.5/26] block=192.168.110.0/26 handle="k8s-pod-network.e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.587 [INFO][4365] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.5/26] handle="k8s-pod-network.e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.587 [INFO][4365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
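The [INFO][4365] ipam lines above are Calico's host-affine IPAM at work: node ci-4081.3.0-2-c6825061e7 holds an affinity for the block 192.168.110.0/26, loads it, and claims the next free address, which comes out as 192.168.110.5 for the re-created coredns pod (.1 through .4 already belong to earlier workloads on this node). As a simplified illustration of that "first free address in the affine block" step, not Calico's actual data model, which tracks allocations per block with handles, reservations, and pool settings:

package main

import (
    "fmt"
    "net"
)

// nextFree returns the first address in cidr that is not already used.
func nextFree(cidr string, used map[string]bool) (net.IP, error) {
    _, ipnet, err := net.ParseCIDR(cidr)
    if err != nil {
        return nil, err
    }
    for a := ipnet.IP; ipnet.Contains(a); a = inc(a) {
        if !used[a.String()] {
            return a, nil
        }
    }
    return nil, fmt.Errorf("block %s is full", cidr)
}

// inc returns a copy of ip advanced by one.
func inc(ip net.IP) net.IP {
    out := make(net.IP, len(ip))
    copy(out, ip)
    for i := len(out) - 1; i >= 0; i-- {
        out[i]++
        if out[i] != 0 {
            break
        }
    }
    return out
}

func main() {
    used := map[string]bool{
        "192.168.110.0": true, "192.168.110.1": true, "192.168.110.2": true,
        "192.168.110.3": true, "192.168.110.4": true,
    }
    ip, _ := nextFree("192.168.110.0/26", used)
    fmt.Println(ip) // 192.168.110.5, as claimed in the log above
}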
Jan 30 13:58:26.629698 containerd[1465]: 2025-01-30 13:58:26.587 [INFO][4365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.5/26] IPv6=[] ContainerID="e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" HandleID="k8s-pod-network.e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:26.634478 containerd[1465]: 2025-01-30 13:58:26.590 [INFO][4352] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvj9k" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be6e8263-d40b-423f-8220-c5dba67bce2a", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"", Pod:"coredns-668d6bf9bc-rvj9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbe73c11030", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:26.634478 containerd[1465]: 2025-01-30 13:58:26.590 [INFO][4352] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.5/32] ContainerID="e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvj9k" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:26.634478 containerd[1465]: 2025-01-30 13:58:26.591 [INFO][4352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbe73c11030 ContainerID="e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvj9k" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:26.634478 containerd[1465]: 2025-01-30 13:58:26.598 [INFO][4352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvj9k" 
WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:26.634478 containerd[1465]: 2025-01-30 13:58:26.599 [INFO][4352] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvj9k" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be6e8263-d40b-423f-8220-c5dba67bce2a", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7", Pod:"coredns-668d6bf9bc-rvj9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbe73c11030", MAC:"1a:e4:68:16:36:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:26.634478 containerd[1465]: 2025-01-30 13:58:26.618 [INFO][4352] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-rvj9k" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:26.714982 containerd[1465]: time="2025-01-30T13:58:26.714265781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:58:26.714982 containerd[1465]: time="2025-01-30T13:58:26.714339307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:58:26.714982 containerd[1465]: time="2025-01-30T13:58:26.714379158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:26.714982 containerd[1465]: time="2025-01-30T13:58:26.714541150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:26.776272 systemd[1]: Started cri-containerd-e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7.scope - libcontainer container e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7. Jan 30 13:58:26.854577 containerd[1465]: time="2025-01-30T13:58:26.854173038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rvj9k,Uid:be6e8263-d40b-423f-8220-c5dba67bce2a,Namespace:kube-system,Attempt:1,} returns sandbox id \"e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7\"" Jan 30 13:58:26.857186 kubelet[2519]: E0130 13:58:26.857133 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:26.863411 containerd[1465]: time="2025-01-30T13:58:26.863346203Z" level=info msg="CreateContainer within sandbox \"e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:58:26.913601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3519952244.mount: Deactivated successfully. Jan 30 13:58:26.929471 containerd[1465]: time="2025-01-30T13:58:26.929146879Z" level=info msg="CreateContainer within sandbox \"e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b2014bd264f561b81f40ef63e48d732239ea9bba950419837759ab1c1fdf12a\"" Jan 30 13:58:26.932995 containerd[1465]: time="2025-01-30T13:58:26.931562652Z" level=info msg="StartContainer for \"1b2014bd264f561b81f40ef63e48d732239ea9bba950419837759ab1c1fdf12a\"" Jan 30 13:58:27.026234 systemd[1]: Started cri-containerd-1b2014bd264f561b81f40ef63e48d732239ea9bba950419837759ab1c1fdf12a.scope - libcontainer container 1b2014bd264f561b81f40ef63e48d732239ea9bba950419837759ab1c1fdf12a. 
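This run shows the CRI sequence for the re-created coredns pod end to end: RunPodSandbox returns the new sandbox ID e81184fe…, CreateContainer builds the coredns container (1b2014bd…) inside that sandbox, StartContainer launches it, and systemd tracks each runc shim as a cri-containerd-<id>.scope unit. The sketch below walks the analogous create-then-start path against the containerd 1.x Go client directly, following containerd's published getting-started example rather than the CRI API; the image reference and IDs are placeholders, not values from this log, and the CRI plugin layers sandbox bookkeeping and the CNI calls seen above on top of this same container/task lifecycle:

package main

import (
    "context"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/cio"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Kubernetes-managed containers live in containerd's "k8s.io" namespace.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    image, err := client.Pull(ctx, "registry.k8s.io/coredns/coredns:v1.11.1", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }

    container, err := client.NewContainer(ctx, "coredns-demo",
        containerd.WithNewSnapshot("coredns-demo-snapshot", image),
        containerd.WithNewSpec(oci.WithImageConfig(image)),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    if err != nil {
        log.Fatal(err)
    }
    defer task.Delete(ctx)

    // Analogous to the StartContainer step logged above.
    if err := task.Start(ctx); err != nil {
        log.Fatal(err)
    }
}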
Jan 30 13:58:27.077779 containerd[1465]: time="2025-01-30T13:58:27.076595003Z" level=info msg="StopPodSandbox for \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\"" Jan 30 13:58:27.139981 containerd[1465]: time="2025-01-30T13:58:27.135525505Z" level=info msg="StartContainer for \"1b2014bd264f561b81f40ef63e48d732239ea9bba950419837759ab1c1fdf12a\" returns successfully" Jan 30 13:58:27.383399 kubelet[2519]: E0130 13:58:27.383279 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:27.384547 kubelet[2519]: E0130 13:58:27.384089 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:27.435677 kubelet[2519]: I0130 13:58:27.435610 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rvj9k" podStartSLOduration=35.435467994 podStartE2EDuration="35.435467994s" podCreationTimestamp="2025-01-30 13:57:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:58:27.432283001 +0000 UTC m=+40.658733152" watchObservedRunningTime="2025-01-30 13:58:27.435467994 +0000 UTC m=+40.661918128" Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.323 [INFO][4473] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.323 [INFO][4473] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" iface="eth0" netns="/var/run/netns/cni-266aab02-996f-9570-056d-2665c65bcaff" Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.324 [INFO][4473] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" iface="eth0" netns="/var/run/netns/cni-266aab02-996f-9570-056d-2665c65bcaff" Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.325 [INFO][4473] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" iface="eth0" netns="/var/run/netns/cni-266aab02-996f-9570-056d-2665c65bcaff" Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.325 [INFO][4473] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.325 [INFO][4473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.378 [INFO][4485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" HandleID="k8s-pod-network.5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.379 [INFO][4485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.380 [INFO][4485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.403 [WARNING][4485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" HandleID="k8s-pod-network.5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.404 [INFO][4485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" HandleID="k8s-pod-network.5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.413 [INFO][4485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:27.442471 containerd[1465]: 2025-01-30 13:58:27.427 [INFO][4473] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:27.465853 containerd[1465]: time="2025-01-30T13:58:27.443989857Z" level=info msg="TearDown network for sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\" successfully" Jan 30 13:58:27.465853 containerd[1465]: time="2025-01-30T13:58:27.444048299Z" level=info msg="StopPodSandbox for \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\" returns successfully" Jan 30 13:58:27.465853 containerd[1465]: time="2025-01-30T13:58:27.446172799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578cd5cfcf-lxt7l,Uid:4252853d-be36-4c01-b117-ed9b5390c193,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:58:27.458392 systemd-networkd[1369]: cali029fcbdc52a: Gained IPv6LL Jan 30 13:58:27.497407 systemd[1]: run-netns-cni\x2d266aab02\x2d996f\x2d9570\x2d056d\x2d2665c65bcaff.mount: Deactivated successfully. 
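The run-netns-cni-266aab02….mount deactivation above is systemd unmounting the pod's network-namespace bind mount under /var/run/netns once Calico's teardown for sandbox 5bffcb02… finishes; the earlier "Asked to release address but it doesn't exist. Ignoring" warning simply means the IPAM handle for that sandbox had already been cleaned up, so the release is a no-op. A small stdlib sketch for listing whatever CNI namespaces remain on a node (path and cni- prefix as they appear in this log, assumed stable):

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    entries, err := os.ReadDir("/var/run/netns")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for _, e := range entries {
        // CNI-created namespaces are named cni-<uuid>; anything still here
        // after its pod sandbox is gone is a candidate for cleanup.
        if strings.HasPrefix(e.Name(), "cni-") {
            fmt.Println("/var/run/netns/" + e.Name())
        }
    }
}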
Jan 30 13:58:27.898066 systemd-networkd[1369]: cali1a0a4b3eabb: Link UP Jan 30 13:58:27.898408 systemd-networkd[1369]: cali1a0a4b3eabb: Gained carrier Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.666 [INFO][4493] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0 calico-apiserver-578cd5cfcf- calico-apiserver 4252853d-be36-4c01-b117-ed9b5390c193 826 0 2025-01-30 13:57:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:578cd5cfcf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-2-c6825061e7 calico-apiserver-578cd5cfcf-lxt7l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1a0a4b3eabb [] []}} ContainerID="a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-lxt7l" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.666 [INFO][4493] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-lxt7l" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.727 [INFO][4508] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" HandleID="k8s-pod-network.a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.756 [INFO][4508] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" HandleID="k8s-pod-network.a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-2-c6825061e7", "pod":"calico-apiserver-578cd5cfcf-lxt7l", "timestamp":"2025-01-30 13:58:27.727762953 +0000 UTC"}, Hostname:"ci-4081.3.0-2-c6825061e7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.756 [INFO][4508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.756 [INFO][4508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.757 [INFO][4508] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-2-c6825061e7' Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.762 [INFO][4508] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.775 [INFO][4508] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.787 [INFO][4508] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.798 [INFO][4508] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.813 [INFO][4508] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.814 [INFO][4508] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.818 [INFO][4508] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.828 [INFO][4508] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.888 [INFO][4508] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.6/26] block=192.168.110.0/26 handle="k8s-pod-network.a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.888 [INFO][4508] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.6/26] handle="k8s-pod-network.a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" host="ci-4081.3.0-2-c6825061e7" Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.888 [INFO][4508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:58:27.964549 containerd[1465]: 2025-01-30 13:58:27.888 [INFO][4508] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.6/26] IPv6=[] ContainerID="a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" HandleID="k8s-pod-network.a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:27.965690 containerd[1465]: 2025-01-30 13:58:27.892 [INFO][4493] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-lxt7l" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0", GenerateName:"calico-apiserver-578cd5cfcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"4252853d-be36-4c01-b117-ed9b5390c193", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578cd5cfcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"", Pod:"calico-apiserver-578cd5cfcf-lxt7l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a0a4b3eabb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:27.965690 containerd[1465]: 2025-01-30 13:58:27.892 [INFO][4493] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.6/32] ContainerID="a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-lxt7l" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:27.965690 containerd[1465]: 2025-01-30 13:58:27.892 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a0a4b3eabb ContainerID="a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-lxt7l" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:27.965690 containerd[1465]: 2025-01-30 13:58:27.895 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-lxt7l" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:27.965690 containerd[1465]: 2025-01-30 13:58:27.903 [INFO][4493] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-lxt7l" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0", GenerateName:"calico-apiserver-578cd5cfcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"4252853d-be36-4c01-b117-ed9b5390c193", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578cd5cfcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b", Pod:"calico-apiserver-578cd5cfcf-lxt7l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a0a4b3eabb", MAC:"9e:f3:b3:a4:74:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:27.965690 containerd[1465]: 2025-01-30 13:58:27.956 [INFO][4493] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b" Namespace="calico-apiserver" Pod="calico-apiserver-578cd5cfcf-lxt7l" WorkloadEndpoint="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:28.173700 containerd[1465]: time="2025-01-30T13:58:28.172587018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:58:28.173700 containerd[1465]: time="2025-01-30T13:58:28.172665099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:58:28.173700 containerd[1465]: time="2025-01-30T13:58:28.172676695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:28.174344 containerd[1465]: time="2025-01-30T13:58:28.174019516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:58:28.261616 systemd[1]: run-containerd-runc-k8s.io-a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b-runc.1S9zvu.mount: Deactivated successfully. Jan 30 13:58:28.270190 systemd[1]: Started cri-containerd-a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b.scope - libcontainer container a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b. 
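Both workloads come up behind Calico-managed host-side veth interfaces, calicbe73c11030 for the coredns pod and cali1a0a4b3eabb for the apiserver pod. Calico derives these names deterministically, a "cali" prefix plus the leading characters of a hash of the workload identity, so the same pod always maps back to the same host interface. The exact hash input differs across Calico versions, so the sketch below shows only the general shape and will not reproduce the names logged here:

package main

import (
    "crypto/sha1"
    "encoding/hex"
    "fmt"
)

// vethName sketches Calico-style interface naming: "cali" plus a short hash
// of the workload identity (assumed here to be namespace+"."+pod; the real
// input varies by Calico version, so outputs differ from this log).
func vethName(namespace, pod string) string {
    sum := sha1.Sum([]byte(namespace + "." + pod))
    return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
    fmt.Println(vethName("kube-system", "coredns-668d6bf9bc-rvj9k"))
}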
Jan 30 13:58:28.302609 containerd[1465]: time="2025-01-30T13:58:28.302528050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:28.307095 containerd[1465]: time="2025-01-30T13:58:28.307017926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:58:28.311551 containerd[1465]: time="2025-01-30T13:58:28.311453247Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:28.318411 containerd[1465]: time="2025-01-30T13:58:28.318354088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:28.321205 containerd[1465]: time="2025-01-30T13:58:28.321154860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.259166543s" Jan 30 13:58:28.321205 containerd[1465]: time="2025-01-30T13:58:28.321199401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:58:28.323784 containerd[1465]: time="2025-01-30T13:58:28.323733837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:58:28.326297 containerd[1465]: time="2025-01-30T13:58:28.326253242Z" level=info msg="CreateContainer within sandbox \"d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:58:28.341151 containerd[1465]: time="2025-01-30T13:58:28.341090978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578cd5cfcf-lxt7l,Uid:4252853d-be36-4c01-b117-ed9b5390c193,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b\"" Jan 30 13:58:28.345554 containerd[1465]: time="2025-01-30T13:58:28.345325377Z" level=info msg="CreateContainer within sandbox \"a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:58:28.387668 kubelet[2519]: E0130 13:58:28.387444 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:28.387668 kubelet[2519]: E0130 13:58:28.387566 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:28.451003 containerd[1465]: time="2025-01-30T13:58:28.450751902Z" level=info msg="CreateContainer within sandbox \"a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a00a5f6e0fb67ea4337f4d7ff55071ba83006d326cff42c605d0793d7b0ef420\"" Jan 30 
13:58:28.451897 containerd[1465]: time="2025-01-30T13:58:28.451844779Z" level=info msg="StartContainer for \"a00a5f6e0fb67ea4337f4d7ff55071ba83006d326cff42c605d0793d7b0ef420\"" Jan 30 13:58:28.469764 containerd[1465]: time="2025-01-30T13:58:28.469694136Z" level=info msg="CreateContainer within sandbox \"d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"291fdc6825ad040cc677fa85637098fd916ab57f41cc6ced66e4b20886ae9168\"" Jan 30 13:58:28.472232 containerd[1465]: time="2025-01-30T13:58:28.472127358Z" level=info msg="StartContainer for \"291fdc6825ad040cc677fa85637098fd916ab57f41cc6ced66e4b20886ae9168\"" Jan 30 13:58:28.483037 systemd-networkd[1369]: calicbe73c11030: Gained IPv6LL Jan 30 13:58:28.531598 systemd[1]: run-containerd-runc-k8s.io-a00a5f6e0fb67ea4337f4d7ff55071ba83006d326cff42c605d0793d7b0ef420-runc.4wCDwf.mount: Deactivated successfully. Jan 30 13:58:28.558242 systemd[1]: Started cri-containerd-a00a5f6e0fb67ea4337f4d7ff55071ba83006d326cff42c605d0793d7b0ef420.scope - libcontainer container a00a5f6e0fb67ea4337f4d7ff55071ba83006d326cff42c605d0793d7b0ef420. Jan 30 13:58:28.602349 systemd[1]: Started cri-containerd-291fdc6825ad040cc677fa85637098fd916ab57f41cc6ced66e4b20886ae9168.scope - libcontainer container 291fdc6825ad040cc677fa85637098fd916ab57f41cc6ced66e4b20886ae9168. Jan 30 13:58:28.675070 containerd[1465]: time="2025-01-30T13:58:28.674986955Z" level=info msg="StartContainer for \"a00a5f6e0fb67ea4337f4d7ff55071ba83006d326cff42c605d0793d7b0ef420\" returns successfully" Jan 30 13:58:28.688188 containerd[1465]: time="2025-01-30T13:58:28.687865610Z" level=info msg="StartContainer for \"291fdc6825ad040cc677fa85637098fd916ab57f41cc6ced66e4b20886ae9168\" returns successfully" Jan 30 13:58:29.402275 kubelet[2519]: E0130 13:58:29.402219 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:29.421171 kubelet[2519]: I0130 13:58:29.420811 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-578cd5cfcf-m89qp" podStartSLOduration=26.092259425 podStartE2EDuration="30.420772256s" podCreationTimestamp="2025-01-30 13:57:59 +0000 UTC" firstStartedPulling="2025-01-30 13:58:23.993472596 +0000 UTC m=+37.219922720" lastFinishedPulling="2025-01-30 13:58:28.321985439 +0000 UTC m=+41.548435551" observedRunningTime="2025-01-30 13:58:29.417746572 +0000 UTC m=+42.644196724" watchObservedRunningTime="2025-01-30 13:58:29.420772256 +0000 UTC m=+42.647222390" Jan 30 13:58:29.484834 systemd[1]: run-containerd-runc-k8s.io-291fdc6825ad040cc677fa85637098fd916ab57f41cc6ced66e4b20886ae9168-runc.yuzdFy.mount: Deactivated successfully. 
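For scale, the apiserver image pull logged above reports bytes read=42001404 completed in 4.259166543s, roughly 9-10 MiB/s from ghcr.io. A quick stdlib check of that arithmetic:

package main

import (
    "fmt"
    "time"
)

func main() {
    const bytesRead = 42001404                 // from the containerd pull log above
    d, _ := time.ParseDuration("4.259166543s") // reported pull time
    mib := float64(bytesRead) / (1 << 20)
    fmt.Printf("%.1f MiB in %s = %.1f MiB/s\n", mib, d, mib/d.Seconds())
}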
Jan 30 13:58:29.953108 systemd-networkd[1369]: cali1a0a4b3eabb: Gained IPv6LL Jan 30 13:58:30.410366 kubelet[2519]: I0130 13:58:30.409462 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:58:30.415225 kubelet[2519]: I0130 13:58:30.415006 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:58:31.088584 containerd[1465]: time="2025-01-30T13:58:31.088470675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:31.090754 containerd[1465]: time="2025-01-30T13:58:31.090678051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:58:31.093504 containerd[1465]: time="2025-01-30T13:58:31.093410341Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:31.100239 containerd[1465]: time="2025-01-30T13:58:31.100160052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:31.102995 containerd[1465]: time="2025-01-30T13:58:31.101558742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.777765403s" Jan 30 13:58:31.102995 containerd[1465]: time="2025-01-30T13:58:31.101762481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:58:31.105676 containerd[1465]: time="2025-01-30T13:58:31.104366253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:58:31.140702 containerd[1465]: time="2025-01-30T13:58:31.140350678Z" level=info msg="CreateContainer within sandbox \"b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:58:31.179136 containerd[1465]: time="2025-01-30T13:58:31.178891130Z" level=info msg="CreateContainer within sandbox \"b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1508155fef86020836fc19a335aed4a60770173d41618ab306ab6928ab86b935\"" Jan 30 13:58:31.182910 containerd[1465]: time="2025-01-30T13:58:31.181481921Z" level=info msg="StartContainer for \"1508155fef86020836fc19a335aed4a60770173d41618ab306ab6928ab86b935\"" Jan 30 13:58:31.240279 systemd[1]: Started cri-containerd-1508155fef86020836fc19a335aed4a60770173d41618ab306ab6928ab86b935.scope - libcontainer container 1508155fef86020836fc19a335aed4a60770173d41618ab306ab6928ab86b935. 
Jan 30 13:58:31.337788 containerd[1465]: time="2025-01-30T13:58:31.337542516Z" level=info msg="StartContainer for \"1508155fef86020836fc19a335aed4a60770173d41618ab306ab6928ab86b935\" returns successfully" Jan 30 13:58:31.455492 kubelet[2519]: I0130 13:58:31.455418 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-578cd5cfcf-lxt7l" podStartSLOduration=32.455391132 podStartE2EDuration="32.455391132s" podCreationTimestamp="2025-01-30 13:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:58:29.464149362 +0000 UTC m=+42.690599518" watchObservedRunningTime="2025-01-30 13:58:31.455391132 +0000 UTC m=+44.681841265" Jan 30 13:58:32.561495 kubelet[2519]: I0130 13:58:32.560177 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cd989f4bc-5k58q" podStartSLOduration=26.398017658 podStartE2EDuration="32.560149349s" podCreationTimestamp="2025-01-30 13:58:00 +0000 UTC" firstStartedPulling="2025-01-30 13:58:24.941835717 +0000 UTC m=+38.168285842" lastFinishedPulling="2025-01-30 13:58:31.103967397 +0000 UTC m=+44.330417533" observedRunningTime="2025-01-30 13:58:31.456969227 +0000 UTC m=+44.683419350" watchObservedRunningTime="2025-01-30 13:58:32.560149349 +0000 UTC m=+45.786599478" Jan 30 13:58:32.723566 containerd[1465]: time="2025-01-30T13:58:32.721750718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:32.730400 containerd[1465]: time="2025-01-30T13:58:32.730310257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:58:32.731567 containerd[1465]: time="2025-01-30T13:58:32.731481203Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:32.740991 containerd[1465]: time="2025-01-30T13:58:32.739432536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:32.742338 containerd[1465]: time="2025-01-30T13:58:32.742248886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.637829214s" Jan 30 13:58:32.742575 containerd[1465]: time="2025-01-30T13:58:32.742545672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:58:32.764185 containerd[1465]: time="2025-01-30T13:58:32.763873644Z" level=info msg="CreateContainer within sandbox \"d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:58:32.806512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034276011.mount: Deactivated successfully. 
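The pod_startup_latency_tracker entries are worth decoding: podStartE2EDuration is the observed-running time minus the pod's creation timestamp, and podStartSLOduration is that figure minus the time spent pulling images (firstStartedPulling to lastFinishedPulling). For calico-kube-controllers-cd989f4bc-5k58q above that is 32.560s - 6.162s ≈ 26.398s, matching the logged value; pods whose images were already present (pull timestamps of 0001-01-01) report identical SLO and E2E durations, as the coredns and apiserver entries do. A stdlib re-computation from the logged timestamps:

package main

import (
    "fmt"
    "time"
)

func mustParse(s string) time.Time {
    t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    // Values from the calico-kube-controllers startup-latency entry above.
    created := mustParse("2025-01-30 13:58:00 +0000 UTC")
    firstPull := mustParse("2025-01-30 13:58:24.941835717 +0000 UTC")
    lastPull := mustParse("2025-01-30 13:58:31.103967397 +0000 UTC")
    observed := mustParse("2025-01-30 13:58:32.560149349 +0000 UTC")

    e2e := observed.Sub(created)
    slo := e2e - lastPull.Sub(firstPull) // E2E minus image-pull time
    fmt.Println(e2e, slo)                // ~32.560s and ~26.398s, as logged
}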
Jan 30 13:58:32.814310 containerd[1465]: time="2025-01-30T13:58:32.814178051Z" level=info msg="CreateContainer within sandbox \"d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"19e90f0ef522b9c30beedf7bf0aa4f52236388ac1e3305ccfb4f3f043789119c\"" Jan 30 13:58:32.815395 containerd[1465]: time="2025-01-30T13:58:32.815360302Z" level=info msg="StartContainer for \"19e90f0ef522b9c30beedf7bf0aa4f52236388ac1e3305ccfb4f3f043789119c\"" Jan 30 13:58:32.872292 systemd[1]: Started cri-containerd-19e90f0ef522b9c30beedf7bf0aa4f52236388ac1e3305ccfb4f3f043789119c.scope - libcontainer container 19e90f0ef522b9c30beedf7bf0aa4f52236388ac1e3305ccfb4f3f043789119c. Jan 30 13:58:32.928284 containerd[1465]: time="2025-01-30T13:58:32.928091732Z" level=info msg="StartContainer for \"19e90f0ef522b9c30beedf7bf0aa4f52236388ac1e3305ccfb4f3f043789119c\" returns successfully" Jan 30 13:58:32.930818 containerd[1465]: time="2025-01-30T13:58:32.930620692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:58:34.509623 containerd[1465]: time="2025-01-30T13:58:34.508540794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:34.512048 containerd[1465]: time="2025-01-30T13:58:34.511969471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:58:34.531241 containerd[1465]: time="2025-01-30T13:58:34.531178938Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:34.533560 containerd[1465]: time="2025-01-30T13:58:34.533490596Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.601963065s" Jan 30 13:58:34.535071 containerd[1465]: time="2025-01-30T13:58:34.533821916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:58:34.535071 containerd[1465]: time="2025-01-30T13:58:34.533742861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:58:34.540424 containerd[1465]: time="2025-01-30T13:58:34.540346967Z" level=info msg="CreateContainer within sandbox \"d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:58:34.600412 containerd[1465]: time="2025-01-30T13:58:34.600342136Z" level=info msg="CreateContainer within sandbox \"d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"12ee7decf60fac588aeb1a4686e3ccb544dd8efd6981192ced10fa09f38949d0\"" Jan 30 13:58:34.601766 containerd[1465]: 
time="2025-01-30T13:58:34.601473351Z" level=info msg="StartContainer for \"12ee7decf60fac588aeb1a4686e3ccb544dd8efd6981192ced10fa09f38949d0\"" Jan 30 13:58:34.662081 systemd[1]: Started cri-containerd-12ee7decf60fac588aeb1a4686e3ccb544dd8efd6981192ced10fa09f38949d0.scope - libcontainer container 12ee7decf60fac588aeb1a4686e3ccb544dd8efd6981192ced10fa09f38949d0. Jan 30 13:58:34.725606 containerd[1465]: time="2025-01-30T13:58:34.725136275Z" level=info msg="StartContainer for \"12ee7decf60fac588aeb1a4686e3ccb544dd8efd6981192ced10fa09f38949d0\" returns successfully" Jan 30 13:58:35.303644 kubelet[2519]: I0130 13:58:35.303554 2519 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:58:35.303644 kubelet[2519]: I0130 13:58:35.303650 2519 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:58:35.477352 kubelet[2519]: I0130 13:58:35.476718 2519 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rg6b9" podStartSLOduration=27.816713822 podStartE2EDuration="36.476695548s" podCreationTimestamp="2025-01-30 13:57:59 +0000 UTC" firstStartedPulling="2025-01-30 13:58:25.875880019 +0000 UTC m=+39.102330139" lastFinishedPulling="2025-01-30 13:58:34.535861749 +0000 UTC m=+47.762311865" observedRunningTime="2025-01-30 13:58:35.476016167 +0000 UTC m=+48.702466303" watchObservedRunningTime="2025-01-30 13:58:35.476695548 +0000 UTC m=+48.703145678" Jan 30 13:58:39.813235 systemd[1]: Started sshd@9-64.227.111.225:22-147.75.109.163:53124.service - OpenSSH per-connection server daemon (147.75.109.163:53124). Jan 30 13:58:39.980169 sshd[4835]: Accepted publickey for core from 147.75.109.163 port 53124 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:39.984563 sshd[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:39.995081 systemd-logind[1442]: New session 10 of user core. Jan 30 13:58:39.998343 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:58:40.829027 sshd[4835]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:40.833462 systemd[1]: sshd@9-64.227.111.225:22-147.75.109.163:53124.service: Deactivated successfully. Jan 30 13:58:40.837023 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:58:40.839526 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:58:40.841654 systemd-logind[1442]: Removed session 10. Jan 30 13:58:43.355542 kubelet[2519]: I0130 13:58:43.355057 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:58:45.855128 systemd[1]: Started sshd@10-64.227.111.225:22-147.75.109.163:53138.service - OpenSSH per-connection server daemon (147.75.109.163:53138). Jan 30 13:58:45.915890 sshd[4857]: Accepted publickey for core from 147.75.109.163 port 53138 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:45.918195 sshd[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:45.927071 systemd-logind[1442]: New session 11 of user core. Jan 30 13:58:45.935499 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 30 13:58:46.132174 sshd[4857]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:46.137687 systemd[1]: sshd@10-64.227.111.225:22-147.75.109.163:53138.service: Deactivated successfully. Jan 30 13:58:46.141019 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:58:46.142220 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:58:46.143349 systemd-logind[1442]: Removed session 11. Jan 30 13:58:47.068855 containerd[1465]: time="2025-01-30T13:58:47.068482500Z" level=info msg="StopPodSandbox for \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\"" Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.302 [WARNING][4885] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8df787c1-03f8-4203-9d8d-3a85d1fa0a95", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7", Pod:"coredns-668d6bf9bc-grrxz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid354575a118", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.306 [INFO][4885] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.306 [INFO][4885] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" iface="eth0" netns="" Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.306 [INFO][4885] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.306 [INFO][4885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.349 [INFO][4891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" HandleID="k8s-pod-network.2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.349 [INFO][4891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.350 [INFO][4891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.358 [WARNING][4891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" HandleID="k8s-pod-network.2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.359 [INFO][4891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" HandleID="k8s-pod-network.2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.361 [INFO][4891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:47.368320 containerd[1465]: 2025-01-30 13:58:47.364 [INFO][4885] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:47.368320 containerd[1465]: time="2025-01-30T13:58:47.368137314Z" level=info msg="TearDown network for sandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\" successfully" Jan 30 13:58:47.368320 containerd[1465]: time="2025-01-30T13:58:47.368182662Z" level=info msg="StopPodSandbox for \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\" returns successfully" Jan 30 13:58:47.370667 containerd[1465]: time="2025-01-30T13:58:47.369887961Z" level=info msg="RemovePodSandbox for \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\"" Jan 30 13:58:47.370667 containerd[1465]: time="2025-01-30T13:58:47.369933198Z" level=info msg="Forcibly stopping sandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\"" Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.443 [WARNING][4909] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8df787c1-03f8-4203-9d8d-3a85d1fa0a95", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"142fa03fa45b8439e4caecffa6402c355f240c3185d87c364bee49968552b2b7", Pod:"coredns-668d6bf9bc-grrxz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid354575a118", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.443 [INFO][4909] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.443 [INFO][4909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" iface="eth0" netns="" Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.443 [INFO][4909] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.443 [INFO][4909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.478 [INFO][4915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" HandleID="k8s-pod-network.2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.478 [INFO][4915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.478 [INFO][4915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.488 [WARNING][4915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" HandleID="k8s-pod-network.2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.488 [INFO][4915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" HandleID="k8s-pod-network.2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--grrxz-eth0" Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.498 [INFO][4915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:47.508592 containerd[1465]: 2025-01-30 13:58:47.504 [INFO][4909] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959" Jan 30 13:58:47.511610 containerd[1465]: time="2025-01-30T13:58:47.509055166Z" level=info msg="TearDown network for sandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\" successfully" Jan 30 13:58:47.535553 containerd[1465]: time="2025-01-30T13:58:47.535348215Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:58:47.535553 containerd[1465]: time="2025-01-30T13:58:47.535466298Z" level=info msg="RemovePodSandbox \"2fd16566aabf012767ccba8151ec23bd7830101776f0a2234dc35bffe8af7959\" returns successfully" Jan 30 13:58:47.537455 containerd[1465]: time="2025-01-30T13:58:47.536884398Z" level=info msg="StopPodSandbox for \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\"" Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.591 [WARNING][4934] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0", GenerateName:"calico-apiserver-578cd5cfcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"4252853d-be36-4c01-b117-ed9b5390c193", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578cd5cfcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b", Pod:"calico-apiserver-578cd5cfcf-lxt7l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a0a4b3eabb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.592 [INFO][4934] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.592 [INFO][4934] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" iface="eth0" netns="" Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.592 [INFO][4934] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.592 [INFO][4934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.620 [INFO][4940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" HandleID="k8s-pod-network.5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.621 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.621 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.629 [WARNING][4940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" HandleID="k8s-pod-network.5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.629 [INFO][4940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" HandleID="k8s-pod-network.5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.631 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:47.638225 containerd[1465]: 2025-01-30 13:58:47.635 [INFO][4934] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:47.640073 containerd[1465]: time="2025-01-30T13:58:47.639672498Z" level=info msg="TearDown network for sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\" successfully" Jan 30 13:58:47.640073 containerd[1465]: time="2025-01-30T13:58:47.639804280Z" level=info msg="StopPodSandbox for \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\" returns successfully" Jan 30 13:58:47.641143 containerd[1465]: time="2025-01-30T13:58:47.641103864Z" level=info msg="RemovePodSandbox for \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\"" Jan 30 13:58:47.641143 containerd[1465]: time="2025-01-30T13:58:47.641140377Z" level=info msg="Forcibly stopping sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\"" Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.712 [WARNING][4958] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0", GenerateName:"calico-apiserver-578cd5cfcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"4252853d-be36-4c01-b117-ed9b5390c193", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578cd5cfcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"a3db49bd117354d37ed5044a2dc92cccb08e2c8cc18f655c81c62b8de499b70b", Pod:"calico-apiserver-578cd5cfcf-lxt7l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a0a4b3eabb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.713 [INFO][4958] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.713 [INFO][4958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" iface="eth0" netns="" Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.713 [INFO][4958] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.713 [INFO][4958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.751 [INFO][4964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" HandleID="k8s-pod-network.5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.751 [INFO][4964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.751 [INFO][4964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.759 [WARNING][4964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" HandleID="k8s-pod-network.5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.759 [INFO][4964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" HandleID="k8s-pod-network.5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--lxt7l-eth0" Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.762 [INFO][4964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:47.767509 containerd[1465]: 2025-01-30 13:58:47.764 [INFO][4958] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b" Jan 30 13:58:47.767509 containerd[1465]: time="2025-01-30T13:58:47.767407452Z" level=info msg="TearDown network for sandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\" successfully" Jan 30 13:58:47.775790 containerd[1465]: time="2025-01-30T13:58:47.775622091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:58:47.776025 containerd[1465]: time="2025-01-30T13:58:47.775803286Z" level=info msg="RemovePodSandbox \"5bffcb021d6c7e55f5ae64caf86314aa2e2c218af8102d0abaec804cb91f0d8b\" returns successfully" Jan 30 13:58:47.776805 containerd[1465]: time="2025-01-30T13:58:47.776485452Z" level=info msg="StopPodSandbox for \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\"" Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.839 [WARNING][4983] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0", GenerateName:"calico-apiserver-578cd5cfcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"3232a66a-b80f-4c5f-91a6-ce83f301a87d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578cd5cfcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210", Pod:"calico-apiserver-578cd5cfcf-m89qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58b20c1e5ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.839 [INFO][4983] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.839 [INFO][4983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" iface="eth0" netns="" Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.839 [INFO][4983] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.839 [INFO][4983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.875 [INFO][4989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" HandleID="k8s-pod-network.9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.875 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.875 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.885 [WARNING][4989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" HandleID="k8s-pod-network.9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.885 [INFO][4989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" HandleID="k8s-pod-network.9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.888 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:47.893263 containerd[1465]: 2025-01-30 13:58:47.891 [INFO][4983] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:47.895023 containerd[1465]: time="2025-01-30T13:58:47.894363726Z" level=info msg="TearDown network for sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\" successfully" Jan 30 13:58:47.895023 containerd[1465]: time="2025-01-30T13:58:47.894437303Z" level=info msg="StopPodSandbox for \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\" returns successfully" Jan 30 13:58:47.895817 containerd[1465]: time="2025-01-30T13:58:47.895365947Z" level=info msg="RemovePodSandbox for \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\"" Jan 30 13:58:47.895817 containerd[1465]: time="2025-01-30T13:58:47.895401315Z" level=info msg="Forcibly stopping sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\"" Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:47.955 [WARNING][5007] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0", GenerateName:"calico-apiserver-578cd5cfcf-", Namespace:"calico-apiserver", SelfLink:"", UID:"3232a66a-b80f-4c5f-91a6-ce83f301a87d", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578cd5cfcf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"d4e765dc10a37f0b76505f422b81f3e527df72562531f49914bac47ff6140210", Pod:"calico-apiserver-578cd5cfcf-m89qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58b20c1e5ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:47.956 [INFO][5007] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:47.956 [INFO][5007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" iface="eth0" netns="" Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:47.956 [INFO][5007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:47.956 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:48.003 [INFO][5014] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" HandleID="k8s-pod-network.9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:48.003 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:48.003 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:48.011 [WARNING][5014] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" HandleID="k8s-pod-network.9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:48.011 [INFO][5014] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" HandleID="k8s-pod-network.9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--apiserver--578cd5cfcf--m89qp-eth0" Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:48.014 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:48.019808 containerd[1465]: 2025-01-30 13:58:48.017 [INFO][5007] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb" Jan 30 13:58:48.020843 containerd[1465]: time="2025-01-30T13:58:48.020435983Z" level=info msg="TearDown network for sandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\" successfully" Jan 30 13:58:48.027690 containerd[1465]: time="2025-01-30T13:58:48.027613368Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:58:48.027854 containerd[1465]: time="2025-01-30T13:58:48.027725518Z" level=info msg="RemovePodSandbox \"9a5c59ba5196d5187a18b32844fa77ba6d2c10d4a9bf4403d3f9dd23ebcafffb\" returns successfully" Jan 30 13:58:48.028485 containerd[1465]: time="2025-01-30T13:58:48.028449990Z" level=info msg="StopPodSandbox for \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\"" Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.081 [WARNING][5032] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be6e8263-d40b-423f-8220-c5dba67bce2a", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7", Pod:"coredns-668d6bf9bc-rvj9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbe73c11030", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.081 [INFO][5032] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.081 [INFO][5032] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" iface="eth0" netns="" Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.081 [INFO][5032] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.081 [INFO][5032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.116 [INFO][5038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" HandleID="k8s-pod-network.1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.116 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.116 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.125 [WARNING][5038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" HandleID="k8s-pod-network.1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.125 [INFO][5038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" HandleID="k8s-pod-network.1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.128 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:48.134913 containerd[1465]: 2025-01-30 13:58:48.132 [INFO][5032] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:48.137071 containerd[1465]: time="2025-01-30T13:58:48.135814656Z" level=info msg="TearDown network for sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\" successfully" Jan 30 13:58:48.137071 containerd[1465]: time="2025-01-30T13:58:48.135861239Z" level=info msg="StopPodSandbox for \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\" returns successfully" Jan 30 13:58:48.137071 containerd[1465]: time="2025-01-30T13:58:48.136596391Z" level=info msg="RemovePodSandbox for \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\"" Jan 30 13:58:48.137071 containerd[1465]: time="2025-01-30T13:58:48.136636382Z" level=info msg="Forcibly stopping sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\"" Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.209 [WARNING][5056] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"be6e8263-d40b-423f-8220-c5dba67bce2a", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"e81184fecb500ba20834fc3e1ddf98fe300ce0e43d5250793206220e431011a7", Pod:"coredns-668d6bf9bc-rvj9k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicbe73c11030", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.210 [INFO][5056] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.210 [INFO][5056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" iface="eth0" netns="" Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.210 [INFO][5056] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.210 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.246 [INFO][5064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" HandleID="k8s-pod-network.1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.246 [INFO][5064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.246 [INFO][5064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.257 [WARNING][5064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" HandleID="k8s-pod-network.1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.257 [INFO][5064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" HandleID="k8s-pod-network.1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Workload="ci--4081.3.0--2--c6825061e7-k8s-coredns--668d6bf9bc--rvj9k-eth0" Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.261 [INFO][5064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:48.266383 containerd[1465]: 2025-01-30 13:58:48.263 [INFO][5056] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe" Jan 30 13:58:48.268120 containerd[1465]: time="2025-01-30T13:58:48.266433869Z" level=info msg="TearDown network for sandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\" successfully" Jan 30 13:58:48.272913 containerd[1465]: time="2025-01-30T13:58:48.272852669Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:58:48.273099 containerd[1465]: time="2025-01-30T13:58:48.272980603Z" level=info msg="RemovePodSandbox \"1d0ed4587b46a4d46f0e67a7a932eb2efb9812e3abc387737dbfbc565a73b2fe\" returns successfully" Jan 30 13:58:48.273647 containerd[1465]: time="2025-01-30T13:58:48.273577065Z" level=info msg="StopPodSandbox for \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\"" Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.332 [WARNING][5082] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0", GenerateName:"calico-kube-controllers-cd989f4bc-", Namespace:"calico-system", SelfLink:"", UID:"d4596862-5cca-4d1a-98a1-719edf3cebdc", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd989f4bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b", Pod:"calico-kube-controllers-cd989f4bc-5k58q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliace25c4cea4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.332 [INFO][5082] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.333 [INFO][5082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" iface="eth0" netns="" Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.333 [INFO][5082] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.333 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.369 [INFO][5088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" HandleID="k8s-pod-network.2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.369 [INFO][5088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.369 [INFO][5088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.377 [WARNING][5088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" HandleID="k8s-pod-network.2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.377 [INFO][5088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" HandleID="k8s-pod-network.2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.380 [INFO][5088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:48.385483 containerd[1465]: 2025-01-30 13:58:48.383 [INFO][5082] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:48.387600 containerd[1465]: time="2025-01-30T13:58:48.385761646Z" level=info msg="TearDown network for sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\" successfully" Jan 30 13:58:48.387600 containerd[1465]: time="2025-01-30T13:58:48.385796165Z" level=info msg="StopPodSandbox for \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\" returns successfully" Jan 30 13:58:48.387600 containerd[1465]: time="2025-01-30T13:58:48.386532028Z" level=info msg="RemovePodSandbox for \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\"" Jan 30 13:58:48.387600 containerd[1465]: time="2025-01-30T13:58:48.386574547Z" level=info msg="Forcibly stopping sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\"" Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.445 [WARNING][5106] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0", GenerateName:"calico-kube-controllers-cd989f4bc-", Namespace:"calico-system", SelfLink:"", UID:"d4596862-5cca-4d1a-98a1-719edf3cebdc", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd989f4bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"b8550f82cb733adecd491a19d65850d7b256ada01da15cd6780263c5596e6a9b", Pod:"calico-kube-controllers-cd989f4bc-5k58q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliace25c4cea4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.445 [INFO][5106] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.445 [INFO][5106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" iface="eth0" netns="" Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.445 [INFO][5106] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.445 [INFO][5106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.531 [INFO][5112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" HandleID="k8s-pod-network.2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.531 [INFO][5112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.531 [INFO][5112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.553 [WARNING][5112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" HandleID="k8s-pod-network.2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.553 [INFO][5112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" HandleID="k8s-pod-network.2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Workload="ci--4081.3.0--2--c6825061e7-k8s-calico--kube--controllers--cd989f4bc--5k58q-eth0" Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.561 [INFO][5112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:48.568339 containerd[1465]: 2025-01-30 13:58:48.565 [INFO][5106] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6" Jan 30 13:58:48.568339 containerd[1465]: time="2025-01-30T13:58:48.568259197Z" level=info msg="TearDown network for sandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\" successfully" Jan 30 13:58:48.581919 containerd[1465]: time="2025-01-30T13:58:48.581848608Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:58:48.582127 containerd[1465]: time="2025-01-30T13:58:48.581950411Z" level=info msg="RemovePodSandbox \"2de95d04ccda1911f9ff623c520f0d3901d4fd45ae40d8e8f68aaac263e1f1a6\" returns successfully" Jan 30 13:58:48.583173 containerd[1465]: time="2025-01-30T13:58:48.582723858Z" level=info msg="StopPodSandbox for \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\"" Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.658 [WARNING][5130] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e4fd20cc-1ebf-4c36-acf8-aae4903f42f0", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3", Pod:"csi-node-driver-rg6b9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali029fcbdc52a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.659 [INFO][5130] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.659 [INFO][5130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" iface="eth0" netns="" Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.659 [INFO][5130] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.659 [INFO][5130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.692 [INFO][5137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" HandleID="k8s-pod-network.9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.692 [INFO][5137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.692 [INFO][5137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.701 [WARNING][5137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" HandleID="k8s-pod-network.9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.701 [INFO][5137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" HandleID="k8s-pod-network.9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.703 [INFO][5137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:48.707812 containerd[1465]: 2025-01-30 13:58:48.705 [INFO][5130] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:48.709122 containerd[1465]: time="2025-01-30T13:58:48.707882646Z" level=info msg="TearDown network for sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\" successfully" Jan 30 13:58:48.709122 containerd[1465]: time="2025-01-30T13:58:48.707924184Z" level=info msg="StopPodSandbox for \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\" returns successfully" Jan 30 13:58:48.709122 containerd[1465]: time="2025-01-30T13:58:48.708666954Z" level=info msg="RemovePodSandbox for \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\"" Jan 30 13:58:48.709122 containerd[1465]: time="2025-01-30T13:58:48.708705427Z" level=info msg="Forcibly stopping sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\"" Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.785 [WARNING][5155] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e4fd20cc-1ebf-4c36-acf8-aae4903f42f0", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 57, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-2-c6825061e7", ContainerID:"d3fae372d000034db0af49b27be08102479504413a4949a2491ea0d8cfddf5f3", Pod:"csi-node-driver-rg6b9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali029fcbdc52a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.786 [INFO][5155] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.786 [INFO][5155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" iface="eth0" netns="" Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.786 [INFO][5155] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.786 [INFO][5155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.830 [INFO][5161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" HandleID="k8s-pod-network.9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.831 [INFO][5161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.831 [INFO][5161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.839 [WARNING][5161] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" HandleID="k8s-pod-network.9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.839 [INFO][5161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" HandleID="k8s-pod-network.9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Workload="ci--4081.3.0--2--c6825061e7-k8s-csi--node--driver--rg6b9-eth0" Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.842 [INFO][5161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:48.847719 containerd[1465]: 2025-01-30 13:58:48.844 [INFO][5155] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7" Jan 30 13:58:48.847719 containerd[1465]: time="2025-01-30T13:58:48.847508207Z" level=info msg="TearDown network for sandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\" successfully" Jan 30 13:58:48.854869 containerd[1465]: time="2025-01-30T13:58:48.854602147Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:58:48.854869 containerd[1465]: time="2025-01-30T13:58:48.854699232Z" level=info msg="RemovePodSandbox \"9d486bc527622251a047cfa87f5d64c03396a4ce4cfdc1cf4fde82e6618dbcd7\" returns successfully" Jan 30 13:58:49.420602 kubelet[2519]: E0130 13:58:49.420554 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 13:58:51.156039 systemd[1]: Started sshd@11-64.227.111.225:22-147.75.109.163:43848.service - OpenSSH per-connection server daemon (147.75.109.163:43848). Jan 30 13:58:51.284311 sshd[5191]: Accepted publickey for core from 147.75.109.163 port 43848 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:51.287188 sshd[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:51.295597 systemd-logind[1442]: New session 12 of user core. Jan 30 13:58:51.304311 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:58:51.543099 sshd[5191]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:51.547314 systemd[1]: sshd@11-64.227.111.225:22-147.75.109.163:43848.service: Deactivated successfully. Jan 30 13:58:51.550642 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:58:51.556196 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:58:51.557662 systemd-logind[1442]: Removed session 12. Jan 30 13:58:54.393642 kubelet[2519]: I0130 13:58:54.393160 2519 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:58:56.565499 systemd[1]: Started sshd@12-64.227.111.225:22-147.75.109.163:43858.service - OpenSSH per-connection server daemon (147.75.109.163:43858). 
Jan 30 13:58:56.611628 sshd[5209]: Accepted publickey for core from 147.75.109.163 port 43858 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:58:56.612567 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:58:56.620386 systemd-logind[1442]: New session 13 of user core.
Jan 30 13:58:56.629279 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 13:58:56.815007 sshd[5209]: pam_unix(sshd:session): session closed for user core
Jan 30 13:58:56.823778 systemd[1]: sshd@12-64.227.111.225:22-147.75.109.163:43858.service: Deactivated successfully.
Jan 30 13:58:56.826333 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 13:58:56.828041 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit.
Jan 30 13:58:56.835285 systemd[1]: Started sshd@13-64.227.111.225:22-147.75.109.163:43862.service - OpenSSH per-connection server daemon (147.75.109.163:43862).
Jan 30 13:58:56.838762 systemd-logind[1442]: Removed session 13.
Jan 30 13:58:56.890051 sshd[5224]: Accepted publickey for core from 147.75.109.163 port 43862 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:58:56.892093 sshd[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:58:56.899147 systemd-logind[1442]: New session 14 of user core.
Jan 30 13:58:56.914348 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:58:57.195074 sshd[5224]: pam_unix(sshd:session): session closed for user core
Jan 30 13:58:57.211120 systemd[1]: sshd@13-64.227.111.225:22-147.75.109.163:43862.service: Deactivated successfully.
Jan 30 13:58:57.215745 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:58:57.221182 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:58:57.232599 systemd[1]: Started sshd@14-64.227.111.225:22-147.75.109.163:43876.service - OpenSSH per-connection server daemon (147.75.109.163:43876).
Jan 30 13:58:57.236803 systemd-logind[1442]: Removed session 14.
Jan 30 13:58:57.296155 sshd[5234]: Accepted publickey for core from 147.75.109.163 port 43876 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:58:57.298857 sshd[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:58:57.306266 systemd-logind[1442]: New session 15 of user core.
Jan 30 13:58:57.315378 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:58:57.501113 sshd[5234]: pam_unix(sshd:session): session closed for user core
Jan 30 13:58:57.508540 systemd[1]: sshd@14-64.227.111.225:22-147.75.109.163:43876.service: Deactivated successfully.
Jan 30 13:58:57.512924 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:58:57.515582 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:58:57.517989 systemd-logind[1442]: Removed session 15.
Jan 30 13:58:58.996656 kubelet[2519]: E0130 13:58:58.994624 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 13:58:59.000095 kubelet[2519]: E0130 13:58:59.000009 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 13:59:02.521423 systemd[1]: Started sshd@15-64.227.111.225:22-147.75.109.163:49304.service - OpenSSH per-connection server daemon (147.75.109.163:49304).
Jan 30 13:59:02.597221 sshd[5279]: Accepted publickey for core from 147.75.109.163 port 49304 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:02.601068 sshd[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:02.609172 systemd-logind[1442]: New session 16 of user core.
Jan 30 13:59:02.615298 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:59:02.876128 sshd[5279]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:02.881115 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:59:02.882155 systemd[1]: sshd@15-64.227.111.225:22-147.75.109.163:49304.service: Deactivated successfully.
Jan 30 13:59:02.885527 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:59:02.888589 systemd-logind[1442]: Removed session 16.
Jan 30 13:59:04.996845 kubelet[2519]: E0130 13:59:04.996759 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 13:59:07.895389 systemd[1]: Started sshd@16-64.227.111.225:22-147.75.109.163:42430.service - OpenSSH per-connection server daemon (147.75.109.163:42430).
Jan 30 13:59:07.936571 sshd[5297]: Accepted publickey for core from 147.75.109.163 port 42430 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:07.938585 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:07.945037 systemd-logind[1442]: New session 17 of user core.
Jan 30 13:59:07.950226 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:59:08.095639 sshd[5297]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:08.100603 systemd[1]: sshd@16-64.227.111.225:22-147.75.109.163:42430.service: Deactivated successfully.
Jan 30 13:59:08.103347 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:59:08.105029 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:59:08.107632 systemd-logind[1442]: Removed session 17.
Jan 30 13:59:13.124478 systemd[1]: Started sshd@17-64.227.111.225:22-147.75.109.163:42444.service - OpenSSH per-connection server daemon (147.75.109.163:42444).
Jan 30 13:59:13.175169 sshd[5312]: Accepted publickey for core from 147.75.109.163 port 42444 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:13.177666 sshd[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:13.185648 systemd-logind[1442]: New session 18 of user core.
Jan 30 13:59:13.191292 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:59:13.372546 sshd[5312]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:13.376922 systemd[1]: sshd@17-64.227.111.225:22-147.75.109.163:42444.service: Deactivated successfully.
Jan 30 13:59:13.381756 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:59:13.384646 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:59:13.386638 systemd-logind[1442]: Removed session 18.
Jan 30 13:59:18.396322 systemd[1]: Started sshd@18-64.227.111.225:22-147.75.109.163:42086.service - OpenSSH per-connection server daemon (147.75.109.163:42086).
Jan 30 13:59:18.437846 sshd[5324]: Accepted publickey for core from 147.75.109.163 port 42086 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:18.440150 sshd[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:18.449288 systemd-logind[1442]: New session 19 of user core.
Jan 30 13:59:18.455208 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:59:18.617612 sshd[5324]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:18.632042 systemd[1]: sshd@18-64.227.111.225:22-147.75.109.163:42086.service: Deactivated successfully.
Jan 30 13:59:18.636420 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:59:18.638046 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:59:18.648854 systemd[1]: Started sshd@19-64.227.111.225:22-147.75.109.163:42102.service - OpenSSH per-connection server daemon (147.75.109.163:42102).
Jan 30 13:59:18.651020 systemd-logind[1442]: Removed session 19.
Jan 30 13:59:18.696562 sshd[5336]: Accepted publickey for core from 147.75.109.163 port 42102 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:18.699414 sshd[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:18.710027 systemd-logind[1442]: New session 20 of user core.
Jan 30 13:59:18.713318 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:59:19.308504 sshd[5336]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:19.328173 systemd[1]: Started sshd@20-64.227.111.225:22-147.75.109.163:42112.service - OpenSSH per-connection server daemon (147.75.109.163:42112).
Jan 30 13:59:19.328836 systemd[1]: sshd@19-64.227.111.225:22-147.75.109.163:42102.service: Deactivated successfully.
Jan 30 13:59:19.338362 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:59:19.346822 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:59:19.354164 systemd-logind[1442]: Removed session 20.
Jan 30 13:59:19.425773 sshd[5345]: Accepted publickey for core from 147.75.109.163 port 42112 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:19.429517 sshd[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:19.444325 systemd-logind[1442]: New session 21 of user core.
Jan 30 13:59:19.452369 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:59:19.992348 kubelet[2519]: E0130 13:59:19.992295 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 13:59:20.684860 sshd[5345]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:20.718661 systemd[1]: Started sshd@21-64.227.111.225:22-147.75.109.163:42118.service - OpenSSH per-connection server daemon (147.75.109.163:42118).
Jan 30 13:59:20.719356 systemd[1]: sshd@20-64.227.111.225:22-147.75.109.163:42112.service: Deactivated successfully.
Jan 30 13:59:20.727325 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:59:20.737692 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:59:20.748157 systemd-logind[1442]: Removed session 21.
Jan 30 13:59:20.857254 sshd[5383]: Accepted publickey for core from 147.75.109.163 port 42118 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:20.858826 sshd[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:20.881138 systemd-logind[1442]: New session 22 of user core.
Jan 30 13:59:20.887082 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:59:21.561190 sshd[5383]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:21.574281 systemd[1]: sshd@21-64.227.111.225:22-147.75.109.163:42118.service: Deactivated successfully.
Jan 30 13:59:21.581022 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:59:21.585204 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:59:21.591496 systemd[1]: Started sshd@22-64.227.111.225:22-147.75.109.163:42130.service - OpenSSH per-connection server daemon (147.75.109.163:42130).
Jan 30 13:59:21.595136 systemd-logind[1442]: Removed session 22.
Jan 30 13:59:21.688495 sshd[5398]: Accepted publickey for core from 147.75.109.163 port 42130 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:21.691117 sshd[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:21.700397 systemd-logind[1442]: New session 23 of user core.
Jan 30 13:59:21.708246 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:59:21.887440 sshd[5398]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:21.894036 systemd-logind[1442]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:59:21.895041 systemd[1]: sshd@22-64.227.111.225:22-147.75.109.163:42130.service: Deactivated successfully.
Jan 30 13:59:21.897666 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:59:21.899450 systemd-logind[1442]: Removed session 23.
Jan 30 13:59:24.992598 kubelet[2519]: E0130 13:59:24.992056 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 13:59:26.914493 systemd[1]: Started sshd@23-64.227.111.225:22-147.75.109.163:42134.service - OpenSSH per-connection server daemon (147.75.109.163:42134).
Jan 30 13:59:26.976106 sshd[5415]: Accepted publickey for core from 147.75.109.163 port 42134 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:26.981481 sshd[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:27.004423 systemd-logind[1442]: New session 24 of user core.
Jan 30 13:59:27.010399 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:59:27.201418 sshd[5415]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:27.207881 systemd[1]: sshd@23-64.227.111.225:22-147.75.109.163:42134.service: Deactivated successfully.
Jan 30 13:59:27.212006 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:59:27.213376 systemd-logind[1442]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:59:27.215646 systemd-logind[1442]: Removed session 24.
Jan 30 13:59:29.992321 kubelet[2519]: E0130 13:59:29.992186 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 13:59:32.221114 systemd[1]: Started sshd@24-64.227.111.225:22-147.75.109.163:50196.service - OpenSSH per-connection server daemon (147.75.109.163:50196).
Jan 30 13:59:32.305095 sshd[5428]: Accepted publickey for core from 147.75.109.163 port 50196 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:32.308265 sshd[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:32.317623 systemd-logind[1442]: New session 25 of user core.
Jan 30 13:59:32.325292 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:59:32.507465 systemd[1]: run-containerd-runc-k8s.io-1508155fef86020836fc19a335aed4a60770173d41618ab306ab6928ab86b935-runc.salmJx.mount: Deactivated successfully.
Jan 30 13:59:32.515394 sshd[5428]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:32.524429 systemd[1]: sshd@24-64.227.111.225:22-147.75.109.163:50196.service: Deactivated successfully.
Jan 30 13:59:32.528462 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:59:32.531218 systemd-logind[1442]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:59:32.534333 systemd-logind[1442]: Removed session 25.
Jan 30 13:59:37.541080 systemd[1]: Started sshd@25-64.227.111.225:22-147.75.109.163:41416.service - OpenSSH per-connection server daemon (147.75.109.163:41416).
Jan 30 13:59:37.617070 sshd[5477]: Accepted publickey for core from 147.75.109.163 port 41416 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:37.620501 sshd[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:37.631029 systemd-logind[1442]: New session 26 of user core.
Jan 30 13:59:37.637307 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 13:59:37.923204 sshd[5477]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:37.930822 systemd[1]: sshd@25-64.227.111.225:22-147.75.109.163:41416.service: Deactivated successfully.
Jan 30 13:59:37.934691 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:59:37.937839 systemd-logind[1442]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:59:37.940137 systemd-logind[1442]: Removed session 26.
Jan 30 13:59:37.992291 kubelet[2519]: E0130 13:59:37.991748 2519 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Jan 30 13:59:42.946300 systemd[1]: Started sshd@26-64.227.111.225:22-147.75.109.163:41430.service - OpenSSH per-connection server daemon (147.75.109.163:41430).
Jan 30 13:59:43.043012 sshd[5497]: Accepted publickey for core from 147.75.109.163 port 41430 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:59:43.045237 sshd[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:59:43.055542 systemd-logind[1442]: New session 27 of user core.
Jan 30 13:59:43.059229 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 13:59:43.305445 sshd[5497]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:43.312139 systemd[1]: sshd@26-64.227.111.225:22-147.75.109.163:41430.service: Deactivated successfully.
Jan 30 13:59:43.320295 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 13:59:43.324082 systemd-logind[1442]: Session 27 logged out. Waiting for processes to exit.
Jan 30 13:59:43.326591 systemd-logind[1442]: Removed session 27.