Jan 30 13:55:09.957271 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:55:09.957299 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:55:09.957312 kernel: BIOS-provided physical RAM map:
Jan 30 13:55:09.957319 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:55:09.957325 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:55:09.957332 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:55:09.957340 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 30 13:55:09.957347 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 30 13:55:09.957353 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:55:09.957364 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:55:09.957371 kernel: NX (Execute Disable) protection: active
Jan 30 13:55:09.957377 kernel: APIC: Static calls initialized
Jan 30 13:55:09.957388 kernel: SMBIOS 2.8 present.
Jan 30 13:55:09.957396 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 30 13:55:09.957404 kernel: Hypervisor detected: KVM
Jan 30 13:55:09.957415 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:55:09.957426 kernel: kvm-clock: using sched offset of 3545509238 cycles
Jan 30 13:55:09.957434 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:55:09.957443 kernel: tsc: Detected 2494.140 MHz processor
Jan 30 13:55:09.957451 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:55:09.957459 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:55:09.957467 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 30 13:55:09.957475 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:55:09.957483 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:55:09.957494 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:55:09.957502 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 30 13:55:09.957509 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:55:09.957517 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:55:09.957525 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:55:09.957533 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 30 13:55:09.957541 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:55:09.957549 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:55:09.957556 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:55:09.957567 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:55:09.957574 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 30 13:55:09.957582 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 30 13:55:09.957590 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 30 13:55:09.957597 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 30 13:55:09.957605 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 30 13:55:09.957613 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 30 13:55:09.957627 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 30 13:55:09.957635 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:55:09.957643 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:55:09.957652 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 13:55:09.957660 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 13:55:09.957671 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 30 13:55:09.957679 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 30 13:55:09.957691 kernel: Zone ranges:
Jan 30 13:55:09.957699 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:55:09.957707 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 30 13:55:09.957715 kernel: Normal empty
Jan 30 13:55:09.957724 kernel: Movable zone start for each node
Jan 30 13:55:09.957732 kernel: Early memory node ranges
Jan 30 13:55:09.957740 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:55:09.957748 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 30 13:55:09.957757 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 30 13:55:09.957768 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:55:09.957776 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:55:09.957787 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 30 13:55:09.957795 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:55:09.957804 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:55:09.957812 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:55:09.957820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:55:09.957828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:55:09.957837 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:55:09.957848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:55:09.957856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:55:09.957865 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:55:09.957873 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:55:09.957881 kernel: TSC deadline timer available
Jan 30 13:55:09.957889 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:55:09.957898 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:55:09.957906 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 30 13:55:09.957916 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:55:09.957925 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:55:09.957937 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:55:09.957945 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:55:09.957954 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:55:09.957962 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:55:09.957970 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 13:55:09.957979 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:55:09.957988 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:55:09.957999 kernel: random: crng init done
Jan 30 13:55:09.958007 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:55:09.958015 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:55:09.958023 kernel: Fallback order for Node 0: 0
Jan 30 13:55:09.958032 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 30 13:55:09.958040 kernel: Policy zone: DMA32
Jan 30 13:55:09.958048 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:55:09.958057 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Jan 30 13:55:09.958065 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:55:09.958093 kernel: Kernel/User page tables isolation: enabled
Jan 30 13:55:09.958105 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:55:09.958116 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:55:09.958128 kernel: Dynamic Preempt: voluntary
Jan 30 13:55:09.958168 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:55:09.958178 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:55:09.958187 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:55:09.958196 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:55:09.958205 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:55:09.958213 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:55:09.958226 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:55:09.958235 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:55:09.958243 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:55:09.958251 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
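The two "usable" e820 ranges above are where the droplet's RAM actually lives; everything else is firmware-reserved. A quick sketch in Python (ranges copied from the log; the kernel's own "Memory: 1971204K/2096612K available" line above comes out a few KiB smaller because the first page and a few other spots are re-reserved before reporting):

    # Sum the inclusive "usable" BIOS-e820 ranges printed above.
    usable = [
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x000000007ffdafff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(total // 1024, "KiB usable")  # 2096619 KiB, vs. 2096612K reported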
Jan 30 13:55:09.958263 kernel: Console: colour VGA+ 80x25
Jan 30 13:55:09.958272 kernel: printk: console [tty0] enabled
Jan 30 13:55:09.958281 kernel: printk: console [ttyS0] enabled
Jan 30 13:55:09.958289 kernel: ACPI: Core revision 20230628
Jan 30 13:55:09.958298 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:55:09.958309 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:55:09.958318 kernel: x2apic enabled
Jan 30 13:55:09.958326 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:55:09.958334 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:55:09.958343 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 30 13:55:09.958351 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Jan 30 13:55:09.958360 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 13:55:09.958369 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 13:55:09.958389 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:55:09.958398 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:55:09.958407 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:55:09.958418 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:55:09.958427 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 13:55:09.958436 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:55:09.958445 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:55:09.958454 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 13:55:09.958463 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:55:09.958477 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:55:09.958486 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:55:09.958495 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:55:09.958504 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:55:09.958573 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 13:55:09.958582 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:55:09.958591 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:55:09.958600 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:55:09.958613 kernel: landlock: Up and running.
Jan 30 13:55:09.958622 kernel: SELinux: Initializing.
Jan 30 13:55:09.958631 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:55:09.958639 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:55:09.958648 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 30 13:55:09.958657 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:55:09.958666 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:55:09.958676 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:55:09.958685 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
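The Spectre and MDS lines above are also exported at runtime through sysfs, so mitigation state can be checked without trawling dmesg. A minimal sketch (standard kernel interface, nothing droplet-specific):

    from pathlib import Path

    # Each file under this directory mirrors one of the boot lines above,
    # e.g. "mds" reports the "Mitigation: Clear CPU buffers" status.
    for f in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{f.name}: {f.read_text().strip()}")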
Jan 30 13:55:09.958697 kernel: signal: max sigframe size: 1776
Jan 30 13:55:09.958706 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:55:09.958715 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:55:09.958724 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:55:09.958733 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:55:09.958742 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:55:09.958751 kernel: .... node #0, CPUs: #1
Jan 30 13:55:09.958762 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:55:09.958780 kernel: smpboot: Max logical packages: 1
Jan 30 13:55:09.958798 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Jan 30 13:55:09.958810 kernel: devtmpfs: initialized
Jan 30 13:55:09.958823 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:55:09.958837 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:55:09.958849 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:55:09.958863 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:55:09.958876 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:55:09.958890 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:55:09.958905 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:55:09.958923 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:55:09.958937 kernel: audit: type=2000 audit(1738245308.040:1): state=initialized audit_enabled=0 res=1
Jan 30 13:55:09.958950 kernel: cpuidle: using governor menu
Jan 30 13:55:09.958963 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:55:09.958977 kernel: dca service started, version 1.12.1
Jan 30 13:55:09.958992 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:55:09.959005 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
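The BogoMIPS figures here are pure arithmetic on loops_per_jiffy rather than a real benchmark; because calibration was skipped, lpj was derived straight from the 2494.140 MHz TSC. A check of the printed numbers (assuming HZ=1000, which is what makes them come out exactly):

    HZ = 1000
    lpj = 2494140                   # from "Calibrating delay loop (skipped)" above
    bogomips = lpj / (500000 / HZ)  # the kernel's BogoMIPS formula
    print(bogomips)                 # 4988.28 per CPU, as logged
    print(round(2 * bogomips, 2))   # 9976.56 for both CPUs, as logged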
Jan 30 13:55:09.959018 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:55:09.959032 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:55:09.959094 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:55:09.959110 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:55:09.959124 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:55:09.959155 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:55:09.959170 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:55:09.959179 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:55:09.959188 kernel: ACPI: Interpreter enabled
Jan 30 13:55:09.959197 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:55:09.959206 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:55:09.959220 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:55:09.959229 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:55:09.959242 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 13:55:09.959253 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:55:09.959517 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:55:09.959699 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 13:55:09.959840 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 13:55:09.959869 kernel: acpiphp: Slot [3] registered
Jan 30 13:55:09.959884 kernel: acpiphp: Slot [4] registered
Jan 30 13:55:09.959899 kernel: acpiphp: Slot [5] registered
Jan 30 13:55:09.959911 kernel: acpiphp: Slot [6] registered
Jan 30 13:55:09.959922 kernel: acpiphp: Slot [7] registered
Jan 30 13:55:09.959935 kernel: acpiphp: Slot [8] registered
Jan 30 13:55:09.959947 kernel: acpiphp: Slot [9] registered
Jan 30 13:55:09.959963 kernel: acpiphp: Slot [10] registered
Jan 30 13:55:09.959978 kernel: acpiphp: Slot [11] registered
Jan 30 13:55:09.959991 kernel: acpiphp: Slot [12] registered
Jan 30 13:55:09.960004 kernel: acpiphp: Slot [13] registered
Jan 30 13:55:09.960013 kernel: acpiphp: Slot [14] registered
Jan 30 13:55:09.960022 kernel: acpiphp: Slot [15] registered
Jan 30 13:55:09.960030 kernel: acpiphp: Slot [16] registered
Jan 30 13:55:09.960040 kernel: acpiphp: Slot [17] registered
Jan 30 13:55:09.960053 kernel: acpiphp: Slot [18] registered
Jan 30 13:55:09.960068 kernel: acpiphp: Slot [19] registered
Jan 30 13:55:09.960084 kernel: acpiphp: Slot [20] registered
Jan 30 13:55:09.960094 kernel: acpiphp: Slot [21] registered
Jan 30 13:55:09.960107 kernel: acpiphp: Slot [22] registered
Jan 30 13:55:09.960116 kernel: acpiphp: Slot [23] registered
Jan 30 13:55:09.960125 kernel: acpiphp: Slot [24] registered
Jan 30 13:55:09.960134 kernel: acpiphp: Slot [25] registered
Jan 30 13:55:09.960143 kernel: acpiphp: Slot [26] registered
Jan 30 13:55:09.961058 kernel: acpiphp: Slot [27] registered
Jan 30 13:55:09.961075 kernel: acpiphp: Slot [28] registered
Jan 30 13:55:09.961092 kernel: acpiphp: Slot [29] registered
Jan 30 13:55:09.961106 kernel: acpiphp: Slot [30] registered
Jan 30 13:55:09.961118 kernel: acpiphp: Slot [31] registered
Jan 30 13:55:09.961193 kernel: PCI host bridge to bus 0000:00
Jan 30 13:55:09.961373 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:55:09.963366 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
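In the device scan that follows, every ID of the form [1af4:xxxx] is a virtio device (0x1af4 is the virtio PCI vendor ID; 1000 is net, 1001 block, 1002 balloon, 1004 SCSI, 1050 GPU). A small sketch that pulls those IDs out of journal text like the lines below, to make the droplet's hardware inventory explicit:

    import re

    SAMPLE = """
    pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
    pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
    pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
    """
    # Match "pci <bdf>: [vendor:device]" as printed by the kernel.
    for bdf, vendor, device in re.findall(
            r"pci (\S+): \[([0-9a-f]{4}):([0-9a-f]{4})\]", SAMPLE):
        print(bdf, vendor, device)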
Jan 30 13:55:09.963467 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:55:09.963554 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 13:55:09.963639 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 30 13:55:09.963728 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:55:09.963871 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:55:09.963982 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 13:55:09.964096 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 13:55:09.964284 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 30 13:55:09.964386 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 13:55:09.964534 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 13:55:09.964687 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 13:55:09.964834 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 13:55:09.965006 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 30 13:55:09.967263 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 30 13:55:09.967542 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 13:55:09.967692 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 13:55:09.967855 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 13:55:09.967992 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 13:55:09.968097 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 13:55:09.968274 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 30 13:55:09.968374 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 30 13:55:09.968545 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 13:55:09.968668 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:55:09.968796 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:55:09.968947 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 30 13:55:09.969107 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 30 13:55:09.970398 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 30 13:55:09.970603 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:55:09.970773 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 30 13:55:09.970910 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 30 13:55:09.971062 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 30 13:55:09.971238 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 30 13:55:09.971402 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 30 13:55:09.971511 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 30 13:55:09.971611 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 30 13:55:09.971765 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:55:09.971939 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 13:55:09.972108 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 30 13:55:09.974435 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 30 13:55:09.974647 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:55:09.974759 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 30 13:55:09.974858 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 30 13:55:09.974958 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 30 13:55:09.975076 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 13:55:09.977391 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 30 13:55:09.977583 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 30 13:55:09.977608 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:55:09.977621 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:55:09.977630 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:55:09.977639 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:55:09.977649 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:55:09.977671 kernel: iommu: Default domain type: Translated
Jan 30 13:55:09.977685 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:55:09.977700 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:55:09.977714 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:55:09.977729 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:55:09.977744 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 30 13:55:09.977906 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 13:55:09.978058 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 13:55:09.979348 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:55:09.979390 kernel: vgaarb: loaded
Jan 30 13:55:09.979406 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:55:09.979420 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:55:09.979434 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:55:09.979447 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:55:09.979461 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:55:09.979474 kernel: pnp: PnP ACPI init
Jan 30 13:55:09.979488 kernel: pnp: PnP ACPI: found 4 devices
Jan 30 13:55:09.979514 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:55:09.979529 kernel: NET: Registered PF_INET protocol family
Jan 30 13:55:09.979544 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:55:09.979560 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 13:55:09.979574 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:55:09.979586 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:55:09.979600 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 13:55:09.979615 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 13:55:09.979630 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:55:09.979651 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:55:09.979663 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:55:09.979673 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:55:09.979836 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:55:09.979986 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:55:09.980108 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:55:09.982367 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 13:55:09.982509 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 30 13:55:09.982654 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 13:55:09.982797 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:55:09.982812 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 13:55:09.982914 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 37108 usecs
Jan 30 13:55:09.982927 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:55:09.982936 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:55:09.982946 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Jan 30 13:55:09.982956 kernel: Initialise system trusted keyrings
Jan 30 13:55:09.982965 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 13:55:09.982979 kernel: Key type asymmetric registered
Jan 30 13:55:09.982988 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:55:09.982998 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:55:09.983008 kernel: io scheduler mq-deadline registered
Jan 30 13:55:09.983017 kernel: io scheduler kyber registered
Jan 30 13:55:09.983026 kernel: io scheduler bfq registered
Jan 30 13:55:09.983035 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:55:09.983045 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 13:55:09.983055 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 13:55:09.983067 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 13:55:09.983076 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:55:09.983086 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:55:09.983096 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:55:09.983112 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:55:09.983127 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:55:09.985421 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 13:55:09.985444 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:55:09.985578 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 13:55:09.985715 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T13:55:09 UTC (1738245309)
Jan 30 13:55:09.985849 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 13:55:09.985867 kernel: intel_pstate: CPU model not supported
Jan 30 13:55:09.985880 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:55:09.985893 kernel: Segment Routing with IPv6
Jan 30 13:55:09.985906 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:55:09.985919 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:55:09.985931 kernel: Key type dns_resolver registered
Jan 30 13:55:09.985954 kernel: IPI shorthand broadcast: enabled
Jan 30 13:55:09.985967 kernel: sched_clock: Marking stable (1039003621, 89292486)->(1146409241, -18113134)
Jan 30 13:55:09.985980 kernel: registered taskstats version 1
Jan 30 13:55:09.985993 kernel: Loading compiled-in X.509 certificates
Jan 30 13:55:09.986005 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:55:09.986017 kernel: Key type .fscrypt registered
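The rtc_cmos line above prints both the human-readable time and the raw epoch it programmed, and the two are easy to cross-check, which is handy when debugging drift between the RTC, kvm-clock, and the audit timestamps. A one-off check in Python:

    from datetime import datetime, timezone

    # Epoch from "setting system clock to 2025-01-30T13:55:09 UTC (1738245309)".
    print(datetime.fromtimestamp(1738245309, tz=timezone.utc).isoformat())
    # -> 2025-01-30T13:55:09+00:00, matching the rtc_cmos line above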
Jan 30 13:55:09.986029 kernel: Key type fscrypt-provisioning registered
Jan 30 13:55:09.986042 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:55:09.986059 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:55:09.986072 kernel: ima: No architecture policies found
Jan 30 13:55:09.986106 kernel: clk: Disabling unused clocks
Jan 30 13:55:09.986119 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:55:09.987488 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:55:09.987537 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:55:09.987550 kernel: Run /init as init process
Jan 30 13:55:09.987560 kernel: with arguments:
Jan 30 13:55:09.987572 kernel: /init
Jan 30 13:55:09.987590 kernel: with environment:
Jan 30 13:55:09.987604 kernel: HOME=/
Jan 30 13:55:09.987613 kernel: TERM=linux
Jan 30 13:55:09.987622 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:55:09.987636 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:55:09.987648 systemd[1]: Detected virtualization kvm.
Jan 30 13:55:09.987658 systemd[1]: Detected architecture x86-64.
Jan 30 13:55:09.987667 systemd[1]: Running in initrd.
Jan 30 13:55:09.987679 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:55:09.987689 systemd[1]: Hostname set to .
Jan 30 13:55:09.987699 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:55:09.987708 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:55:09.987721 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:55:09.987731 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:55:09.987742 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:55:09.987752 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:55:09.987764 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:55:09.987774 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:55:09.987786 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:55:09.987795 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:55:09.987805 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:55:09.987815 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:55:09.987825 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:55:09.987837 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:55:09.987847 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:55:09.987859 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:55:09.987869 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:55:09.987879 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
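systemd's "Detected virtualization kvm" above comes from the same probe the systemd-detect-virt CLI exposes, so scripts that must branch on droplet-versus-bare-metal can reuse it rather than parsing DMI themselves. A sketch (assumes systemd is installed, as on any Flatcar box):

    import subprocess

    # Prints "kvm" on this droplet, "none" on bare metal; the command
    # exits non-zero when no virtualization is detected.
    result = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
    print(result.stdout.strip() or "none")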
Jan 30 13:55:09.987891 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:55:09.987901 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:55:09.987911 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:55:09.987921 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:55:09.987930 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:55:09.987940 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:55:09.987953 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:55:09.987969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:55:09.987983 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:55:09.988002 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:55:09.988016 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:55:09.988030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:55:09.988043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:55:09.988107 systemd-journald[182]: Collecting audit messages is disabled.
Jan 30 13:55:09.989312 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:55:09.989326 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:55:09.989337 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:55:09.989348 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:55:09.989370 systemd-journald[182]: Journal started
Jan 30 13:55:09.989393 systemd-journald[182]: Runtime Journal (/run/log/journal/3700573b64a645c69ed0ba6542fc07c0) is 4.9M, max 39.3M, 34.4M free.
Jan 30 13:55:09.987119 systemd-modules-load[183]: Inserted module 'overlay'
Jan 30 13:55:09.991199 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:55:10.021165 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:55:10.022176 kernel: Bridge firewalling registered
Jan 30 13:55:10.022121 systemd-modules-load[183]: Inserted module 'br_netfilter'
Jan 30 13:55:10.025696 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:55:10.027628 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:55:10.036357 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:55:10.039026 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:55:10.052510 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:55:10.053407 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:55:10.059372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:55:10.075346 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:55:10.080509 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:55:10.089480 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:55:10.090980 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:55:10.092315 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:55:10.098563 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:55:10.131049 systemd-resolved[216]: Positive Trust Anchors:
Jan 30 13:55:10.131067 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:55:10.131103 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:55:10.137857 dracut-cmdline[220]: dracut-dracut-053
Jan 30 13:55:10.138606 systemd-resolved[216]: Defaulting to hostname 'linux'.
Jan 30 13:55:10.139978 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:55:10.141847 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:55:10.140965 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:55:10.243216 kernel: SCSI subsystem initialized
Jan 30 13:55:10.253169 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:55:10.266245 kernel: iscsi: registered transport (tcp)
Jan 30 13:55:10.290199 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:55:10.290275 kernel: QLogic iSCSI HBA Driver
Jan 30 13:55:10.351111 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:55:10.357430 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:55:10.400395 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:55:10.400473 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:55:10.400494 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:55:10.447250 kernel: raid6: avx2x4 gen() 16091 MB/s
Jan 30 13:55:10.464208 kernel: raid6: avx2x2 gen() 13927 MB/s
Jan 30 13:55:10.481400 kernel: raid6: avx2x1 gen() 12115 MB/s
Jan 30 13:55:10.481514 kernel: raid6: using algorithm avx2x4 gen() 16091 MB/s
Jan 30 13:55:10.499520 kernel: raid6: .... xor() 6994 MB/s, rmw enabled
Jan 30 13:55:10.499594 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:55:10.523197 kernel: xor: automatically using best checksumming function avx
Jan 30 13:55:10.704178 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:55:10.721012 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:55:10.728539 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:55:10.750366 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Jan 30 13:55:10.756375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:55:10.765328 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:55:10.786990 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Jan 30 13:55:10.829578 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:55:10.836465 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:55:10.918706 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:55:10.930938 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:55:10.966871 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:55:10.971171 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:55:10.972619 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:55:10.974107 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:55:10.980521 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:55:11.022229 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:55:11.048779 kernel: scsi host0: Virtio SCSI HBA
Jan 30 13:55:11.048937 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 30 13:55:11.104895 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 13:55:11.105105 kernel: ACPI: bus type USB registered
Jan 30 13:55:11.105145 kernel: usbcore: registered new interface driver usbfs
Jan 30 13:55:11.105160 kernel: usbcore: registered new interface driver hub
Jan 30 13:55:11.105172 kernel: usbcore: registered new device driver usb
Jan 30 13:55:11.105183 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:55:11.105195 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:55:11.105222 kernel: GPT:9289727 != 125829119
Jan 30 13:55:11.105234 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:55:11.105246 kernel: GPT:9289727 != 125829119
Jan 30 13:55:11.105262 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:55:11.105274 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:55:11.105285 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 30 13:55:11.129351 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jan 30 13:55:11.132494 kernel: libata version 3.00 loaded.
Jan 30 13:55:11.157410 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 30 13:55:11.236049 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:55:11.236156 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:55:11.236184 kernel: scsi host1: ata_piix
Jan 30 13:55:11.236489 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (452)
Jan 30 13:55:11.236514 kernel: scsi host2: ata_piix
Jan 30 13:55:11.236727 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 30 13:55:11.236768 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 30 13:55:11.236790 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460)
Jan 30 13:55:11.180884 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
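The GPT complaints above ("Alternate GPT header not at the end of the disk", 9289727 != 125829119) are the classic signature of an image built for a smaller disk and then attached to a 60 GiB virtual drive: the backup header still sits at the old position. The disk-uuid step below rewrites the headers; done by hand, the usual fix is sgdisk's relocate option (a sketch; device name taken from the log, sgdisk from the gptfdisk package assumed installed):

    import subprocess

    # "sgdisk -e" moves the backup GPT header and entries to the true end
    # of the disk, which is exactly what the kernel is asking for above.
    subprocess.run(["sgdisk", "-e", "/dev/vda"], check=True)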
Jan 30 13:55:11.186535 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:55:11.189266 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:55:11.190319 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:55:11.194732 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:55:11.195291 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:55:11.201673 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:55:11.260041 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:55:11.285416 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:55:11.327341 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 30 13:55:11.327758 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 30 13:55:11.327945 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 30 13:55:11.328155 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 30 13:55:11.328350 kernel: hub 1-0:1.0: USB hub found
Jan 30 13:55:11.328658 kernel: hub 1-0:1.0: 2 ports detected
Jan 30 13:55:11.326346 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:55:11.329098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:55:11.338314 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:55:11.347548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:55:11.354623 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:55:11.357460 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:55:11.365855 disk-uuid[532]: Primary Header is updated.
Jan 30 13:55:11.365855 disk-uuid[532]: Secondary Entries is updated.
Jan 30 13:55:11.365855 disk-uuid[532]: Secondary Header is updated.
Jan 30 13:55:11.380257 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:55:11.393235 kernel: GPT:disk_guids don't match.
Jan 30 13:55:11.393379 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:55:11.393404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:55:11.406879 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:55:12.402249 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:55:12.403016 disk-uuid[533]: The operation has completed successfully.
Jan 30 13:55:12.455122 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:55:12.455315 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:55:12.481489 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:55:12.487163 sh[563]: Success
Jan 30 13:55:12.507182 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:55:12.569330 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:55:12.585915 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:55:12.587570 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
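verity-setup has just built /dev/mapper/usr: Flatcar's /usr partition is read-only dm-verity, with the expected root hash passed on the kernel command line as verity.usrhash=. A hedged sketch of the equivalent manual mapping (veritysetup ships with cryptsetup; Flatcar's real unit also supplies offset and size options for its combined data-plus-hash partition layout, which are omitted here, so this is the shape of the call rather than a drop-in command):

    import subprocess

    PART = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"
    ROOT_HASH = "befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681"

    # veritysetup open <data_dev> <name> <hash_dev> <root_hash>; any block
    # whose hash no longer matches the tree makes reads fail rather than
    # return tampered data.
    subprocess.run(["veritysetup", "open", PART, "usr", PART, ROOT_HASH],
                   check=True)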
Jan 30 13:55:12.616365 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:55:12.616467 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:55:12.616491 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:55:12.616610 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:55:12.617318 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:55:12.629311 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:55:12.630854 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:55:12.637455 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:55:12.641475 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:55:12.658025 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:55:12.658186 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:55:12.658203 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:55:12.665198 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:55:12.682213 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:55:12.682713 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:55:12.692749 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:55:12.701609 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:55:12.828649 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:55:12.846579 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:55:12.868508 ignition[658]: Ignition 2.19.0
Jan 30 13:55:12.869710 ignition[658]: Stage: fetch-offline
Jan 30 13:55:12.870319 ignition[658]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:55:12.870337 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:55:12.871738 ignition[658]: parsed url from cmdline: ""
Jan 30 13:55:12.871749 ignition[658]: no config URL provided
Jan 30 13:55:12.871763 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:55:12.871784 ignition[658]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:55:12.871792 ignition[658]: failed to fetch config: resource requires networking
Jan 30 13:55:12.872068 ignition[658]: Ignition finished successfully
Jan 30 13:55:12.875238 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:55:12.889382 systemd-networkd[753]: lo: Link UP
Jan 30 13:55:12.889400 systemd-networkd[753]: lo: Gained carrier
Jan 30 13:55:12.892976 systemd-networkd[753]: Enumeration completed
Jan 30 13:55:12.893674 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 13:55:12.893683 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 30 13:55:12.895603 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:55:12.895609 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:55:12.896350 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:55:12.896677 systemd-networkd[753]: eth0: Link UP
Jan 30 13:55:12.896685 systemd-networkd[753]: eth0: Gained carrier
Jan 30 13:55:12.896702 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 13:55:12.897067 systemd[1]: Reached target network.target - Network.
Jan 30 13:55:12.901566 systemd-networkd[753]: eth1: Link UP
Jan 30 13:55:12.901571 systemd-networkd[753]: eth1: Gained carrier
Jan 30 13:55:12.901587 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:55:12.904616 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:55:12.919232 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.17/20 acquired from 169.254.169.253
Jan 30 13:55:12.923479 systemd-networkd[753]: eth0: DHCPv4 address 64.23.157.134/20, gateway 64.23.144.1 acquired from 169.254.169.253
Jan 30 13:55:12.951569 ignition[756]: Ignition 2.19.0
Jan 30 13:55:12.952420 ignition[756]: Stage: fetch
Jan 30 13:55:12.953325 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:55:12.953895 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:55:12.954927 ignition[756]: parsed url from cmdline: ""
Jan 30 13:55:12.954933 ignition[756]: no config URL provided
Jan 30 13:55:12.954942 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:55:12.954954 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:55:12.954985 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 30 13:55:12.986445 ignition[756]: GET result: OK
Jan 30 13:55:12.986680 ignition[756]: parsing config with SHA512: 6fa2f2ee69a7ffc01efa1b48409f654c863b1fc0fc9ce7b95580a2531c4bacf6cd1ed0aeeebb6f7f244156f460f6b73ffd7126a9243eaf522a83318b7d554015
Jan 30 13:55:12.996650 unknown[756]: fetched base config from "system"
Jan 30 13:55:12.997587 ignition[756]: fetch: fetch complete
Jan 30 13:55:12.996692 unknown[756]: fetched base config from "system"
Jan 30 13:55:12.997603 ignition[756]: fetch: fetch passed
Jan 30 13:55:12.996704 unknown[756]: fetched user config from "digitalocean"
Jan 30 13:55:12.997738 ignition[756]: Ignition finished successfully
Jan 30 13:55:12.999676 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:55:13.017352 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:55:13.037924 ignition[764]: Ignition 2.19.0
Jan 30 13:55:13.037944 ignition[764]: Stage: kargs
Jan 30 13:55:13.038454 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:55:13.038478 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:55:13.040431 ignition[764]: kargs: kargs passed
Jan 30 13:55:13.040528 ignition[764]: Ignition finished successfully
Jan 30 13:55:13.042288 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:55:13.048507 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
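The fetch stage above pulled the user-provided Ignition config from DigitalOcean's link-local metadata service; the same endpoint can be read by any process on the droplet, which is useful when checking what config Ignition actually saw. A minimal sketch (only works from inside a droplet, since the address is not routable elsewhere):

    import urllib.request

    # Same URL as the "GET http://...user-data: attempt #1" line above.
    URL = "http://169.254.169.254/metadata/v1/user-data"
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(resp.read().decode())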
Jan 30 13:55:13.084948 ignition[771]: Ignition 2.19.0
Jan 30 13:55:13.085931 ignition[771]: Stage: disks
Jan 30 13:55:13.086409 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:55:13.086426 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:55:13.091857 ignition[771]: disks: disks passed
Jan 30 13:55:13.091990 ignition[771]: Ignition finished successfully
Jan 30 13:55:13.093509 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:55:13.095009 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:55:13.095635 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:55:13.096578 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:55:13.097678 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:55:13.098650 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:55:13.104506 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:55:13.136631 systemd-fsck[780]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:55:13.141566 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:55:13.148408 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:55:13.288218 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:55:13.288869 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:55:13.290379 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:55:13.301415 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:55:13.304322 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:55:13.308393 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 30 13:55:13.315427 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:55:13.324699 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (788)
Jan 30 13:55:13.324768 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:55:13.324785 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:55:13.324799 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:55:13.317404 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:55:13.317457 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:55:13.332680 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:55:13.331838 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:55:13.338390 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:55:13.345594 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:55:13.410569 coreos-metadata[791]: Jan 30 13:55:13.410 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:13.423208 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:55:13.425170 coreos-metadata[790]: Jan 30 13:55:13.424 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:13.428271 coreos-metadata[791]: Jan 30 13:55:13.426 INFO Fetch successful Jan 30 13:55:13.435818 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:55:13.438151 coreos-metadata[791]: Jan 30 13:55:13.436 INFO wrote hostname ci-4081.3.0-b-c9e031af59 to /sysroot/etc/hostname Jan 30 13:55:13.439339 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:55:13.443839 coreos-metadata[790]: Jan 30 13:55:13.441 INFO Fetch successful Jan 30 13:55:13.451919 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 30 13:55:13.452469 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 30 13:55:13.455325 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:55:13.461550 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:55:13.585462 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:55:13.589345 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:55:13.591366 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:55:13.617526 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:55:13.618428 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:13.639713 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:55:13.660895 ignition[909]: INFO : Ignition 2.19.0 Jan 30 13:55:13.660895 ignition[909]: INFO : Stage: mount Jan 30 13:55:13.662015 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:13.662015 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:13.663475 ignition[909]: INFO : mount: mount passed Jan 30 13:55:13.664006 ignition[909]: INFO : Ignition finished successfully Jan 30 13:55:13.665957 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:55:13.670333 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:55:13.690645 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:55:13.701193 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (921) Jan 30 13:55:13.704511 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:55:13.704584 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:55:13.704616 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:55:13.710164 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:55:13.713023 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:55:13.741375 ignition[937]: INFO : Ignition 2.19.0 Jan 30 13:55:13.741375 ignition[937]: INFO : Stage: files Jan 30 13:55:13.742853 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:13.742853 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:13.742853 ignition[937]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:55:13.745386 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:55:13.745386 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:55:13.747365 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:55:13.748295 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:55:13.748295 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:55:13.748089 unknown[937]: wrote ssh authorized keys file for user: core Jan 30 13:55:13.751122 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:55:13.751122 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:55:13.814048 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:55:13.967716 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:55:13.967716 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:55:13.969831 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:55:14.184699 systemd-networkd[753]: eth1: Gained IPv6LL Jan 30 13:55:14.248521 systemd-networkd[753]: eth0: Gained IPv6LL Jan 30 13:55:14.494867 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:55:14.807156 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:55:14.807156 ignition[937]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:55:14.809003 ignition[937]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:55:14.809003 ignition[937]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:55:14.809003 ignition[937]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:55:14.809003 ignition[937]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:55:14.811828 ignition[937]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:55:14.811828 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:55:14.811828 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:55:14.811828 ignition[937]: INFO : files: files passed Jan 30 13:55:14.811828 ignition[937]: INFO : Ignition finished successfully Jan 30 13:55:14.811971 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:55:14.826307 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:55:14.830467 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:55:14.833461 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:55:14.833645 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:55:14.863672 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:55:14.863672 initrd-setup-root-after-ignition[966]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:55:14.865839 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:55:14.868630 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:55:14.869709 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:55:14.876401 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:55:14.909830 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jan 30 13:55:14.909989 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:55:14.911705 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:55:14.912744 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:55:14.913766 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:55:14.930860 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:55:14.952775 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:55:14.959550 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:55:14.975792 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:55:14.976396 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:55:14.977487 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:55:14.978352 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:55:14.978514 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:55:14.979437 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:55:14.980292 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:55:14.980899 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:55:14.981573 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:55:14.982645 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:55:14.983451 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:55:14.984243 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:55:14.985098 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:55:14.985991 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:55:14.987020 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:55:14.987642 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:55:14.987884 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:55:14.989071 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:55:14.990064 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:55:14.990914 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:55:14.991083 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:55:14.991806 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:55:14.991984 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:55:14.993013 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:55:14.993226 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:55:14.994266 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:55:14.994515 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:55:14.995520 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:55:14.995641 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 30 13:55:15.007807 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:55:15.009498 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:55:15.009788 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:55:15.018768 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:55:15.019869 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:55:15.020172 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:55:15.023322 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:55:15.024074 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:55:15.035776 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:55:15.038820 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:55:15.045947 ignition[990]: INFO : Ignition 2.19.0 Jan 30 13:55:15.045947 ignition[990]: INFO : Stage: umount Jan 30 13:55:15.045947 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:55:15.045947 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:55:15.051478 ignition[990]: INFO : umount: umount passed Jan 30 13:55:15.051478 ignition[990]: INFO : Ignition finished successfully Jan 30 13:55:15.050773 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:55:15.050986 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:55:15.058344 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:55:15.058440 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:55:15.059022 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:55:15.059098 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:55:15.062583 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:55:15.062692 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:55:15.063796 systemd[1]: Stopped target network.target - Network. Jan 30 13:55:15.065281 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:55:15.065375 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:55:15.065814 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:55:15.066327 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:55:15.067248 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:55:15.067802 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:55:15.068228 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:55:15.068795 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:55:15.068877 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:55:15.069631 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:55:15.069701 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:55:15.070637 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:55:15.070741 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:55:15.071325 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:55:15.071382 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jan 30 13:55:15.072224 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:55:15.073606 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:55:15.077212 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:55:15.077350 systemd-networkd[753]: eth1: DHCPv6 lease lost Jan 30 13:55:15.078025 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:55:15.078374 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:55:15.081248 systemd-networkd[753]: eth0: DHCPv6 lease lost Jan 30 13:55:15.082743 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:55:15.083231 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:55:15.084017 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:55:15.084207 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:55:15.087258 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:55:15.087377 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:55:15.088236 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:55:15.088337 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:55:15.098498 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:55:15.100496 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:55:15.100638 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:55:15.101262 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:55:15.101325 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:55:15.101965 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:55:15.102022 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:55:15.102859 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:55:15.102925 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:55:15.104074 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:55:15.123631 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:55:15.124923 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:55:15.125992 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:55:15.126279 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:55:15.128456 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:55:15.128526 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:55:15.129709 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:55:15.129757 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:55:15.130650 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:55:15.130716 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:55:15.131928 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:55:15.131988 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:55:15.132763 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 13:55:15.132840 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:55:15.139490 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:55:15.140045 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:55:15.140183 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:55:15.140675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:55:15.140735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:15.150321 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:55:15.151191 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:55:15.153368 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:55:15.160556 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:55:15.173289 systemd[1]: Switching root. Jan 30 13:55:15.211086 systemd-journald[182]: Journal stopped Jan 30 13:55:16.527103 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Jan 30 13:55:16.528260 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:55:16.528289 kernel: SELinux: policy capability open_perms=1 Jan 30 13:55:16.528301 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:55:16.528313 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:55:16.528358 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:55:16.528372 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:55:16.528384 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:55:16.528400 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:55:16.528413 kernel: audit: type=1403 audit(1738245315.401:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:55:16.528428 systemd[1]: Successfully loaded SELinux policy in 42.125ms. Jan 30 13:55:16.528453 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.411ms. Jan 30 13:55:16.528472 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:55:16.528490 systemd[1]: Detected virtualization kvm. Jan 30 13:55:16.528504 systemd[1]: Detected architecture x86-64. Jan 30 13:55:16.528517 systemd[1]: Detected first boot. Jan 30 13:55:16.528530 systemd[1]: Hostname set to <ci-4081.3.0-b-c9e031af59>. Jan 30 13:55:16.528543 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:55:16.528556 zram_generator::config[1033]: No configuration found. Jan 30 13:55:16.528570 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:55:16.528582 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:55:16.528600 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:55:16.528614 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:55:16.528629 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:55:16.528642 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:55:16.528655 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:55:16.528667 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:55:16.528680 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:55:16.528693 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:55:16.528713 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:55:16.528726 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:55:16.528742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:55:16.528760 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:55:16.528779 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:55:16.528799 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:55:16.528818 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:55:16.528839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:55:16.528860 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:55:16.528888 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:55:16.528910 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:55:16.528931 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:55:16.528953 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:55:16.528976 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:55:16.528999 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:55:16.529028 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:55:16.529058 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:55:16.529081 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:55:16.529104 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:55:16.529126 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:55:16.529164 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:55:16.529189 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:55:16.530021 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:55:16.530078 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:55:16.530098 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:55:16.531214 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:55:16.531257 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:55:16.531280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:16.531303 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:55:16.531322 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jan 30 13:55:16.531342 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:55:16.531364 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:55:16.531385 systemd[1]: Reached target machines.target - Containers. Jan 30 13:55:16.531426 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:55:16.531448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:16.531470 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:55:16.531493 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:55:16.531516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:55:16.531537 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:55:16.531560 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:55:16.531581 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:55:16.531603 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:55:16.531635 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:55:16.531659 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:55:16.531681 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:55:16.531704 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:55:16.531728 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:55:16.531750 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:55:16.531771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:55:16.531794 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:55:16.531816 kernel: loop: module loaded Jan 30 13:55:16.531847 kernel: fuse: init (API version 7.39) Jan 30 13:55:16.531868 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:55:16.531892 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:55:16.531921 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:55:16.531943 systemd[1]: Stopped verity-setup.service. Jan 30 13:55:16.531964 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:16.531987 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:55:16.532010 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:55:16.532042 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:55:16.532063 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:55:16.532086 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:55:16.532110 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:55:16.532148 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
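
(Annotation: each modprobe@<module>.service start above is an instance of a single systemd template unit, expanded once per module name. Roughly, the upstream template looks like the following; this is a paraphrase from memory of the stock systemd unit, not read off this system.)

# modprobe@.service, illustrative paraphrase of the upstream systemd template
[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no
Before=sysinit.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=-/usr/sbin/modprobe -abq %i
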
Jan 30 13:55:16.532180 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:55:16.532203 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:55:16.532226 kernel: ACPI: bus type drm_connector registered Jan 30 13:55:16.532257 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:55:16.532280 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:55:16.532303 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:55:16.532332 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:55:16.532357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:55:16.532377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:55:16.532401 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:55:16.532423 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:55:16.532446 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:55:16.532467 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:55:16.532491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:55:16.532522 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:55:16.532601 systemd-journald[1106]: Collecting audit messages is disabled. Jan 30 13:55:16.532654 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:55:16.532679 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:55:16.532704 systemd-journald[1106]: Journal started Jan 30 13:55:16.532769 systemd-journald[1106]: Runtime Journal (/run/log/journal/3700573b64a645c69ed0ba6542fc07c0) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:55:16.103121 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:55:16.125117 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:55:16.125720 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:55:16.541189 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:55:16.560186 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:55:16.565185 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:55:16.565325 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:55:16.572786 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:55:16.583185 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:55:16.598166 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:55:16.598297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:16.607303 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:55:16.607438 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:55:16.619341 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 30 13:55:16.621228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:55:16.631220 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:55:16.641172 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:55:16.641277 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:55:16.642429 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:55:16.643117 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:55:16.643934 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:55:16.645871 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:55:16.679340 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:55:16.708027 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:55:16.719412 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:55:16.725622 kernel: loop0: detected capacity change from 0 to 142488 Jan 30 13:55:16.723430 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:55:16.733981 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:55:16.756352 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:55:16.773858 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:55:16.769657 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:55:16.810570 kernel: loop1: detected capacity change from 0 to 8 Jan 30 13:55:16.834668 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:55:16.842788 systemd-journald[1106]: Time spent on flushing to /var/log/journal/3700573b64a645c69ed0ba6542fc07c0 is 51.068ms for 997 entries. Jan 30 13:55:16.842788 systemd-journald[1106]: System Journal (/var/log/journal/3700573b64a645c69ed0ba6542fc07c0) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:55:16.931314 systemd-journald[1106]: Received client request to flush runtime journal. Jan 30 13:55:16.931391 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 13:55:16.853502 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:55:16.859721 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:55:16.873404 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:55:16.919213 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:55:16.938499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:55:16.940240 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:55:16.973206 kernel: loop3: detected capacity change from 0 to 205544 Jan 30 13:55:17.014867 kernel: loop4: detected capacity change from 0 to 142488 Jan 30 13:55:17.038375 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jan 30 13:55:17.038396 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. 
Jan 30 13:55:17.050353 kernel: loop5: detected capacity change from 0 to 8 Jan 30 13:55:17.053567 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 13:55:17.061069 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:55:17.086510 kernel: loop7: detected capacity change from 0 to 205544 Jan 30 13:55:17.100959 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 30 13:55:17.101637 (sd-merge)[1179]: Merged extensions into '/usr'. Jan 30 13:55:17.122342 systemd[1]: Reloading requested from client PID 1135 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:55:17.122374 systemd[1]: Reloading... Jan 30 13:55:17.274416 zram_generator::config[1203]: No configuration found. Jan 30 13:55:17.458996 ldconfig[1131]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:55:17.558442 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:17.645408 systemd[1]: Reloading finished in 522 ms. Jan 30 13:55:17.683266 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:55:17.684544 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:55:17.702679 systemd[1]: Starting ensure-sysext.service... Jan 30 13:55:17.716446 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:55:17.733926 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:55:17.734183 systemd[1]: Reloading... Jan 30 13:55:17.802975 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:55:17.803598 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:55:17.807391 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:55:17.809376 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 30 13:55:17.809531 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 30 13:55:17.821572 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:55:17.821593 systemd-tmpfiles[1250]: Skipping /boot Jan 30 13:55:17.861018 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:55:17.861279 systemd-tmpfiles[1250]: Skipping /boot Jan 30 13:55:17.948305 zram_generator::config[1280]: No configuration found. Jan 30 13:55:18.121012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:18.181972 systemd[1]: Reloading finished in 447 ms. Jan 30 13:55:18.203113 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:55:18.211341 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:55:18.226573 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:55:18.230453 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
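
(Annotation: the (sd-merge) lines above are systemd-sysext overlaying the four extension images onto /usr; the loop devices in the surrounding kernel lines are those images being attached. A sysext image is merged only if the extension-release file embedded in it matches the host's os-release. For the kubernetes image that file would look roughly like this, an assumed example following the sysext convention, not extracted from the image.)

# usr/lib/extension-release.d/extension-release.kubernetes, inside the image
# assumed example; ID=_any would skip the OS-ID match entirely
ID=flatcar
SYSEXT_LEVEL=1.0
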
Jan 30 13:55:18.242479 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:55:18.255329 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:55:18.263719 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:55:18.279156 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:55:18.287820 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.288097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:18.303215 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:55:18.307588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:55:18.311926 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:55:18.312622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:18.312780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.320550 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:55:18.322441 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.322643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:18.322870 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:18.322967 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.327206 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.327458 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:18.334832 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:55:18.336935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:18.337260 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.347215 systemd[1]: Finished ensure-sysext.service. Jan 30 13:55:18.370649 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:55:18.381261 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:55:18.382409 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:55:18.387068 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Jan 30 13:55:18.411949 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:55:18.413485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 30 13:55:18.415653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:55:18.421719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:55:18.421943 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:55:18.423045 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:55:18.423300 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:55:18.433900 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:55:18.434923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:55:18.436907 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:55:18.459441 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:55:18.459938 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:55:18.460040 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:55:18.462546 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:55:18.467064 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:55:18.474410 augenrules[1362]: No rules Jan 30 13:55:18.476396 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:55:18.519502 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:55:18.562376 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:55:18.609373 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 30 13:55:18.610001 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.611008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:55:18.616551 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:55:18.631552 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:55:18.641546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:55:18.642383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:55:18.642466 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:55:18.642493 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:55:18.730655 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 13:55:18.734603 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 13:55:18.769994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:55:18.770380 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 30 13:55:18.771644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:55:18.772503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:55:18.775087 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:55:18.775417 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:55:18.780770 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:55:18.780932 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:55:18.797317 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:55:18.831732 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:55:18.832620 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:55:18.852265 systemd-resolved[1327]: Positive Trust Anchors: Jan 30 13:55:18.852696 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:55:18.852826 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:55:18.855398 systemd-networkd[1361]: lo: Link UP Jan 30 13:55:18.855410 systemd-networkd[1361]: lo: Gained carrier Jan 30 13:55:18.860532 systemd-networkd[1361]: Enumeration completed Jan 30 13:55:18.860764 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:55:18.864655 systemd-resolved[1327]: Using system hostname 'ci-4081.3.0-b-c9e031af59'. Jan 30 13:55:18.865211 systemd-networkd[1361]: eth0: Configuring with /run/systemd/network/10-56:c2:f1:a1:65:e9.network. Jan 30 13:55:18.867304 systemd-networkd[1361]: eth1: Configuring with /run/systemd/network/10-1e:a1:ee:70:66:25.network. Jan 30 13:55:18.868237 systemd-networkd[1361]: eth0: Link UP Jan 30 13:55:18.868249 systemd-networkd[1361]: eth0: Gained carrier Jan 30 13:55:18.870574 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:55:18.872072 systemd-networkd[1361]: eth1: Link UP Jan 30 13:55:18.872084 systemd-networkd[1361]: eth1: Gained carrier Jan 30 13:55:18.876062 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1363) Jan 30 13:55:18.875493 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:55:18.876206 systemd[1]: Reached target network.target - Network. Jan 30 13:55:18.876672 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:55:18.880286 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 30 13:55:18.969868 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
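
(Annotation: unlike the initrd earlier, which matched the generic yy-digitalocean/zz-default units, the real root above configures each NIC from a unit generated into /run/systemd/network and keyed to that NIC's MAC address. Only the file names and MACs appear in the log; a plausible shape for such a generated unit is the following assumed sketch.)

# /run/systemd/network/10-56:c2:f1:a1:65:e9.network, assumed content
[Match]
MACAddress=56:c2:f1:a1:65:e9

[Network]
DHCP=ipv4
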
Jan 30 13:55:18.977226 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:55:18.979573 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:55:18.992430 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 13:55:19.022493 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:55:19.026471 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:55:19.055175 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:55:19.101246 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:55:19.115743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:19.137176 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 13:55:19.139252 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 13:55:19.149244 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:55:19.149386 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 13:55:19.149419 kernel: [drm] features: -context_init Jan 30 13:55:19.150398 kernel: [drm] number of scanouts: 1 Jan 30 13:55:19.150515 kernel: [drm] number of cap sets: 0 Jan 30 13:55:19.155188 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 13:55:19.164862 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 13:55:19.164956 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:55:19.214200 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 13:55:19.217833 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:55:19.220426 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:19.271798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:19.285408 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:55:19.287272 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:19.318603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:55:19.337198 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:55:19.369892 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:55:19.377558 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:55:19.412583 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:55:19.414629 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:55:19.452404 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:55:19.455545 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:55:19.457699 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:55:19.458250 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:55:19.458494 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:55:19.458989 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jan 30 13:55:19.459370 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:55:19.459508 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:55:19.459635 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:55:19.459679 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:55:19.459790 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:55:19.461166 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:55:19.465870 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:55:19.476611 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:55:19.482929 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:55:19.486816 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:55:19.491693 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:55:19.492557 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:55:19.493412 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:55:19.493457 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:55:19.502540 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:55:19.509694 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:55:19.517510 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:55:19.530574 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:55:19.540400 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:55:19.549836 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:55:19.552422 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:55:19.556848 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:55:19.566545 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:55:19.583805 jq[1441]: false Jan 30 13:55:19.578245 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:55:19.599442 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:55:19.610390 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:55:19.614110 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:55:19.624629 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:55:19.628517 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:55:19.631816 coreos-metadata[1439]: Jan 30 13:55:19.631 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:19.636107 dbus-daemon[1440]: [system] SELinux support is enabled Jan 30 13:55:19.638398 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 30 13:55:19.639980 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:55:19.646807 coreos-metadata[1439]: Jan 30 13:55:19.646 INFO Fetch successful Jan 30 13:55:19.650236 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:55:19.664794 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:55:19.666425 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:55:19.669809 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:55:19.671256 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:55:19.704938 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:55:19.705034 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:55:19.707955 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:55:19.708053 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 30 13:55:19.708083 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:55:19.726204 jq[1451]: true Jan 30 13:55:19.730259 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:55:19.730994 extend-filesystems[1442]: Found loop4 Jan 30 13:55:19.730994 extend-filesystems[1442]: Found loop5 Jan 30 13:55:19.730994 extend-filesystems[1442]: Found loop6 Jan 30 13:55:19.730994 extend-filesystems[1442]: Found loop7 Jan 30 13:55:19.730994 extend-filesystems[1442]: Found vda Jan 30 13:55:19.730994 extend-filesystems[1442]: Found vda1 Jan 30 13:55:19.730994 extend-filesystems[1442]: Found vda2 Jan 30 13:55:19.730994 extend-filesystems[1442]: Found vda3 Jan 30 13:55:19.730994 extend-filesystems[1442]: Found usr Jan 30 13:55:19.730994 extend-filesystems[1442]: Found vda4 Jan 30 13:55:19.730994 extend-filesystems[1442]: Found vda6 Jan 30 13:55:19.730994 extend-filesystems[1442]: Found vda7 Jan 30 13:55:19.730994 extend-filesystems[1442]: Found vda9 Jan 30 13:55:19.730994 extend-filesystems[1442]: Checking size of /dev/vda9 Jan 30 13:55:19.837210 extend-filesystems[1442]: Resized partition /dev/vda9 Jan 30 13:55:19.748334 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:55:19.849727 tar[1456]: linux-amd64/helm Jan 30 13:55:19.851417 update_engine[1450]: I20250130 13:55:19.757715 1450 main.cc:92] Flatcar Update Engine starting Jan 30 13:55:19.851417 update_engine[1450]: I20250130 13:55:19.794260 1450 update_check_scheduler.cc:74] Next update check in 2m26s Jan 30 13:55:19.851747 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:55:19.857100 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 13:55:19.857149 jq[1476]: true Jan 30 13:55:19.748658 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:55:19.785797 systemd[1]: Started update-engine.service - Update Engine. 
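[Editor's note] The update_engine entries above show Flatcar's update client starting and immediately scheduling its first check ("Next update check in 2m26s"). Below is a minimal sketch of the idea behind a jittered first check, so a fleet of droplets does not poll the update server in lockstep; the interval bounds are illustrative assumptions, not values taken from this log.

    import random

    # Hypothetical bounds for the first update check; the real update_engine
    # derives its schedule from its own policy, not from these constants.
    FIRST_CHECK_MIN_S = 60
    FIRST_CHECK_MAX_S = 300

    def first_check_delay() -> int:
        """Pick a jittered delay so many machines do not check at once."""
        return random.randint(FIRST_CHECK_MIN_S, FIRST_CHECK_MAX_S)

    print(f"Next update check in {first_check_delay()}s")  # e.g. 146s, i.e. "2m26s"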
Jan 30 13:55:19.797598 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:55:19.823496 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:55:19.829961 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:55:19.870967 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1360) Jan 30 13:55:19.998048 systemd-logind[1449]: New seat seat0. Jan 30 13:55:20.003386 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:55:20.010845 systemd-networkd[1361]: eth1: Gained IPv6LL Jan 30 13:55:20.012184 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:55:20.028917 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 30 13:55:20.041354 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:55:20.042217 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:55:20.042252 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:55:20.043671 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:55:20.048974 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:55:20.061437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:20.074447 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:55:20.089535 systemd[1]: Starting sshkeys.service... Jan 30 13:55:20.135317 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:55:20.144812 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:55:20.153308 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 13:55:20.190296 extend-filesystems[1492]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:55:20.190296 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 13:55:20.190296 extend-filesystems[1492]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 13:55:20.208043 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Jan 30 13:55:20.208043 extend-filesystems[1442]: Found vdb Jan 30 13:55:20.208009 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:55:20.208589 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:55:20.261233 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:55:20.304750 coreos-metadata[1510]: Jan 30 13:55:20.303 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:55:20.316553 coreos-metadata[1510]: Jan 30 13:55:20.316 INFO Fetch successful Jan 30 13:55:20.319385 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:55:20.332311 systemd-networkd[1361]: eth0: Gained IPv6LL Jan 30 13:55:20.334110 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 30 13:55:20.337981 unknown[1510]: wrote ssh authorized keys file for user: core Jan 30 13:55:20.400810 update-ssh-keys[1530]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:55:20.404887 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
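[Editor's note] The extend-filesystems sequence above records an online grow of /dev/vda9: the ext4 filesystem is resized from 553472 to 15121403 4 KiB blocks while mounted on /. Converting those block counts (both taken straight from the log) into sizes shows the root filesystem growing from the small provisioning image to the full disk:

    BLOCK_SIZE = 4096                     # "(4k) blocks" per the resize output
    old_blocks, new_blocks = 553_472, 15_121_403

    def gib(blocks: int) -> float:
        """Convert a 4 KiB block count to GiB."""
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after:  {gib(new_blocks):.2f} GiB")  # ~57.68 GiB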
Jan 30 13:55:20.413434 systemd[1]: Finished sshkeys.service. Jan 30 13:55:20.457749 containerd[1461]: time="2025-01-30T13:55:20.457638989Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:55:20.559110 containerd[1461]: time="2025-01-30T13:55:20.558996655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.568034 containerd[1461]: time="2025-01-30T13:55:20.567939814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:20.568238 containerd[1461]: time="2025-01-30T13:55:20.568220355Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:55:20.568311 containerd[1461]: time="2025-01-30T13:55:20.568300448Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:55:20.568738 containerd[1461]: time="2025-01-30T13:55:20.568700365Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:55:20.570492 containerd[1461]: time="2025-01-30T13:55:20.570343255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.570971 containerd[1461]: time="2025-01-30T13:55:20.570848164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:20.570971 containerd[1461]: time="2025-01-30T13:55:20.570877863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.571660 containerd[1461]: time="2025-01-30T13:55:20.571477905Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:20.571660 containerd[1461]: time="2025-01-30T13:55:20.571607544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.571830 containerd[1461]: time="2025-01-30T13:55:20.571634063Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:20.571830 containerd[1461]: time="2025-01-30T13:55:20.571762329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.574446 containerd[1461]: time="2025-01-30T13:55:20.571968415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.574957 containerd[1461]: time="2025-01-30T13:55:20.574910995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:55:20.578415 containerd[1461]: time="2025-01-30T13:55:20.577645983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:55:20.578415 containerd[1461]: time="2025-01-30T13:55:20.577722864Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:55:20.578415 containerd[1461]: time="2025-01-30T13:55:20.578090524Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:55:20.578415 containerd[1461]: time="2025-01-30T13:55:20.578341360Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:55:20.587351 containerd[1461]: time="2025-01-30T13:55:20.587213941Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:55:20.590996 containerd[1461]: time="2025-01-30T13:55:20.589552188Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:55:20.590996 containerd[1461]: time="2025-01-30T13:55:20.589640158Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:55:20.590996 containerd[1461]: time="2025-01-30T13:55:20.590442894Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:55:20.590996 containerd[1461]: time="2025-01-30T13:55:20.590490424Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:55:20.590996 containerd[1461]: time="2025-01-30T13:55:20.590851184Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:55:20.593421 containerd[1461]: time="2025-01-30T13:55:20.593354167Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:55:20.595369 containerd[1461]: time="2025-01-30T13:55:20.595308350Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:55:20.595579 containerd[1461]: time="2025-01-30T13:55:20.595557749Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:55:20.595660 containerd[1461]: time="2025-01-30T13:55:20.595643866Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:55:20.595737 containerd[1461]: time="2025-01-30T13:55:20.595721373Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.595816 containerd[1461]: time="2025-01-30T13:55:20.595798860Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.595893 containerd[1461]: time="2025-01-30T13:55:20.595879499Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.596018 containerd[1461]: time="2025-01-30T13:55:20.595999159Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597228895Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597281185Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597301473Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597319976Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597358010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597384193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597406072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597457939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597478695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597499658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597523600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597568986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597596310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.597946 containerd[1461]: time="2025-01-30T13:55:20.597623678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.598650 containerd[1461]: time="2025-01-30T13:55:20.597645920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.598650 containerd[1461]: time="2025-01-30T13:55:20.597664724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.598650 containerd[1461]: time="2025-01-30T13:55:20.597689457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.598650 containerd[1461]: time="2025-01-30T13:55:20.597721989Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:55:20.598650 containerd[1461]: time="2025-01-30T13:55:20.597764274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.598650 containerd[1461]: time="2025-01-30T13:55:20.597783124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 30 13:55:20.598650 containerd[1461]: time="2025-01-30T13:55:20.597799952Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:55:20.601503 containerd[1461]: time="2025-01-30T13:55:20.599319942Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:55:20.601503 containerd[1461]: time="2025-01-30T13:55:20.599516409Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:55:20.601503 containerd[1461]: time="2025-01-30T13:55:20.599537013Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:55:20.601503 containerd[1461]: time="2025-01-30T13:55:20.599558767Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:55:20.601503 containerd[1461]: time="2025-01-30T13:55:20.599578424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.601503 containerd[1461]: time="2025-01-30T13:55:20.599601591Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:55:20.601503 containerd[1461]: time="2025-01-30T13:55:20.599619355Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:55:20.601503 containerd[1461]: time="2025-01-30T13:55:20.599634995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:55:20.603466 containerd[1461]: time="2025-01-30T13:55:20.602201951Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:55:20.603466 containerd[1461]: time="2025-01-30T13:55:20.602359922Z" level=info msg="Connect containerd service" Jan 30 13:55:20.603466 containerd[1461]: time="2025-01-30T13:55:20.602463132Z" level=info msg="using legacy CRI server" Jan 30 13:55:20.603466 containerd[1461]: time="2025-01-30T13:55:20.602479260Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:55:20.603466 containerd[1461]: time="2025-01-30T13:55:20.602731845Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:55:20.611020 containerd[1461]: time="2025-01-30T13:55:20.604041171Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:55:20.611020 containerd[1461]: time="2025-01-30T13:55:20.604809629Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:55:20.611020 containerd[1461]: time="2025-01-30T13:55:20.604899139Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:55:20.611020 containerd[1461]: time="2025-01-30T13:55:20.605038233Z" level=info msg="Start subscribing containerd event" Jan 30 13:55:20.611020 containerd[1461]: time="2025-01-30T13:55:20.605128061Z" level=info msg="Start recovering state" Jan 30 13:55:20.611020 containerd[1461]: time="2025-01-30T13:55:20.605286178Z" level=info msg="Start event monitor" Jan 30 13:55:20.611020 containerd[1461]: time="2025-01-30T13:55:20.605314322Z" level=info msg="Start snapshots syncer" Jan 30 13:55:20.611020 containerd[1461]: time="2025-01-30T13:55:20.605332808Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:55:20.611020 containerd[1461]: time="2025-01-30T13:55:20.605347319Z" level=info msg="Start streaming server" Jan 30 13:55:20.611020 containerd[1461]: time="2025-01-30T13:55:20.605462070Z" level=info msg="containerd successfully booted in 0.148934s" Jan 30 13:55:20.605625 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:55:20.639261 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:55:20.690282 sshd_keygen[1482]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:55:20.743353 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:55:20.760335 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:55:20.773306 systemd[1]: Started sshd@0-64.23.157.134:22-147.75.109.163:32944.service - OpenSSH per-connection server daemon (147.75.109.163:32944). 
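[Editor's note] One detail worth pulling out of the long CRI config dump above: the runc runtime is configured with Options:map[SystemdCgroup:true], and the kubelet's node config further down reports "CgroupDriver":"systemd". The two sides must agree on the cgroup driver or pod sandboxes fail to start. A toy consistency check over the two values as they appear in this log:

    # Values read from the containerd CRI dump and the kubelet nodeConfig in this log.
    containerd_systemd_cgroup = True    # Options:map[SystemdCgroup:true]
    kubelet_cgroup_driver = "systemd"   # "CgroupDriver":"systemd"

    assert (kubelet_cgroup_driver == "systemd") == containerd_systemd_cgroup, \
        "containerd and kubelet must agree on the cgroup driver"
    print("cgroup drivers consistent")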
Jan 30 13:55:20.789862 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:55:20.790219 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:55:20.806727 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:55:20.875178 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:55:20.891754 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:55:20.905755 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:55:20.910818 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:55:20.981850 sshd[1547]: Accepted publickey for core from 147.75.109.163 port 32944 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:20.988680 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:21.007952 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:55:21.022573 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:55:21.030150 systemd-logind[1449]: New session 1 of user core. Jan 30 13:55:21.067930 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:55:21.083634 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:55:21.104961 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:55:21.293062 systemd[1559]: Queued start job for default target default.target. Jan 30 13:55:21.299937 systemd[1559]: Created slice app.slice - User Application Slice. Jan 30 13:55:21.299985 systemd[1559]: Reached target paths.target - Paths. Jan 30 13:55:21.300009 systemd[1559]: Reached target timers.target - Timers. Jan 30 13:55:21.304591 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:55:21.346288 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:55:21.346496 systemd[1559]: Reached target sockets.target - Sockets. Jan 30 13:55:21.346523 systemd[1559]: Reached target basic.target - Basic System. Jan 30 13:55:21.346598 systemd[1559]: Reached target default.target - Main User Target. Jan 30 13:55:21.346650 systemd[1559]: Startup finished in 227ms. Jan 30 13:55:21.346847 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:55:21.357506 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:55:21.387940 tar[1456]: linux-amd64/LICENSE Jan 30 13:55:21.387940 tar[1456]: linux-amd64/README.md Jan 30 13:55:21.410112 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:55:21.440886 systemd[1]: Started sshd@1-64.23.157.134:22-147.75.109.163:32956.service - OpenSSH per-connection server daemon (147.75.109.163:32956). Jan 30 13:55:21.511895 sshd[1573]: Accepted publickey for core from 147.75.109.163 port 32956 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:21.515290 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:21.523359 systemd-logind[1449]: New session 2 of user core. Jan 30 13:55:21.527413 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:55:21.601962 sshd[1573]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:21.614553 systemd[1]: sshd@1-64.23.157.134:22-147.75.109.163:32956.service: Deactivated successfully. 
Jan 30 13:55:21.619647 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:55:21.624779 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:55:21.635349 systemd[1]: Started sshd@2-64.23.157.134:22-147.75.109.163:32962.service - OpenSSH per-connection server daemon (147.75.109.163:32962). Jan 30 13:55:21.639738 systemd-logind[1449]: Removed session 2. Jan 30 13:55:21.684185 sshd[1580]: Accepted publickey for core from 147.75.109.163 port 32962 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:21.686612 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:21.696071 systemd-logind[1449]: New session 3 of user core. Jan 30 13:55:21.704699 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:55:21.778356 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:21.787538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:21.791117 systemd[1]: sshd@2-64.23.157.134:22-147.75.109.163:32962.service: Deactivated successfully. Jan 30 13:55:21.800643 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:55:21.800884 (kubelet)[1588]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:21.805732 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:55:21.806942 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:55:21.811453 systemd[1]: Startup finished in 1.175s (kernel) + 5.701s (initrd) + 6.450s (userspace) = 13.327s. Jan 30 13:55:21.814690 systemd-logind[1449]: Removed session 3. Jan 30 13:55:22.542339 kubelet[1588]: E0130 13:55:22.542167 1588 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:22.546875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:22.547119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:55:22.547807 systemd[1]: kubelet.service: Consumed 1.241s CPU time. Jan 30 13:55:31.810660 systemd[1]: Started sshd@3-64.23.157.134:22-147.75.109.163:59818.service - OpenSSH per-connection server daemon (147.75.109.163:59818). Jan 30 13:55:31.859991 sshd[1603]: Accepted publickey for core from 147.75.109.163 port 59818 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:31.862581 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:31.868977 systemd-logind[1449]: New session 4 of user core. Jan 30 13:55:31.880478 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:55:31.948477 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:31.956373 systemd[1]: sshd@3-64.23.157.134:22-147.75.109.163:59818.service: Deactivated successfully. Jan 30 13:55:31.958399 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:55:31.960445 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:55:31.964573 systemd[1]: Started sshd@4-64.23.157.134:22-147.75.109.163:59820.service - OpenSSH per-connection server daemon (147.75.109.163:59820). 
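[Editor's note] The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory", status=1/FAILURE) is the normal state of a node that has booted but not yet joined a cluster: kubeadm writes that config during init/join, and until it exists the kubelet can only fail and let systemd retry (the "restart counter is at 1" line below follows about ten seconds later). A minimal sketch of the gating check; the path comes from the error message, the rest is illustrative:

    import sys
    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the error above

    def load_config() -> str:
        """Read the kubelet config, exiting non-zero if it is absent."""
        try:
            return CONFIG.read_text()
        except FileNotFoundError:
            # Mirrors the logged failure: a non-zero exit lets systemd's
            # Restart= policy retry once kubeadm has written the file.
            sys.exit(f"failed to load kubelet config file, path: {CONFIG}")

    if __name__ == "__main__":
        load_config()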
Jan 30 13:55:31.966515 systemd-logind[1449]: Removed session 4. Jan 30 13:55:32.016810 sshd[1610]: Accepted publickey for core from 147.75.109.163 port 59820 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:32.018989 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:32.025907 systemd-logind[1449]: New session 5 of user core. Jan 30 13:55:32.031442 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:55:32.088694 sshd[1610]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:32.099915 systemd[1]: sshd@4-64.23.157.134:22-147.75.109.163:59820.service: Deactivated successfully. Jan 30 13:55:32.102904 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:55:32.105442 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:55:32.110625 systemd[1]: Started sshd@5-64.23.157.134:22-147.75.109.163:59822.service - OpenSSH per-connection server daemon (147.75.109.163:59822). Jan 30 13:55:32.113282 systemd-logind[1449]: Removed session 5. Jan 30 13:55:32.165458 sshd[1617]: Accepted publickey for core from 147.75.109.163 port 59822 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:32.168108 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:32.176973 systemd-logind[1449]: New session 6 of user core. Jan 30 13:55:32.197560 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:55:32.266552 sshd[1617]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:32.284182 systemd[1]: sshd@5-64.23.157.134:22-147.75.109.163:59822.service: Deactivated successfully. Jan 30 13:55:32.287120 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:55:32.290468 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:55:32.295704 systemd[1]: Started sshd@6-64.23.157.134:22-147.75.109.163:59838.service - OpenSSH per-connection server daemon (147.75.109.163:59838). Jan 30 13:55:32.297512 systemd-logind[1449]: Removed session 6. Jan 30 13:55:32.356823 sshd[1624]: Accepted publickey for core from 147.75.109.163 port 59838 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:32.358844 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:32.365390 systemd-logind[1449]: New session 7 of user core. Jan 30 13:55:32.377483 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:55:32.453713 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:55:32.454766 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:32.473956 sudo[1627]: pam_unix(sudo:session): session closed for user root Jan 30 13:55:32.479863 sshd[1624]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:32.493901 systemd[1]: sshd@6-64.23.157.134:22-147.75.109.163:59838.service: Deactivated successfully. Jan 30 13:55:32.497742 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:55:32.502244 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:55:32.514694 systemd[1]: Started sshd@7-64.23.157.134:22-147.75.109.163:59840.service - OpenSSH per-connection server daemon (147.75.109.163:59840). Jan 30 13:55:32.517780 systemd-logind[1449]: Removed session 7. Jan 30 13:55:32.553804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 30 13:55:32.560557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:32.566180 sshd[1632]: Accepted publickey for core from 147.75.109.163 port 59840 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:32.568885 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:32.579643 systemd-logind[1449]: New session 8 of user core. Jan 30 13:55:32.584565 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:55:32.657018 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:55:32.657359 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:32.667707 sudo[1639]: pam_unix(sudo:session): session closed for user root Jan 30 13:55:32.677202 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:55:32.678201 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:32.702456 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:55:32.710538 auditctl[1642]: No rules Jan 30 13:55:32.712558 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:55:32.712817 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:55:32.731479 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:55:32.780494 augenrules[1666]: No rules Jan 30 13:55:32.781774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:32.781902 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:32.784248 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:55:32.788073 sudo[1638]: pam_unix(sudo:session): session closed for user root Jan 30 13:55:32.795404 sshd[1632]: pam_unix(sshd:session): session closed for user core Jan 30 13:55:32.807228 systemd[1]: sshd@7-64.23.157.134:22-147.75.109.163:59840.service: Deactivated successfully. Jan 30 13:55:32.811492 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:55:32.815458 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:55:32.826374 systemd[1]: Started sshd@8-64.23.157.134:22-147.75.109.163:59854.service - OpenSSH per-connection server daemon (147.75.109.163:59854). Jan 30 13:55:32.830302 systemd-logind[1449]: Removed session 8. Jan 30 13:55:32.872405 kubelet[1664]: E0130 13:55:32.872336 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:32.876150 sshd[1678]: Accepted publickey for core from 147.75.109.163 port 59854 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:55:32.877449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:32.877660 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:55:32.880382 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:55:32.888270 systemd-logind[1449]: New session 9 of user core. Jan 30 13:55:32.893452 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:55:32.954750 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:55:32.955207 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:55:33.451627 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:55:33.451950 (dockerd)[1698]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:55:33.936841 dockerd[1698]: time="2025-01-30T13:55:33.935817921Z" level=info msg="Starting up" Jan 30 13:55:34.095290 dockerd[1698]: time="2025-01-30T13:55:34.095225042Z" level=info msg="Loading containers: start." Jan 30 13:55:34.229197 kernel: Initializing XFRM netlink socket Jan 30 13:55:34.268360 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 30 13:55:33.857660 systemd-resolved[1327]: Clock change detected. Flushing caches. Jan 30 13:55:33.866509 systemd-journald[1106]: Time jumped backwards, rotating. Jan 30 13:55:33.857746 systemd-timesyncd[1348]: Contacted time server 66.42.71.197:123 (2.flatcar.pool.ntp.org). Jan 30 13:55:33.857809 systemd-timesyncd[1348]: Initial clock synchronization to Thu 2025-01-30 13:55:33.857464 UTC. Jan 30 13:55:33.898441 systemd-networkd[1361]: docker0: Link UP Jan 30 13:55:33.919867 dockerd[1698]: time="2025-01-30T13:55:33.919742436Z" level=info msg="Loading containers: done." Jan 30 13:55:33.941746 dockerd[1698]: time="2025-01-30T13:55:33.941262812Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:55:33.941746 dockerd[1698]: time="2025-01-30T13:55:33.941417042Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:55:33.941746 dockerd[1698]: time="2025-01-30T13:55:33.941535546Z" level=info msg="Daemon has completed initialization" Jan 30 13:55:33.975431 dockerd[1698]: time="2025-01-30T13:55:33.975342452Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:55:33.975780 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:55:34.952124 containerd[1461]: time="2025-01-30T13:55:34.952039128Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 13:55:35.570875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2026250975.mount: Deactivated successfully. 
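[Editor's note] Halfway through the docker startup above, the journal timestamps run backwards (13:55:34.268 to 13:55:33.857): systemd-timesyncd reached 2.flatcar.pool.ntp.org and stepped the clock, which is why systemd-resolved flushes its caches and journald rotates. The size of the step can be read straight off the two adjacent timestamps:

    from datetime import datetime

    # Timestamps copied from the adjacent journal entries.
    before = datetime.strptime("13:55:34.268360", "%H:%M:%S.%f")
    after = datetime.strptime("13:55:33.857660", "%H:%M:%S.%f")

    step = (before - after).total_seconds()
    print(f"clock stepped back by ~{step:.3f}s")  # ~0.411s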
Jan 30 13:55:36.677811 containerd[1461]: time="2025-01-30T13:55:36.677698281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:36.679724 containerd[1461]: time="2025-01-30T13:55:36.679646477Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 30 13:55:36.680960 containerd[1461]: time="2025-01-30T13:55:36.680485705Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:36.684260 containerd[1461]: time="2025-01-30T13:55:36.684208730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:36.685562 containerd[1461]: time="2025-01-30T13:55:36.685520759Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 1.733391105s" Jan 30 13:55:36.685757 containerd[1461]: time="2025-01-30T13:55:36.685731459Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 13:55:36.688086 containerd[1461]: time="2025-01-30T13:55:36.688044419Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 13:55:38.133635 containerd[1461]: time="2025-01-30T13:55:38.133517230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:38.135335 containerd[1461]: time="2025-01-30T13:55:38.135222857Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 30 13:55:38.135922 containerd[1461]: time="2025-01-30T13:55:38.135844767Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:38.140109 containerd[1461]: time="2025-01-30T13:55:38.140011865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:38.142617 containerd[1461]: time="2025-01-30T13:55:38.142268992Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.454042528s" Jan 30 13:55:38.142617 containerd[1461]: time="2025-01-30T13:55:38.142361066Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 13:55:38.146478 
containerd[1461]: time="2025-01-30T13:55:38.146425505Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 13:55:39.416304 containerd[1461]: time="2025-01-30T13:55:39.415928929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:39.418247 containerd[1461]: time="2025-01-30T13:55:39.418153830Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 30 13:55:39.420223 containerd[1461]: time="2025-01-30T13:55:39.419919211Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:39.423425 containerd[1461]: time="2025-01-30T13:55:39.423320623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:39.425319 containerd[1461]: time="2025-01-30T13:55:39.425232647Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.278752248s" Jan 30 13:55:39.425319 containerd[1461]: time="2025-01-30T13:55:39.425325626Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 13:55:39.427387 containerd[1461]: time="2025-01-30T13:55:39.426043784Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:55:40.689573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894003603.mount: Deactivated successfully. 
Jan 30 13:55:41.292135 containerd[1461]: time="2025-01-30T13:55:41.292006752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.294005 containerd[1461]: time="2025-01-30T13:55:41.293697077Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 30 13:55:41.294844 containerd[1461]: time="2025-01-30T13:55:41.294787066Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.297626 containerd[1461]: time="2025-01-30T13:55:41.297572651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:41.299366 containerd[1461]: time="2025-01-30T13:55:41.298986343Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.872896739s" Jan 30 13:55:41.299366 containerd[1461]: time="2025-01-30T13:55:41.299109561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 13:55:41.300468 containerd[1461]: time="2025-01-30T13:55:41.300297792Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:55:41.302756 systemd-resolved[1327]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 30 13:55:41.832971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705274632.mount: Deactivated successfully. Jan 30 13:55:42.689555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:55:42.700656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:42.892560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:42.905992 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:42.997498 kubelet[1969]: E0130 13:55:42.996108 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:42.999334 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:42.999627 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:55:43.080143 containerd[1461]: time="2025-01-30T13:55:43.079808686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.082438 containerd[1461]: time="2025-01-30T13:55:43.081947812Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:55:43.082438 containerd[1461]: time="2025-01-30T13:55:43.082033673Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.086839 containerd[1461]: time="2025-01-30T13:55:43.086727333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.091308 containerd[1461]: time="2025-01-30T13:55:43.089527725Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.789176538s" Jan 30 13:55:43.091308 containerd[1461]: time="2025-01-30T13:55:43.089616338Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:55:43.095138 containerd[1461]: time="2025-01-30T13:55:43.095077059Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:55:43.556259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1756397799.mount: Deactivated successfully. 
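[Editor's note] Each "Pulled image" entry above pairs a byte count with a wall-clock duration, so per-image pull throughput falls out directly. A small calculation over two of the figures from this boot (sizes and times copied from the log):

    # (size in bytes, seconds) pairs from the "Pulled image" entries above.
    pulls = {
        "kube-apiserver:v1.31.5":  (27_973_521, 1.733391105),
        "coredns/coredns:v1.11.1": (18_182_961, 1.789176538),
    }

    for image, (size, secs) in pulls.items():
        mib_s = size / 2**20 / secs
        print(f"{image}: {mib_s:.1f} MiB/s")  # ~15.4 and ~9.7 MiB/s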
Jan 30 13:55:43.562015 containerd[1461]: time="2025-01-30T13:55:43.561010978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.562015 containerd[1461]: time="2025-01-30T13:55:43.561901778Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:55:43.562526 containerd[1461]: time="2025-01-30T13:55:43.562468407Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.566496 containerd[1461]: time="2025-01-30T13:55:43.566239495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:43.568303 containerd[1461]: time="2025-01-30T13:55:43.567886943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 472.481836ms" Jan 30 13:55:43.568303 containerd[1461]: time="2025-01-30T13:55:43.567941806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:55:43.569296 containerd[1461]: time="2025-01-30T13:55:43.569184296Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 13:55:44.055855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount578888430.mount: Deactivated successfully. Jan 30 13:55:44.402536 systemd-resolved[1327]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jan 30 13:55:45.828028 containerd[1461]: time="2025-01-30T13:55:45.827951722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:45.829542 containerd[1461]: time="2025-01-30T13:55:45.829440984Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 30 13:55:45.829798 containerd[1461]: time="2025-01-30T13:55:45.829747863Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:45.833293 containerd[1461]: time="2025-01-30T13:55:45.833194163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:55:45.835205 containerd[1461]: time="2025-01-30T13:55:45.834741417Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.26550559s" Jan 30 13:55:45.835205 containerd[1461]: time="2025-01-30T13:55:45.834783450Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 30 13:55:48.725774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:48.737676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:48.783456 systemd[1]: Reloading requested from client PID 2061 ('systemctl') (unit session-9.scope)... Jan 30 13:55:48.783493 systemd[1]: Reloading... Jan 30 13:55:48.948365 zram_generator::config[2101]: No configuration found. Jan 30 13:55:49.112812 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:55:49.235721 systemd[1]: Reloading finished in 451 ms. Jan 30 13:55:49.301183 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:55:49.301309 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:55:49.301628 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:49.304344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:49.475649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:49.478057 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:55:49.545941 kubelet[2152]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:55:49.545941 kubelet[2152]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:55:49.545941 kubelet[2152]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:55:49.547552 kubelet[2152]: I0130 13:55:49.547451 2152 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:55:50.139504 kubelet[2152]: I0130 13:55:50.139432 2152 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 30 13:55:50.139504 kubelet[2152]: I0130 13:55:50.139514 2152 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:55:50.145315 kubelet[2152]: I0130 13:55:50.143192 2152 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 30 13:55:50.174073 kubelet[2152]: E0130 13:55:50.174010 2152 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.157.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.157.134:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:50.174253 kubelet[2152]: I0130 13:55:50.174218 2152 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:55:50.185720 kubelet[2152]: E0130 13:55:50.185662 2152 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 13:55:50.186068 kubelet[2152]: I0130 13:55:50.186047 2152 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 13:55:50.192891 kubelet[2152]: I0130 13:55:50.192843 2152 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:55:50.193371 kubelet[2152]: I0130 13:55:50.193340 2152 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 13:55:50.193736 kubelet[2152]: I0130 13:55:50.193681 2152 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:55:50.194080 kubelet[2152]: I0130 13:55:50.193819 2152 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-b-c9e031af59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 13:55:50.194425 kubelet[2152]: I0130 13:55:50.194406 2152 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:55:50.194518 kubelet[2152]: I0130 13:55:50.194507 2152 container_manager_linux.go:300] "Creating device plugin manager"
Jan 30 13:55:50.194767 kubelet[2152]: I0130 13:55:50.194749 2152 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:55:50.197483 kubelet[2152]: I0130 13:55:50.197435 2152 kubelet.go:408] "Attempting to sync node with API server"
Jan 30 13:55:50.197658 kubelet[2152]: I0130 13:55:50.197647 2152 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:55:50.197750 kubelet[2152]: I0130 13:55:50.197742 2152 kubelet.go:314] "Adding apiserver pod source"
Jan 30 13:55:50.197835 kubelet[2152]: I0130 13:55:50.197819 2152 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:55:50.206849 kubelet[2152]: I0130 13:55:50.206813 2152 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 13:55:50.210628 kubelet[2152]: I0130 13:55:50.210191 2152 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:55:50.210628 kubelet[2152]: W0130 13:55:50.210329 2152 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
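The container_manager_linux.go:269 entry above dumps the kubelet's entire effective NodeConfig as one JSON object; the hard-eviction thresholds buried in it (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%) are what the eviction manager started further down will enforce. A minimal sketch that pretty-prints that section; the blob is abridged verbatim from the entry above:

```python
import json

# Hard-eviction section, copied from the NodeConfig JSON logged above.
node_config = json.loads("""
{"CgroupRoot": "/", "CgroupDriver": "systemd", "KubeletOOMScoreAdj": -999,
 "HardEvictionThresholds": [
   {"Signal": "imagefs.inodesFree", "Operator": "LessThan",
    "Value": {"Quantity": null, "Percentage": 0.05}},
   {"Signal": "memory.available", "Operator": "LessThan",
    "Value": {"Quantity": "100Mi", "Percentage": 0}},
   {"Signal": "nodefs.available", "Operator": "LessThan",
    "Value": {"Quantity": null, "Percentage": 0.1}},
   {"Signal": "nodefs.inodesFree", "Operator": "LessThan",
    "Value": {"Quantity": null, "Percentage": 0.05}},
   {"Signal": "imagefs.available", "Operator": "LessThan",
    "Value": {"Quantity": null, "Percentage": 0.15}}]}
""")

for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    limit = v["Quantity"] or f"{v['Percentage']:.0%}"
    print(f"evict when {t['Signal']} {t['Operator']} {limit}")
# evict when imagefs.inodesFree LessThan 5%
# evict when memory.available LessThan 100Mi  ... and so on
```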
Jan 30 13:55:50.211809 kubelet[2152]: I0130 13:55:50.211415 2152 server.go:1269] "Started kubelet"
Jan 30 13:55:50.211809 kubelet[2152]: W0130 13:55:50.211629 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.157.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-b-c9e031af59&limit=500&resourceVersion=0": dial tcp 64.23.157.134:6443: connect: connection refused
Jan 30 13:55:50.211809 kubelet[2152]: E0130 13:55:50.211704 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.157.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-b-c9e031af59&limit=500&resourceVersion=0\": dial tcp 64.23.157.134:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:50.214902 kubelet[2152]: W0130 13:55:50.214838 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.157.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.157.134:6443: connect: connection refused
Jan 30 13:55:50.215300 kubelet[2152]: E0130 13:55:50.215099 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.157.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.157.134:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:50.215842 kubelet[2152]: I0130 13:55:50.215803 2152 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:55:50.216966 kubelet[2152]: I0130 13:55:50.216777 2152 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:55:50.225204 kubelet[2152]: I0130 13:55:50.224671 2152 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:55:50.225204 kubelet[2152]: I0130 13:55:50.225077 2152 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:55:50.225204 kubelet[2152]: I0130 13:55:50.225077 2152 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 13:55:50.227278 kubelet[2152]: I0130 13:55:50.225686 2152 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 13:55:50.232916 kubelet[2152]: E0130 13:55:50.227394 2152 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.157.134:6443/api/v1/namespaces/default/events\": dial tcp 64.23.157.134:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-b-c9e031af59.181f7cefb7d2fc7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-b-c9e031af59,UID:ci-4081.3.0-b-c9e031af59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-b-c9e031af59,},FirstTimestamp:2025-01-30 13:55:50.21138649 +0000 UTC m=+0.727411360,LastTimestamp:2025-01-30 13:55:50.21138649 +0000 UTC m=+0.727411360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-b-c9e031af59,}"
Jan 30 13:55:50.235999 kubelet[2152]: I0130 13:55:50.235947 2152 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 13:55:50.237503 kubelet[2152]: E0130 13:55:50.237456 2152 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-b-c9e031af59\" not found"
Jan 30 13:55:50.239128 kubelet[2152]: I0130 13:55:50.238982 2152 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 13:55:50.239402 kubelet[2152]: I0130 13:55:50.239389 2152 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:55:50.241343 kubelet[2152]: I0130 13:55:50.241308 2152 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:55:50.241550 kubelet[2152]: I0130 13:55:50.241504 2152 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:55:50.245039 kubelet[2152]: E0130 13:55:50.244930 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.157.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-b-c9e031af59?timeout=10s\": dial tcp 64.23.157.134:6443: connect: connection refused" interval="200ms"
Jan 30 13:55:50.245572 kubelet[2152]: I0130 13:55:50.245519 2152 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:55:50.263895 kubelet[2152]: I0130 13:55:50.263668 2152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:55:50.266358 kubelet[2152]: I0130 13:55:50.265963 2152 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:55:50.266358 kubelet[2152]: I0130 13:55:50.266014 2152 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:55:50.266358 kubelet[2152]: I0130 13:55:50.266048 2152 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 30 13:55:50.266358 kubelet[2152]: E0130 13:55:50.266128 2152 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:55:50.272646 kubelet[2152]: W0130 13:55:50.272565 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.157.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.157.134:6443: connect: connection refused
Jan 30 13:55:50.275091 kubelet[2152]: E0130 13:55:50.274599 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.157.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.157.134:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:50.275091 kubelet[2152]: W0130 13:55:50.274895 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.157.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.157.134:6443: connect: connection refused
Jan 30 13:55:50.275091 kubelet[2152]: E0130 13:55:50.274951 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.157.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.157.134:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:50.277719 kubelet[2152]: I0130 13:55:50.277301 2152 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:55:50.277719 kubelet[2152]: I0130 13:55:50.277337 2152 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:55:50.277719 kubelet[2152]: I0130 13:55:50.277365 2152 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:55:50.279880 kubelet[2152]: I0130 13:55:50.279849 2152 policy_none.go:49] "None policy: Start"
Jan 30 13:55:50.281508 kubelet[2152]: I0130 13:55:50.281479 2152 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:55:50.281508 kubelet[2152]: I0130 13:55:50.281506 2152 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:55:50.291515 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 13:55:50.306059 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 13:55:50.314612 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 13:55:50.328021 kubelet[2152]: I0130 13:55:50.327847 2152 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:55:50.328180 kubelet[2152]: I0130 13:55:50.328088 2152 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 13:55:50.328180 kubelet[2152]: I0130 13:55:50.328102 2152 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:55:50.329806 kubelet[2152]: I0130 13:55:50.329669 2152 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:55:50.331637 kubelet[2152]: E0130 13:55:50.331519 2152 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-b-c9e031af59\" not found"
Jan 30 13:55:50.377960 systemd[1]: Created slice kubepods-burstable-pod92f1fb0255b6e42059af5373734736f2.slice - libcontainer container kubepods-burstable-pod92f1fb0255b6e42059af5373734736f2.slice.
Jan 30 13:55:50.395432 systemd[1]: Created slice kubepods-burstable-poddd840ada2a2ef734b3973351b99398e0.slice - libcontainer container kubepods-burstable-poddd840ada2a2ef734b3973351b99398e0.slice.
Jan 30 13:55:50.402739 systemd[1]: Created slice kubepods-burstable-pod59b98c9545173c14f293d21b0c00eebd.slice - libcontainer container kubepods-burstable-pod59b98c9545173c14f293d21b0c00eebd.slice.
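The kubepods*.slice units systemd creates here are the kubelet's systemd-driver cgroup hierarchy: one slice per QoS class under kubepods.slice, then one slice per pod, with the pod UID's dashes replaced by underscores because '-' is the nesting separator in systemd slice names (compare kubepods-besteffort-pod09ba59ec_7945_4611_ac3c_eb73977d8871.slice at the very end of this log). A small sketch of that naming rule, inferred from the unit names visible in this log:

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Systemd slice unit the kubelet creates for a pod, as seen in this log.

    Dashes in the UID become underscores; guaranteed pods sit directly
    under kubepods.slice, burstable/besteffort under their QoS slice.
    """
    uid = pod_uid.replace("-", "_")
    if qos_class == "guaranteed":
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos_class}-pod{uid}.slice"

# The besteffort pod created at 13:56:01 below:
print(pod_slice_name("besteffort", "09ba59ec-7945-4611-ac3c-eb73977d8871"))
# kubepods-besteffort-pod09ba59ec_7945_4611_ac3c_eb73977d8871.slice
```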
Jan 30 13:55:50.429659 kubelet[2152]: I0130 13:55:50.429518 2152 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.434010 kubelet[2152]: E0130 13:55:50.433952 2152 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.157.134:6443/api/v1/nodes\": dial tcp 64.23.157.134:6443: connect: connection refused" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.445737 kubelet[2152]: E0130 13:55:50.445676 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.157.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-b-c9e031af59?timeout=10s\": dial tcp 64.23.157.134:6443: connect: connection refused" interval="400ms"
Jan 30 13:55:50.540930 kubelet[2152]: I0130 13:55:50.540781 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92f1fb0255b6e42059af5373734736f2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-b-c9e031af59\" (UID: \"92f1fb0255b6e42059af5373734736f2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.540930 kubelet[2152]: I0130 13:55:50.540872 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92f1fb0255b6e42059af5373734736f2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-b-c9e031af59\" (UID: \"92f1fb0255b6e42059af5373734736f2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.540930 kubelet[2152]: I0130 13:55:50.540914 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59b98c9545173c14f293d21b0c00eebd-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-b-c9e031af59\" (UID: \"59b98c9545173c14f293d21b0c00eebd\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.540930 kubelet[2152]: I0130 13:55:50.540937 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59b98c9545173c14f293d21b0c00eebd-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-b-c9e031af59\" (UID: \"59b98c9545173c14f293d21b0c00eebd\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.541310 kubelet[2152]: I0130 13:55:50.540963 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59b98c9545173c14f293d21b0c00eebd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-b-c9e031af59\" (UID: \"59b98c9545173c14f293d21b0c00eebd\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.541310 kubelet[2152]: I0130 13:55:50.540991 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd840ada2a2ef734b3973351b99398e0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-b-c9e031af59\" (UID: \"dd840ada2a2ef734b3973351b99398e0\") " pod="kube-system/kube-scheduler-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.541310 kubelet[2152]: I0130 13:55:50.541014 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92f1fb0255b6e42059af5373734736f2-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-b-c9e031af59\" (UID: \"92f1fb0255b6e42059af5373734736f2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.541310 kubelet[2152]: I0130 13:55:50.541034 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/59b98c9545173c14f293d21b0c00eebd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-b-c9e031af59\" (UID: \"59b98c9545173c14f293d21b0c00eebd\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.541310 kubelet[2152]: I0130 13:55:50.541056 2152 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59b98c9545173c14f293d21b0c00eebd-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-b-c9e031af59\" (UID: \"59b98c9545173c14f293d21b0c00eebd\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.636660 kubelet[2152]: I0130 13:55:50.636140 2152 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.636660 kubelet[2152]: E0130 13:55:50.636582 2152 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.157.134:6443/api/v1/nodes\": dial tcp 64.23.157.134:6443: connect: connection refused" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:50.692578 kubelet[2152]: E0130 13:55:50.691983 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:50.693047 containerd[1461]: time="2025-01-30T13:55:50.692954118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-b-c9e031af59,Uid:92f1fb0255b6e42059af5373734736f2,Namespace:kube-system,Attempt:0,}"
Jan 30 13:55:50.694839 systemd-resolved[1327]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Jan 30 13:55:50.699706 kubelet[2152]: E0130 13:55:50.699339 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:50.704403 containerd[1461]: time="2025-01-30T13:55:50.704334402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-b-c9e031af59,Uid:dd840ada2a2ef734b3973351b99398e0,Namespace:kube-system,Attempt:0,}"
Jan 30 13:55:50.706902 kubelet[2152]: E0130 13:55:50.706214 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:50.707795 containerd[1461]: time="2025-01-30T13:55:50.707412254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-b-c9e031af59,Uid:59b98c9545173c14f293d21b0c00eebd,Namespace:kube-system,Attempt:0,}"
Jan 30 13:55:50.847306 kubelet[2152]: E0130 13:55:50.847166 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.157.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-b-c9e031af59?timeout=10s\": dial tcp 64.23.157.134:6443: connect: connection refused" interval="800ms"
Jan 30 13:55:51.021838 kubelet[2152]: W0130 13:55:51.021624 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.157.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.157.134:6443: connect: connection refused
Jan 30 13:55:51.021838 kubelet[2152]: E0130 13:55:51.021794 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.157.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.157.134:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:51.038132 kubelet[2152]: I0130 13:55:51.038053 2152 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:51.038606 kubelet[2152]: E0130 13:55:51.038567 2152 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.157.134:6443/api/v1/nodes\": dial tcp 64.23.157.134:6443: connect: connection refused" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:51.089757 kubelet[2152]: W0130 13:55:51.089660 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.157.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-b-c9e031af59&limit=500&resourceVersion=0": dial tcp 64.23.157.134:6443: connect: connection refused
Jan 30 13:55:51.089919 kubelet[2152]: E0130 13:55:51.089776 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.157.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-b-c9e031af59&limit=500&resourceVersion=0\": dial tcp 64.23.157.134:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:51.150452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204122751.mount: Deactivated successfully.
Jan 30 13:55:51.156314 containerd[1461]: time="2025-01-30T13:55:51.156168229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:55:51.158527 containerd[1461]: time="2025-01-30T13:55:51.158440396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 30 13:55:51.159642 containerd[1461]: time="2025-01-30T13:55:51.159222684Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:55:51.160586 containerd[1461]: time="2025-01-30T13:55:51.160535708Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:55:51.161717 containerd[1461]: time="2025-01-30T13:55:51.161584846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:55:51.162376 containerd[1461]: time="2025-01-30T13:55:51.162328747Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:55:51.162655 containerd[1461]: time="2025-01-30T13:55:51.162586161Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:55:51.168723 containerd[1461]: time="2025-01-30T13:55:51.168630910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:55:51.172068 containerd[1461]: time="2025-01-30T13:55:51.171672405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.549311ms"
Jan 30 13:55:51.174226 containerd[1461]: time="2025-01-30T13:55:51.174151466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 469.626716ms"
Jan 30 13:55:51.177290 containerd[1461]: time="2025-01-30T13:55:51.177015320Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 469.512719ms"
Jan 30 13:55:51.241689 kubelet[2152]: W0130 13:55:51.241550 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.157.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.157.134:6443: connect: connection refused
Jan 30 13:55:51.241689 kubelet[2152]: E0130 13:55:51.241628 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.157.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.157.134:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:51.359194 containerd[1461]: time="2025-01-30T13:55:51.358868080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:55:51.359194 containerd[1461]: time="2025-01-30T13:55:51.358977036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:55:51.359194 containerd[1461]: time="2025-01-30T13:55:51.358993406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:51.360681 containerd[1461]: time="2025-01-30T13:55:51.358519606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:55:51.361628 containerd[1461]: time="2025-01-30T13:55:51.361413756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:51.361628 containerd[1461]: time="2025-01-30T13:55:51.360764959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:55:51.361628 containerd[1461]: time="2025-01-30T13:55:51.360796879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:51.361628 containerd[1461]: time="2025-01-30T13:55:51.360938806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:51.376572 containerd[1461]: time="2025-01-30T13:55:51.373430529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:55:51.376572 containerd[1461]: time="2025-01-30T13:55:51.373523902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:55:51.376572 containerd[1461]: time="2025-01-30T13:55:51.373536879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:51.376572 containerd[1461]: time="2025-01-30T13:55:51.373654944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:55:51.409751 systemd[1]: Started cri-containerd-71bc27719bde40013fb4cfcf0fb3ae3baa0d2bf23bcd1b499e80df83583a8115.scope - libcontainer container 71bc27719bde40013fb4cfcf0fb3ae3baa0d2bf23bcd1b499e80df83583a8115.
Jan 30 13:55:51.412330 systemd[1]: Started cri-containerd-c85d0624fb3cc261f0a52ce92c0aefd51b0f7e7430f1d627e6d958e861634803.scope - libcontainer container c85d0624fb3cc261f0a52ce92c0aefd51b0f7e7430f1d627e6d958e861634803.
Jan 30 13:55:51.422761 systemd[1]: Started cri-containerd-f8eaa5d009f9087eb754d630aa9b54f68eea03a337a6f9fb523113b11f12428a.scope - libcontainer container f8eaa5d009f9087eb754d630aa9b54f68eea03a337a6f9fb523113b11f12428a.
Jan 30 13:55:51.509122 kubelet[2152]: W0130 13:55:51.509072 2152 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.157.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.157.134:6443: connect: connection refused
Jan 30 13:55:51.509122 kubelet[2152]: E0130 13:55:51.509129 2152 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.157.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.157.134:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:55:51.537001 containerd[1461]: time="2025-01-30T13:55:51.536723833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-b-c9e031af59,Uid:59b98c9545173c14f293d21b0c00eebd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8eaa5d009f9087eb754d630aa9b54f68eea03a337a6f9fb523113b11f12428a\""
Jan 30 13:55:51.564129 kubelet[2152]: E0130 13:55:51.564024 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:51.566584 containerd[1461]: time="2025-01-30T13:55:51.566475447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-b-c9e031af59,Uid:92f1fb0255b6e42059af5373734736f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"71bc27719bde40013fb4cfcf0fb3ae3baa0d2bf23bcd1b499e80df83583a8115\""
Jan 30 13:55:51.574341 kubelet[2152]: E0130 13:55:51.574113 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:51.585462 containerd[1461]: time="2025-01-30T13:55:51.584181443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-b-c9e031af59,Uid:dd840ada2a2ef734b3973351b99398e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c85d0624fb3cc261f0a52ce92c0aefd51b0f7e7430f1d627e6d958e861634803\""
Jan 30 13:55:51.587783 containerd[1461]: time="2025-01-30T13:55:51.586928079Z" level=info msg="CreateContainer within sandbox \"f8eaa5d009f9087eb754d630aa9b54f68eea03a337a6f9fb523113b11f12428a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 13:55:51.594553 kubelet[2152]: E0130 13:55:51.594508 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:51.595434 containerd[1461]: time="2025-01-30T13:55:51.595387671Z" level=info msg="CreateContainer within sandbox \"71bc27719bde40013fb4cfcf0fb3ae3baa0d2bf23bcd1b499e80df83583a8115\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 13:55:51.612871 containerd[1461]: time="2025-01-30T13:55:51.612720720Z" level=info msg="CreateContainer within sandbox \"c85d0624fb3cc261f0a52ce92c0aefd51b0f7e7430f1d627e6d958e861634803\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 13:55:51.618784 containerd[1461]: time="2025-01-30T13:55:51.618728779Z" level=info msg="CreateContainer within sandbox \"f8eaa5d009f9087eb754d630aa9b54f68eea03a337a6f9fb523113b11f12428a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ab66485ce2786aad0c277ca509220976f8262a493ced0604d1f99488202a3f8f\""
Jan 30 13:55:51.620120 containerd[1461]: time="2025-01-30T13:55:51.620067870Z" level=info msg="StartContainer for \"ab66485ce2786aad0c277ca509220976f8262a493ced0604d1f99488202a3f8f\""
Jan 30 13:55:51.630772 containerd[1461]: time="2025-01-30T13:55:51.630597958Z" level=info msg="CreateContainer within sandbox \"71bc27719bde40013fb4cfcf0fb3ae3baa0d2bf23bcd1b499e80df83583a8115\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9e610f22f20de78ef2ab460614699bfcd77cc16da6d204967e819bccdca83a20\""
Jan 30 13:55:51.632069 containerd[1461]: time="2025-01-30T13:55:51.632015249Z" level=info msg="StartContainer for \"9e610f22f20de78ef2ab460614699bfcd77cc16da6d204967e819bccdca83a20\""
Jan 30 13:55:51.636321 containerd[1461]: time="2025-01-30T13:55:51.635881851Z" level=info msg="CreateContainer within sandbox \"c85d0624fb3cc261f0a52ce92c0aefd51b0f7e7430f1d627e6d958e861634803\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4fbd2d1f53cdfd4ec358fa6ec6573c2f0d73aa78375664a42626407ad66c0bcd\""
Jan 30 13:55:51.638440 containerd[1461]: time="2025-01-30T13:55:51.638392779Z" level=info msg="StartContainer for \"4fbd2d1f53cdfd4ec358fa6ec6573c2f0d73aa78375664a42626407ad66c0bcd\""
Jan 30 13:55:51.648618 kubelet[2152]: E0130 13:55:51.648543 2152 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.157.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-b-c9e031af59?timeout=10s\": dial tcp 64.23.157.134:6443: connect: connection refused" interval="1.6s"
Jan 30 13:55:51.674761 systemd[1]: Started cri-containerd-ab66485ce2786aad0c277ca509220976f8262a493ced0604d1f99488202a3f8f.scope - libcontainer container ab66485ce2786aad0c277ca509220976f8262a493ced0604d1f99488202a3f8f.
Jan 30 13:55:51.706426 systemd[1]: Started cri-containerd-9e610f22f20de78ef2ab460614699bfcd77cc16da6d204967e819bccdca83a20.scope - libcontainer container 9e610f22f20de78ef2ab460614699bfcd77cc16da6d204967e819bccdca83a20.
Jan 30 13:55:51.720543 systemd[1]: Started cri-containerd-4fbd2d1f53cdfd4ec358fa6ec6573c2f0d73aa78375664a42626407ad66c0bcd.scope - libcontainer container 4fbd2d1f53cdfd4ec358fa6ec6573c2f0d73aa78375664a42626407ad66c0bcd.
Jan 30 13:55:51.786049 containerd[1461]: time="2025-01-30T13:55:51.785975528Z" level=info msg="StartContainer for \"ab66485ce2786aad0c277ca509220976f8262a493ced0604d1f99488202a3f8f\" returns successfully"
Jan 30 13:55:51.826488 containerd[1461]: time="2025-01-30T13:55:51.826403724Z" level=info msg="StartContainer for \"9e610f22f20de78ef2ab460614699bfcd77cc16da6d204967e819bccdca83a20\" returns successfully"
Jan 30 13:55:51.842298 kubelet[2152]: I0130 13:55:51.841738 2152 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:51.843474 containerd[1461]: time="2025-01-30T13:55:51.841958662Z" level=info msg="StartContainer for \"4fbd2d1f53cdfd4ec358fa6ec6573c2f0d73aa78375664a42626407ad66c0bcd\" returns successfully"
Jan 30 13:55:51.844781 kubelet[2152]: E0130 13:55:51.844214 2152 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.157.134:6443/api/v1/nodes\": dial tcp 64.23.157.134:6443: connect: connection refused" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:52.287572 kubelet[2152]: E0130 13:55:52.287528 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:52.290197 kubelet[2152]: E0130 13:55:52.290153 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:52.296806 kubelet[2152]: E0130 13:55:52.296689 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:53.300595 kubelet[2152]: E0130 13:55:53.300503 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:53.448301 kubelet[2152]: I0130 13:55:53.446571 2152 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:54.189699 kubelet[2152]: E0130 13:55:54.189636 2152 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-b-c9e031af59\" not found" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:54.217500 kubelet[2152]: I0130 13:55:54.217177 2152 apiserver.go:52] "Watching apiserver"
Jan 30 13:55:54.240586 kubelet[2152]: I0130 13:55:54.240487 2152 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 13:55:54.302241 kubelet[2152]: E0130 13:55:54.302110 2152 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:54.329882 kubelet[2152]: I0130 13:55:54.329221 2152 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:56.468128 systemd[1]: Reloading requested from client PID 2425 ('systemctl') (unit session-9.scope)...
Jan 30 13:55:56.468150 systemd[1]: Reloading...
Jan 30 13:55:56.593403 zram_generator::config[2464]: No configuration found.
Jan 30 13:55:56.773209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:55:56.900983 systemd[1]: Reloading finished in 432 ms.
Jan 30 13:55:56.953394 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:55:56.958773 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 13:55:56.959197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:55:56.959337 systemd[1]: kubelet.service: Consumed 1.194s CPU time, 111.4M memory peak, 0B memory swap peak.
Jan 30 13:55:56.965842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:55:57.123531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:55:57.139154 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:55:57.253722 kubelet[2515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:55:57.255208 kubelet[2515]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:55:57.255208 kubelet[2515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:55:57.255208 kubelet[2515]: I0130 13:55:57.254318 2515 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:55:57.264419 kubelet[2515]: I0130 13:55:57.264364 2515 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 30 13:55:57.266299 kubelet[2515]: I0130 13:55:57.264643 2515 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:55:57.266299 kubelet[2515]: I0130 13:55:57.265076 2515 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 30 13:55:57.267448 kubelet[2515]: I0130 13:55:57.267406 2515 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 13:55:57.275129 kubelet[2515]: I0130 13:55:57.274960 2515 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:55:57.281007 kubelet[2515]: E0130 13:55:57.280876 2515 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 13:55:57.281503 kubelet[2515]: I0130 13:55:57.281483 2515 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 13:55:57.286300 kubelet[2515]: I0130 13:55:57.286234 2515 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:55:57.286646 kubelet[2515]: I0130 13:55:57.286628 2515 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 13:55:57.286933 kubelet[2515]: I0130 13:55:57.286900 2515 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:55:57.287384 kubelet[2515]: I0130 13:55:57.287059 2515 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-b-c9e031af59","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 13:55:57.287576 kubelet[2515]: I0130 13:55:57.287562 2515 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:55:57.287646 kubelet[2515]: I0130 13:55:57.287639 2515 container_manager_linux.go:300] "Creating device plugin manager"
Jan 30 13:55:57.287778 kubelet[2515]: I0130 13:55:57.287766 2515 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:55:57.288077 kubelet[2515]: I0130 13:55:57.288064 2515 kubelet.go:408] "Attempting to sync node with API server"
Jan 30 13:55:57.288937 kubelet[2515]: I0130 13:55:57.288913 2515 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:55:57.289131 kubelet[2515]: I0130 13:55:57.289101 2515 kubelet.go:314] "Adding apiserver pod source"
Jan 30 13:55:57.289217 kubelet[2515]: I0130 13:55:57.289204 2515 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:55:57.292371 kubelet[2515]: I0130 13:55:57.292231 2515 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 13:55:57.292803 kubelet[2515]: I0130 13:55:57.292692 2515 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:55:57.293653 kubelet[2515]: I0130 13:55:57.293134 2515 server.go:1269] "Started kubelet"
Jan 30 13:55:57.301362 kubelet[2515]: I0130 13:55:57.300732 2515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:55:57.312434 kubelet[2515]: I0130 13:55:57.312361 2515 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:55:57.314358 kubelet[2515]: I0130 13:55:57.313635 2515 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 13:55:57.317655 kubelet[2515]: I0130 13:55:57.317569 2515 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:55:57.318288 kubelet[2515]: I0130 13:55:57.317909 2515 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:55:57.324923 kubelet[2515]: I0130 13:55:57.324879 2515 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 13:55:57.327097 kubelet[2515]: I0130 13:55:57.327057 2515 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 13:55:57.327478 kubelet[2515]: E0130 13:55:57.327448 2515 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-b-c9e031af59\" not found"
Jan 30 13:55:57.332796 kubelet[2515]: I0130 13:55:57.332538 2515 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:55:57.333132 kubelet[2515]: I0130 13:55:57.333057 2515 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 13:55:57.333291 kubelet[2515]: I0130 13:55:57.333197 2515 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:55:57.333567 kubelet[2515]: I0130 13:55:57.333469 2515 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:55:57.334131 kubelet[2515]: E0130 13:55:57.334108 2515 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:55:57.337067 kubelet[2515]: I0130 13:55:57.336821 2515 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:55:57.337067 kubelet[2515]: I0130 13:55:57.336850 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:55:57.345253 kubelet[2515]: I0130 13:55:57.343293 2515 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:55:57.345253 kubelet[2515]: I0130 13:55:57.344974 2515 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:55:57.345253 kubelet[2515]: I0130 13:55:57.345014 2515 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 30 13:55:57.345253 kubelet[2515]: E0130 13:55:57.345102 2515 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:55:57.430613 kubelet[2515]: I0130 13:55:57.430547 2515 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:55:57.430613 kubelet[2515]: I0130 13:55:57.430574 2515 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:55:57.430613 kubelet[2515]: I0130 13:55:57.430609 2515 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:55:57.430919 kubelet[2515]: I0130 13:55:57.430854 2515 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 13:55:57.430919 kubelet[2515]: I0130 13:55:57.430868 2515 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 13:55:57.430919 kubelet[2515]: I0130 13:55:57.430895 2515 policy_none.go:49] "None policy: Start"
Jan 30 13:55:57.432476 kubelet[2515]: I0130 13:55:57.432439 2515 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:55:57.432618 kubelet[2515]: I0130 13:55:57.432486 2515 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:55:57.432996 kubelet[2515]: I0130 13:55:57.432955 2515 state_mem.go:75] "Updated machine memory state"
Jan 30 13:55:57.444459 kubelet[2515]: I0130 13:55:57.444217 2515 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:55:57.446233 kubelet[2515]: I0130 13:55:57.445120 2515 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 13:55:57.446233 kubelet[2515]: I0130 13:55:57.445330 2515 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:55:57.446233 kubelet[2515]: E0130 13:55:57.445189 2515 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 30 13:55:57.446233 kubelet[2515]: I0130 13:55:57.445992 2515 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:55:57.560823 kubelet[2515]: I0130 13:55:57.560746 2515 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.576777 kubelet[2515]: I0130 13:55:57.576733 2515 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.577042 kubelet[2515]: I0130 13:55:57.577029 2515 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.660089 kubelet[2515]: W0130 13:55:57.660048 2515 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 13:55:57.664050 kubelet[2515]: W0130 13:55:57.663642 2515 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 13:55:57.664050 kubelet[2515]: W0130 13:55:57.663749 2515 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 13:55:57.736165 kubelet[2515]: I0130 13:55:57.735987 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92f1fb0255b6e42059af5373734736f2-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-b-c9e031af59\" (UID: \"92f1fb0255b6e42059af5373734736f2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.736165 kubelet[2515]: I0130 13:55:57.736047 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92f1fb0255b6e42059af5373734736f2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-b-c9e031af59\" (UID: \"92f1fb0255b6e42059af5373734736f2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.736165 kubelet[2515]: I0130 13:55:57.736082 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd840ada2a2ef734b3973351b99398e0-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-b-c9e031af59\" (UID: \"dd840ada2a2ef734b3973351b99398e0\") " pod="kube-system/kube-scheduler-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.736165 kubelet[2515]: I0130 13:55:57.736110 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92f1fb0255b6e42059af5373734736f2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-b-c9e031af59\" (UID: \"92f1fb0255b6e42059af5373734736f2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.736165 kubelet[2515]: I0130 13:55:57.736138 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59b98c9545173c14f293d21b0c00eebd-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-b-c9e031af59\" (UID: \"59b98c9545173c14f293d21b0c00eebd\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.736511 kubelet[2515]: I0130 13:55:57.736171 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/59b98c9545173c14f293d21b0c00eebd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-b-c9e031af59\" (UID: \"59b98c9545173c14f293d21b0c00eebd\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.736511 kubelet[2515]: I0130 13:55:57.736194 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59b98c9545173c14f293d21b0c00eebd-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-b-c9e031af59\" (UID: \"59b98c9545173c14f293d21b0c00eebd\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.736511 kubelet[2515]: I0130 13:55:57.736219 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59b98c9545173c14f293d21b0c00eebd-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-b-c9e031af59\" (UID: \"59b98c9545173c14f293d21b0c00eebd\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.736511 kubelet[2515]: I0130 13:55:57.736249 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59b98c9545173c14f293d21b0c00eebd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-b-c9e031af59\" (UID: \"59b98c9545173c14f293d21b0c00eebd\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-b-c9e031af59"
Jan 30 13:55:57.963117 kubelet[2515]: E0130 13:55:57.961372 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:57.964610 kubelet[2515]: E0130 13:55:57.964567 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:57.964918 kubelet[2515]: E0130 13:55:57.964860 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:58.301488 kubelet[2515]: I0130 13:55:58.301426 2515 apiserver.go:52] "Watching apiserver"
Jan 30 13:55:58.333466 kubelet[2515]: I0130 13:55:58.333409 2515 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 13:55:58.391754 kubelet[2515]: E0130 13:55:58.390160 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:58.391754 kubelet[2515]: E0130 13:55:58.390567 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:58.391754 kubelet[2515]: E0130 13:55:58.390859 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:55:58.466877 kubelet[2515]: I0130 13:55:58.466809 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-b-c9e031af59" podStartSLOduration=1.466791566 podStartE2EDuration="1.466791566s" podCreationTimestamp="2025-01-30 13:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:55:58.466537606 +0000 UTC m=+1.304380561" watchObservedRunningTime="2025-01-30 13:55:58.466791566 +0000 UTC m=+1.304634509"
Jan 30 13:55:58.521520 kubelet[2515]: I0130 13:55:58.521185 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-b-c9e031af59" podStartSLOduration=1.5211605499999998 podStartE2EDuration="1.52116055s" podCreationTimestamp="2025-01-30 13:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:55:58.494764519 +0000 UTC m=+1.332607468" watchObservedRunningTime="2025-01-30 13:55:58.52116055 +0000 UTC m=+1.359003497"
Jan 30 13:55:59.393009 kubelet[2515]: E0130 13:55:59.392642 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:00.396354 kubelet[2515]: E0130 13:56:00.396308 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:01.035356 kubelet[2515]: I0130 13:56:01.035261 2515 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 13:56:01.036003 containerd[1461]: time="2025-01-30T13:56:01.035945130Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 13:56:01.037341 kubelet[2515]: I0130 13:56:01.036789 2515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 13:56:01.348023 sudo[1683]: pam_unix(sudo:session): session closed for user root
Jan 30 13:56:01.354585 sshd[1678]: pam_unix(sshd:session): session closed for user core
Jan 30 13:56:01.365455 systemd[1]: sshd@8-64.23.157.134:22-147.75.109.163:59854.service: Deactivated successfully.
Jan 30 13:56:01.369696 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 13:56:01.370125 systemd[1]: session-9.scope: Consumed 5.400s CPU time, 152.5M memory peak, 0B memory swap peak.
Jan 30 13:56:01.374453 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit.
Jan 30 13:56:01.377079 systemd-logind[1449]: Removed session 9.
Jan 30 13:56:01.817193 kubelet[2515]: I0130 13:56:01.816650 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-b-c9e031af59" podStartSLOduration=4.8166224060000005 podStartE2EDuration="4.816622406s" podCreationTimestamp="2025-01-30 13:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:55:58.523577643 +0000 UTC m=+1.361420590" watchObservedRunningTime="2025-01-30 13:56:01.816622406 +0000 UTC m=+4.654465350"
Jan 30 13:56:01.845867 systemd[1]: Created slice kubepods-besteffort-pod09ba59ec_7945_4611_ac3c_eb73977d8871.slice - libcontainer container kubepods-besteffort-pod09ba59ec_7945_4611_ac3c_eb73977d8871.slice.
Jan 30 13:56:01.863559 kubelet[2515]: I0130 13:56:01.863335 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09ba59ec-7945-4611-ac3c-eb73977d8871-lib-modules\") pod \"kube-proxy-ww5m2\" (UID: \"09ba59ec-7945-4611-ac3c-eb73977d8871\") " pod="kube-system/kube-proxy-ww5m2"
Jan 30 13:56:01.863559 kubelet[2515]: I0130 13:56:01.863402 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/09ba59ec-7945-4611-ac3c-eb73977d8871-kube-proxy\") pod \"kube-proxy-ww5m2\" (UID: \"09ba59ec-7945-4611-ac3c-eb73977d8871\") " pod="kube-system/kube-proxy-ww5m2"
Jan 30 13:56:01.863559 kubelet[2515]: I0130 13:56:01.863428 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09ba59ec-7945-4611-ac3c-eb73977d8871-xtables-lock\") pod \"kube-proxy-ww5m2\" (UID: \"09ba59ec-7945-4611-ac3c-eb73977d8871\") " pod="kube-system/kube-proxy-ww5m2"
Jan 30 13:56:01.863559 kubelet[2515]: I0130 13:56:01.863456 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t89fg\" (UniqueName: \"kubernetes.io/projected/09ba59ec-7945-4611-ac3c-eb73977d8871-kube-api-access-t89fg\") pod \"kube-proxy-ww5m2\" (UID: \"09ba59ec-7945-4611-ac3c-eb73977d8871\") " pod="kube-system/kube-proxy-ww5m2"
Jan 30 13:56:02.134686 systemd[1]: Created slice kubepods-besteffort-podcfae0b76_ad46_49a4_b65e_526a3c4424f2.slice - libcontainer container kubepods-besteffort-podcfae0b76_ad46_49a4_b65e_526a3c4424f2.slice.
Jan 30 13:56:02.158487 kubelet[2515]: E0130 13:56:02.158431 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:02.160163 containerd[1461]: time="2025-01-30T13:56:02.160081777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ww5m2,Uid:09ba59ec-7945-4611-ac3c-eb73977d8871,Namespace:kube-system,Attempt:0,}"
Jan 30 13:56:02.173767 kubelet[2515]: I0130 13:56:02.173627 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qzfk\" (UniqueName: \"kubernetes.io/projected/cfae0b76-ad46-49a4-b65e-526a3c4424f2-kube-api-access-6qzfk\") pod \"tigera-operator-76c4976dd7-bq567\" (UID: \"cfae0b76-ad46-49a4-b65e-526a3c4424f2\") " pod="tigera-operator/tigera-operator-76c4976dd7-bq567"
Jan 30 13:56:02.173767 kubelet[2515]: I0130 13:56:02.173695 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cfae0b76-ad46-49a4-b65e-526a3c4424f2-var-lib-calico\") pod \"tigera-operator-76c4976dd7-bq567\" (UID: \"cfae0b76-ad46-49a4-b65e-526a3c4424f2\") " pod="tigera-operator/tigera-operator-76c4976dd7-bq567"
Jan 30 13:56:02.215313 containerd[1461]: time="2025-01-30T13:56:02.213881793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:56:02.215313 containerd[1461]: time="2025-01-30T13:56:02.214000652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:56:02.215313 containerd[1461]: time="2025-01-30T13:56:02.214066101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:56:02.215313 containerd[1461]: time="2025-01-30T13:56:02.214330939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:56:02.272711 systemd[1]: Started cri-containerd-ea90561012baadb9547bf3b42f317f12b884b7c027cb86316d2309a6466e42c8.scope - libcontainer container ea90561012baadb9547bf3b42f317f12b884b7c027cb86316d2309a6466e42c8.
Jan 30 13:56:02.343906 containerd[1461]: time="2025-01-30T13:56:02.343142238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ww5m2,Uid:09ba59ec-7945-4611-ac3c-eb73977d8871,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea90561012baadb9547bf3b42f317f12b884b7c027cb86316d2309a6466e42c8\""
Jan 30 13:56:02.345304 kubelet[2515]: E0130 13:56:02.344957 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:02.353344 containerd[1461]: time="2025-01-30T13:56:02.352244081Z" level=info msg="CreateContainer within sandbox \"ea90561012baadb9547bf3b42f317f12b884b7c027cb86316d2309a6466e42c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:56:02.375140 containerd[1461]: time="2025-01-30T13:56:02.374727703Z" level=info msg="CreateContainer within sandbox \"ea90561012baadb9547bf3b42f317f12b884b7c027cb86316d2309a6466e42c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"87f524e8313a021711c0df44e4a887e4988e431204b5f52f8262268700514ac3\""
Jan 30 13:56:02.377107 containerd[1461]: time="2025-01-30T13:56:02.377050455Z" level=info msg="StartContainer for \"87f524e8313a021711c0df44e4a887e4988e431204b5f52f8262268700514ac3\""
Jan 30 13:56:02.433693 systemd[1]: Started cri-containerd-87f524e8313a021711c0df44e4a887e4988e431204b5f52f8262268700514ac3.scope - libcontainer container 87f524e8313a021711c0df44e4a887e4988e431204b5f52f8262268700514ac3.
Jan 30 13:56:02.442803 containerd[1461]: time="2025-01-30T13:56:02.442716131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-bq567,Uid:cfae0b76-ad46-49a4-b65e-526a3c4424f2,Namespace:tigera-operator,Attempt:0,}"
Jan 30 13:56:02.507729 containerd[1461]: time="2025-01-30T13:56:02.506691089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:56:02.507729 containerd[1461]: time="2025-01-30T13:56:02.506804059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:56:02.507729 containerd[1461]: time="2025-01-30T13:56:02.506826991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:56:02.507729 containerd[1461]: time="2025-01-30T13:56:02.507559706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:56:02.514538 containerd[1461]: time="2025-01-30T13:56:02.514426599Z" level=info msg="StartContainer for \"87f524e8313a021711c0df44e4a887e4988e431204b5f52f8262268700514ac3\" returns successfully"
Jan 30 13:56:02.551715 systemd[1]: Started cri-containerd-632e7ca330fa06000d40e67b8cce3044e15915ac057dc52ccd23691c95445c8a.scope - libcontainer container 632e7ca330fa06000d40e67b8cce3044e15915ac057dc52ccd23691c95445c8a.
Jan 30 13:56:02.651640 containerd[1461]: time="2025-01-30T13:56:02.651560854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-bq567,Uid:cfae0b76-ad46-49a4-b65e-526a3c4424f2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"632e7ca330fa06000d40e67b8cce3044e15915ac057dc52ccd23691c95445c8a\""
Jan 30 13:56:02.655787 containerd[1461]: time="2025-01-30T13:56:02.655728555Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 30 13:56:03.419078 kubelet[2515]: E0130 13:56:03.418661 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:04.332501 kubelet[2515]: E0130 13:56:04.330894 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:04.365714 kubelet[2515]: I0130 13:56:04.364912 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ww5m2" podStartSLOduration=3.364887988 podStartE2EDuration="3.364887988s" podCreationTimestamp="2025-01-30 13:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:03.443239744 +0000 UTC m=+6.281082693" watchObservedRunningTime="2025-01-30 13:56:04.364887988 +0000 UTC m=+7.202730933"
Jan 30 13:56:04.414841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776518866.mount: Deactivated successfully.
Jan 30 13:56:04.430881 kubelet[2515]: E0130 13:56:04.430079 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:04.430881 kubelet[2515]: E0130 13:56:04.430191 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:05.048483 update_engine[1450]: I20250130 13:56:05.048352 1450 update_attempter.cc:509] Updating boot flags...
Jan 30 13:56:05.118043 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2855)
Jan 30 13:56:05.243919 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2854)
Jan 30 13:56:05.315540 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2854)
Jan 30 13:56:05.448448 containerd[1461]: time="2025-01-30T13:56:05.448382186Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:56:05.449608 containerd[1461]: time="2025-01-30T13:56:05.449551293Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Jan 30 13:56:05.450552 containerd[1461]: time="2025-01-30T13:56:05.450511901Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:56:05.453639 containerd[1461]: time="2025-01-30T13:56:05.453236934Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:56:05.454369 containerd[1461]: time="2025-01-30T13:56:05.454326474Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.798553806s"
Jan 30 13:56:05.454369 containerd[1461]: time="2025-01-30T13:56:05.454370386Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 30 13:56:05.466242 containerd[1461]: time="2025-01-30T13:56:05.466181318Z" level=info msg="CreateContainer within sandbox \"632e7ca330fa06000d40e67b8cce3044e15915ac057dc52ccd23691c95445c8a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 30 13:56:05.481797 containerd[1461]: time="2025-01-30T13:56:05.481638720Z" level=info msg="CreateContainer within sandbox \"632e7ca330fa06000d40e67b8cce3044e15915ac057dc52ccd23691c95445c8a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d7e4dfda92d8f562d1adf4d405268a3674a2dcb0b5468f9e5e02fd893135d9ae\""
Jan 30 13:56:05.483349 containerd[1461]: time="2025-01-30T13:56:05.482752612Z" level=info msg="StartContainer for \"d7e4dfda92d8f562d1adf4d405268a3674a2dcb0b5468f9e5e02fd893135d9ae\""
Jan 30 13:56:05.529083 systemd[1]: run-containerd-runc-k8s.io-d7e4dfda92d8f562d1adf4d405268a3674a2dcb0b5468f9e5e02fd893135d9ae-runc.Fd1JtG.mount: Deactivated successfully.
Jan 30 13:56:05.538030 systemd[1]: Started cri-containerd-d7e4dfda92d8f562d1adf4d405268a3674a2dcb0b5468f9e5e02fd893135d9ae.scope - libcontainer container d7e4dfda92d8f562d1adf4d405268a3674a2dcb0b5468f9e5e02fd893135d9ae.
Jan 30 13:56:05.574438 containerd[1461]: time="2025-01-30T13:56:05.573528029Z" level=info msg="StartContainer for \"d7e4dfda92d8f562d1adf4d405268a3674a2dcb0b5468f9e5e02fd893135d9ae\" returns successfully"
Jan 30 13:56:05.893480 kubelet[2515]: E0130 13:56:05.892887 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:06.437432 kubelet[2515]: E0130 13:56:06.436588 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:06.475151 kubelet[2515]: I0130 13:56:06.475053 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-bq567" podStartSLOduration=1.666395015 podStartE2EDuration="4.47491225s" podCreationTimestamp="2025-01-30 13:56:02 +0000 UTC" firstStartedPulling="2025-01-30 13:56:02.654501018 +0000 UTC m=+5.492343944" lastFinishedPulling="2025-01-30 13:56:05.463018254 +0000 UTC m=+8.300861179" observedRunningTime="2025-01-30 13:56:06.472896487 +0000 UTC m=+9.310739436" watchObservedRunningTime="2025-01-30 13:56:06.47491225 +0000 UTC m=+9.312755190"
Jan 30 13:56:09.098160 systemd[1]: Created slice kubepods-besteffort-podf9f4c7cd_30e6_4174_8edb_cae62ce00f81.slice - libcontainer container kubepods-besteffort-podf9f4c7cd_30e6_4174_8edb_cae62ce00f81.slice.
Jan 30 13:56:09.228908 systemd[1]: Created slice kubepods-besteffort-pod5db3ee5e_6f53_4d59_965d_f419d0c77289.slice - libcontainer container kubepods-besteffort-pod5db3ee5e_6f53_4d59_965d_f419d0c77289.slice.
Jan 30 13:56:09.231600 kubelet[2515]: I0130 13:56:09.231384 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f9f4c7cd-30e6-4174-8edb-cae62ce00f81-typha-certs\") pod \"calico-typha-bd87fd744-ttks9\" (UID: \"f9f4c7cd-30e6-4174-8edb-cae62ce00f81\") " pod="calico-system/calico-typha-bd87fd744-ttks9"
Jan 30 13:56:09.231600 kubelet[2515]: I0130 13:56:09.231526 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5db3ee5e-6f53-4d59-965d-f419d0c77289-lib-modules\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.231600 kubelet[2515]: I0130 13:56:09.231565 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5db3ee5e-6f53-4d59-965d-f419d0c77289-policysync\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.232379 kubelet[2515]: I0130 13:56:09.231733 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5db3ee5e-6f53-4d59-965d-f419d0c77289-node-certs\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.232960 kubelet[2515]: I0130 13:56:09.232518 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5db3ee5e-6f53-4d59-965d-f419d0c77289-cni-log-dir\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.232960 kubelet[2515]: I0130 13:56:09.232568 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9f4c7cd-30e6-4174-8edb-cae62ce00f81-tigera-ca-bundle\") pod \"calico-typha-bd87fd744-ttks9\" (UID: \"f9f4c7cd-30e6-4174-8edb-cae62ce00f81\") " pod="calico-system/calico-typha-bd87fd744-ttks9"
Jan 30 13:56:09.232960 kubelet[2515]: I0130 13:56:09.232587 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5db3ee5e-6f53-4d59-965d-f419d0c77289-var-run-calico\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.232960 kubelet[2515]: I0130 13:56:09.232614 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5db3ee5e-6f53-4d59-965d-f419d0c77289-var-lib-calico\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.232960 kubelet[2515]: I0130 13:56:09.232633 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5db3ee5e-6f53-4d59-965d-f419d0c77289-flexvol-driver-host\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.233474 kubelet[2515]: I0130 13:56:09.232648 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj858\" (UniqueName: \"kubernetes.io/projected/5db3ee5e-6f53-4d59-965d-f419d0c77289-kube-api-access-pj858\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.233474 kubelet[2515]: I0130 13:56:09.232666 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5db3ee5e-6f53-4d59-965d-f419d0c77289-tigera-ca-bundle\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.233474 kubelet[2515]: I0130 13:56:09.232795 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5db3ee5e-6f53-4d59-965d-f419d0c77289-cni-bin-dir\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.233474 kubelet[2515]: I0130 13:56:09.232822 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f65tg\" (UniqueName: \"kubernetes.io/projected/f9f4c7cd-30e6-4174-8edb-cae62ce00f81-kube-api-access-f65tg\") pod \"calico-typha-bd87fd744-ttks9\" (UID: \"f9f4c7cd-30e6-4174-8edb-cae62ce00f81\") " pod="calico-system/calico-typha-bd87fd744-ttks9"
Jan 30 13:56:09.233474 kubelet[2515]: I0130 13:56:09.232861 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5db3ee5e-6f53-4d59-965d-f419d0c77289-xtables-lock\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.233626 kubelet[2515]: I0130 13:56:09.232879 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5db3ee5e-6f53-4d59-965d-f419d0c77289-cni-net-dir\") pod \"calico-node-jr8hk\" (UID: \"5db3ee5e-6f53-4d59-965d-f419d0c77289\") " pod="calico-system/calico-node-jr8hk"
Jan 30 13:56:09.374647 kubelet[2515]: E0130 13:56:09.374404 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.374647 kubelet[2515]: W0130 13:56:09.374544 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.374647 kubelet[2515]: E0130 13:56:09.374609 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.395410 kubelet[2515]: E0130 13:56:09.395140 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dshrw" podUID="16ce8e60-3b3d-4b79-86f3-2473807ac6e1"
Jan 30 13:56:09.411304 kubelet[2515]: E0130 13:56:09.408962 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.420798 kubelet[2515]: W0130 13:56:09.420725 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.420798 kubelet[2515]: E0130 13:56:09.420799 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.424907 kubelet[2515]: E0130 13:56:09.424850 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.424907 kubelet[2515]: W0130 13:56:09.424890 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.425121 kubelet[2515]: E0130 13:56:09.424926 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.427624 kubelet[2515]: E0130 13:56:09.427574 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.427624 kubelet[2515]: W0130 13:56:09.427614 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.427792 kubelet[2515]: E0130 13:56:09.427649 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.430875 kubelet[2515]: E0130 13:56:09.430815 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.430875 kubelet[2515]: W0130 13:56:09.430854 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.430875 kubelet[2515]: E0130 13:56:09.430887 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.432568 kubelet[2515]: E0130 13:56:09.432516 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.432568 kubelet[2515]: W0130 13:56:09.432551 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.432941 kubelet[2515]: E0130 13:56:09.432581 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.434392 kubelet[2515]: E0130 13:56:09.434322 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.434392 kubelet[2515]: W0130 13:56:09.434355 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.434392 kubelet[2515]: E0130 13:56:09.434384 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.435296 kubelet[2515]: E0130 13:56:09.435247 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.435633 kubelet[2515]: W0130 13:56:09.435597 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.435743 kubelet[2515]: E0130 13:56:09.435640 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.439376 kubelet[2515]: E0130 13:56:09.437386 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.439376 kubelet[2515]: W0130 13:56:09.437418 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.439376 kubelet[2515]: E0130 13:56:09.437445 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.439376 kubelet[2515]: E0130 13:56:09.437812 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.439376 kubelet[2515]: W0130 13:56:09.437831 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.439376 kubelet[2515]: E0130 13:56:09.437850 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.439376 kubelet[2515]: E0130 13:56:09.438122 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.439376 kubelet[2515]: W0130 13:56:09.438134 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.439376 kubelet[2515]: E0130 13:56:09.438148 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.439376 kubelet[2515]: E0130 13:56:09.438467 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.439775 kubelet[2515]: W0130 13:56:09.438481 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.439775 kubelet[2515]: E0130 13:56:09.438512 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.439775 kubelet[2515]: E0130 13:56:09.438873 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.439775 kubelet[2515]: W0130 13:56:09.438888 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.439775 kubelet[2515]: E0130 13:56:09.438906 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.439775 kubelet[2515]: E0130 13:56:09.439183 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.439775 kubelet[2515]: W0130 13:56:09.439199 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.439775 kubelet[2515]: E0130 13:56:09.439215 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.441473 kubelet[2515]: E0130 13:56:09.441419 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.441473 kubelet[2515]: W0130 13:56:09.441457 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.441692 kubelet[2515]: E0130 13:56:09.441486 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.441966 kubelet[2515]: E0130 13:56:09.441933 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.441966 kubelet[2515]: W0130 13:56:09.441958 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.442069 kubelet[2515]: E0130 13:56:09.441976 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.442282 kubelet[2515]: E0130 13:56:09.442246 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.442334 kubelet[2515]: W0130 13:56:09.442319 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.442373 kubelet[2515]: E0130 13:56:09.442340 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.442630 kubelet[2515]: E0130 13:56:09.442606 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.442630 kubelet[2515]: W0130 13:56:09.442624 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.442712 kubelet[2515]: E0130 13:56:09.442642 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.442919 kubelet[2515]: E0130 13:56:09.442897 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.442919 kubelet[2515]: W0130 13:56:09.442915 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.443033 kubelet[2515]: E0130 13:56:09.442930 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.443336 kubelet[2515]: E0130 13:56:09.443308 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.443336 kubelet[2515]: W0130 13:56:09.443327 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.443473 kubelet[2515]: E0130 13:56:09.443343 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.443756 kubelet[2515]: E0130 13:56:09.443718 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.443857 kubelet[2515]: W0130 13:56:09.443770 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.443942 kubelet[2515]: E0130 13:56:09.443788 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.444301 kubelet[2515]: E0130 13:56:09.444240 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.444402 kubelet[2515]: W0130 13:56:09.444366 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.444402 kubelet[2515]: E0130 13:56:09.444390 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.444917 kubelet[2515]: E0130 13:56:09.444748 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.444917 kubelet[2515]: W0130 13:56:09.444768 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.444917 kubelet[2515]: E0130 13:56:09.444784 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.445802 kubelet[2515]: E0130 13:56:09.445553 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.445802 kubelet[2515]: W0130 13:56:09.445584 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.445802 kubelet[2515]: E0130 13:56:09.445601 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.445802 kubelet[2515]: I0130 13:56:09.445673 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16ce8e60-3b3d-4b79-86f3-2473807ac6e1-kubelet-dir\") pod \"csi-node-driver-dshrw\" (UID: \"16ce8e60-3b3d-4b79-86f3-2473807ac6e1\") " pod="calico-system/csi-node-driver-dshrw"
Jan 30 13:56:09.446319 kubelet[2515]: E0130 13:56:09.446144 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.446319 kubelet[2515]: W0130 13:56:09.446165 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.446319 kubelet[2515]: E0130 13:56:09.446240 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.447569 kubelet[2515]: E0130 13:56:09.447542 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.447569 kubelet[2515]: W0130 13:56:09.447563 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.447702 kubelet[2515]: E0130 13:56:09.447582 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.448552 kubelet[2515]: E0130 13:56:09.448514 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.448637 kubelet[2515]: W0130 13:56:09.448543 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.448749 kubelet[2515]: E0130 13:56:09.448724 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.448834 kubelet[2515]: I0130 13:56:09.448777 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnhv7\" (UniqueName: \"kubernetes.io/projected/16ce8e60-3b3d-4b79-86f3-2473807ac6e1-kube-api-access-xnhv7\") pod \"csi-node-driver-dshrw\" (UID: \"16ce8e60-3b3d-4b79-86f3-2473807ac6e1\") " pod="calico-system/csi-node-driver-dshrw"
Jan 30 13:56:09.451111 kubelet[2515]: E0130 13:56:09.451064 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.451111 kubelet[2515]: W0130 13:56:09.451100 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.451369 kubelet[2515]: E0130 13:56:09.451126 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.451369 kubelet[2515]: I0130 13:56:09.451163 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/16ce8e60-3b3d-4b79-86f3-2473807ac6e1-socket-dir\") pod \"csi-node-driver-dshrw\" (UID: \"16ce8e60-3b3d-4b79-86f3-2473807ac6e1\") " pod="calico-system/csi-node-driver-dshrw"
Jan 30 13:56:09.452751 kubelet[2515]: E0130 13:56:09.452502 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.452751 kubelet[2515]: W0130 13:56:09.452531 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.452751 kubelet[2515]: E0130 13:56:09.452579 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.452751 kubelet[2515]: I0130 13:56:09.452619 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/16ce8e60-3b3d-4b79-86f3-2473807ac6e1-varrun\") pod \"csi-node-driver-dshrw\" (UID: \"16ce8e60-3b3d-4b79-86f3-2473807ac6e1\") " pod="calico-system/csi-node-driver-dshrw"
Jan 30 13:56:09.453562 kubelet[2515]: E0130 13:56:09.453365 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.453562 kubelet[2515]: W0130 13:56:09.453388 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.453562 kubelet[2515]: E0130 13:56:09.453409 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.453562 kubelet[2515]: I0130 13:56:09.453440 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/16ce8e60-3b3d-4b79-86f3-2473807ac6e1-registration-dir\") pod \"csi-node-driver-dshrw\" (UID: \"16ce8e60-3b3d-4b79-86f3-2473807ac6e1\") " pod="calico-system/csi-node-driver-dshrw"
Jan 30 13:56:09.456076 kubelet[2515]: E0130 13:56:09.456026 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.456076 kubelet[2515]: W0130 13:56:09.456063 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.456364 kubelet[2515]: E0130 13:56:09.456091 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.456750 kubelet[2515]: E0130 13:56:09.456556 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.456750 kubelet[2515]: W0130 13:56:09.456581 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.456750 kubelet[2515]: E0130 13:56:09.456601 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.457987 kubelet[2515]: E0130 13:56:09.457428 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.457987 kubelet[2515]: W0130 13:56:09.457976 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.458145 kubelet[2515]: E0130 13:56:09.458037 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.458538 kubelet[2515]: E0130 13:56:09.458506 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.458538 kubelet[2515]: W0130 13:56:09.458532 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.459317 kubelet[2515]: E0130 13:56:09.458743 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.459399 kubelet[2515]: E0130 13:56:09.459357 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.459399 kubelet[2515]: W0130 13:56:09.459373 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.459539 kubelet[2515]: E0130 13:56:09.459514 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.460140 kubelet[2515]: E0130 13:56:09.460116 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.460140 kubelet[2515]: W0130 13:56:09.460134 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.460140 kubelet[2515]: E0130 13:56:09.460157 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.460771 kubelet[2515]: E0130 13:56:09.460741 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.460771 kubelet[2515]: W0130 13:56:09.460764 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.460856 kubelet[2515]: E0130 13:56:09.460782 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.462317 kubelet[2515]: E0130 13:56:09.461563 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.462317 kubelet[2515]: W0130 13:56:09.461585 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.462317 kubelet[2515]: E0130 13:56:09.461604 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.547195 kubelet[2515]: E0130 13:56:09.547124 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:09.553881 containerd[1461]: time="2025-01-30T13:56:09.553820184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jr8hk,Uid:5db3ee5e-6f53-4d59-965d-f419d0c77289,Namespace:calico-system,Attempt:0,}"
Jan 30 13:56:09.555925 kubelet[2515]: E0130 13:56:09.555823 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.555925 kubelet[2515]: W0130 13:56:09.555846 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.555925 kubelet[2515]: E0130 13:56:09.555870 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.556623 kubelet[2515]: E0130 13:56:09.556296 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.556623 kubelet[2515]: W0130 13:56:09.556312 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.556623 kubelet[2515]: E0130 13:56:09.556327 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.557405 kubelet[2515]: E0130 13:56:09.557208 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.557405 kubelet[2515]: W0130 13:56:09.557226 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.557405 kubelet[2515]: E0130 13:56:09.557257 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.558769 kubelet[2515]: E0130 13:56:09.558571 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.558769 kubelet[2515]: W0130 13:56:09.558592 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.558769 kubelet[2515]: E0130 13:56:09.558651 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.562108 kubelet[2515]: E0130 13:56:09.562047 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.562108 kubelet[2515]: W0130 13:56:09.562090 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.562335 kubelet[2515]: E0130 13:56:09.562243 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.565432 kubelet[2515]: E0130 13:56:09.565373 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.565432 kubelet[2515]: W0130 13:56:09.565410 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.565754 kubelet[2515]: E0130 13:56:09.565688 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.603417 kubelet[2515]: E0130 13:56:09.566941 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.603417 kubelet[2515]: W0130 13:56:09.566981 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.604313 kubelet[2515]: E0130 13:56:09.603849 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.609087 kubelet[2515]: E0130 13:56:09.605713 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.609087 kubelet[2515]: W0130 13:56:09.605747 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.609087 kubelet[2515]: E0130 13:56:09.605901 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.609087 kubelet[2515]: E0130 13:56:09.606387 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.609087 kubelet[2515]: W0130 13:56:09.606406 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.609087 kubelet[2515]: E0130 13:56:09.606506 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.609087 kubelet[2515]: E0130 13:56:09.608013 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.609087 kubelet[2515]: W0130 13:56:09.608030 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.609087 kubelet[2515]: E0130 13:56:09.608208 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.609087 kubelet[2515]: E0130 13:56:09.608259 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.609580 kubelet[2515]: W0130 13:56:09.608279 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.609580 kubelet[2515]: E0130 13:56:09.608458 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.609580 kubelet[2515]: E0130 13:56:09.608630 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.609580 kubelet[2515]: W0130 13:56:09.608638 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.609580 kubelet[2515]: E0130 13:56:09.608771 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.609580 kubelet[2515]: E0130 13:56:09.608808 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.609580 kubelet[2515]: W0130 13:56:09.608814 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.609580 kubelet[2515]: E0130 13:56:09.608904 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:56:09.609580 kubelet[2515]: E0130 13:56:09.609004 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:56:09.609580 kubelet[2515]: W0130 13:56:09.609012 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:56:09.610031 kubelet[2515]: E0130 13:56:09.609087 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 30 13:56:09.610031 kubelet[2515]: E0130 13:56:09.609313 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.610031 kubelet[2515]: W0130 13:56:09.609326 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.610031 kubelet[2515]: E0130 13:56:09.609346 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:09.610031 kubelet[2515]: E0130 13:56:09.609580 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.610031 kubelet[2515]: W0130 13:56:09.609590 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.610031 kubelet[2515]: E0130 13:56:09.609826 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:09.610330 kubelet[2515]: E0130 13:56:09.610178 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.610330 kubelet[2515]: W0130 13:56:09.610193 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.610330 kubelet[2515]: E0130 13:56:09.610251 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:09.613525 kubelet[2515]: E0130 13:56:09.613463 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.613525 kubelet[2515]: W0130 13:56:09.613507 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.613525 kubelet[2515]: E0130 13:56:09.613550 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:09.616035 kubelet[2515]: E0130 13:56:09.613980 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.616035 kubelet[2515]: W0130 13:56:09.614009 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.616035 kubelet[2515]: E0130 13:56:09.614030 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:09.616035 kubelet[2515]: E0130 13:56:09.614515 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.616035 kubelet[2515]: W0130 13:56:09.614534 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.616035 kubelet[2515]: E0130 13:56:09.614553 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:09.616035 kubelet[2515]: E0130 13:56:09.614853 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.616035 kubelet[2515]: W0130 13:56:09.614868 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.616035 kubelet[2515]: E0130 13:56:09.614889 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:09.616035 kubelet[2515]: E0130 13:56:09.615255 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.616619 kubelet[2515]: W0130 13:56:09.615285 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.616619 kubelet[2515]: E0130 13:56:09.615319 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:09.616619 kubelet[2515]: E0130 13:56:09.615585 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.616619 kubelet[2515]: W0130 13:56:09.615598 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.616619 kubelet[2515]: E0130 13:56:09.615614 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:09.616619 kubelet[2515]: E0130 13:56:09.615854 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.616619 kubelet[2515]: W0130 13:56:09.615868 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.616619 kubelet[2515]: E0130 13:56:09.615883 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:09.617617 kubelet[2515]: E0130 13:56:09.617580 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.617617 kubelet[2515]: W0130 13:56:09.617608 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.617617 kubelet[2515]: E0130 13:56:09.617631 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:09.632623 containerd[1461]: time="2025-01-30T13:56:09.632005484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:09.633385 containerd[1461]: time="2025-01-30T13:56:09.632424847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:09.635163 containerd[1461]: time="2025-01-30T13:56:09.634174509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:09.635297 containerd[1461]: time="2025-01-30T13:56:09.635201937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:09.660024 kubelet[2515]: E0130 13:56:09.659582 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:09.660024 kubelet[2515]: W0130 13:56:09.659622 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:09.660024 kubelet[2515]: E0130 13:56:09.659654 2515 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:09.705327 kubelet[2515]: E0130 13:56:09.704025 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:09.706612 systemd[1]: Started cri-containerd-33e15d5809ec96e0433e00bd7d1169d23257b28714a8269eeff88f0ee0abce20.scope - libcontainer container 33e15d5809ec96e0433e00bd7d1169d23257b28714a8269eeff88f0ee0abce20. 
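The driver-call failures above come from kubelet's FlexVolume probe loop: it executes each driver binary found under the volume-plugin directory with the argument init and decodes the stdout as JSON. Because the nodeagent~uds/uds executable is absent on this node, the call produces no output, and decoding an empty byte slice is exactly what yields Go's "unexpected end of JSON input". A minimal sketch of that failure mode, using the driver path from the log (the raw exec error text differs slightly from kubelet's wrapper, which reports "executable file not found in $PATH"):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Driver path taken from the log; the binary does not exist on this node,
    	// so Output() returns an error together with an empty byte slice.
    	out, err := exec.Command(
    		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
    		"init",
    	).Output()
    	fmt.Println("exec error:", err)

    	// A present FlexVolume driver would answer `init` with JSON such as
    	// {"status":"Success","capabilities":{"attach":false}}. With empty
    	// output, decoding fails with the same error the entries above report.
    	var status map[string]interface{}
    	if err := json.Unmarshal(out, &status); err != nil {
    		fmt.Println("unmarshal error:", err) // unexpected end of JSON input
    	}
    }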
Jan 30 13:56:09.711062 kubelet[2515]: E0130 13:56:09.710635 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:09.713817 containerd[1461]: time="2025-01-30T13:56:09.713750649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bd87fd744-ttks9,Uid:f9f4c7cd-30e6-4174-8edb-cae62ce00f81,Namespace:calico-system,Attempt:0,}"
Jan 30 13:56:09.812310 containerd[1461]: time="2025-01-30T13:56:09.811326851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:56:09.812310 containerd[1461]: time="2025-01-30T13:56:09.811442852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:56:09.812896 containerd[1461]: time="2025-01-30T13:56:09.812703757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:56:09.814474 containerd[1461]: time="2025-01-30T13:56:09.813661051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:56:09.863574 systemd[1]: Started cri-containerd-12682d788a28d1f98a32f96a74ff8b3eba07391fe727370f6af7360a83c75a7e.scope - libcontainer container 12682d788a28d1f98a32f96a74ff8b3eba07391fe727370f6af7360a83c75a7e.
Jan 30 13:56:09.938505 containerd[1461]: time="2025-01-30T13:56:09.938446272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jr8hk,Uid:5db3ee5e-6f53-4d59-965d-f419d0c77289,Namespace:calico-system,Attempt:0,} returns sandbox id \"33e15d5809ec96e0433e00bd7d1169d23257b28714a8269eeff88f0ee0abce20\""
Jan 30 13:56:09.950043 kubelet[2515]: E0130 13:56:09.950000 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:09.998343 containerd[1461]: time="2025-01-30T13:56:09.997024113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 30 13:56:10.033659 containerd[1461]: time="2025-01-30T13:56:10.033582880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bd87fd744-ttks9,Uid:f9f4c7cd-30e6-4174-8edb-cae62ce00f81,Namespace:calico-system,Attempt:0,} returns sandbox id \"12682d788a28d1f98a32f96a74ff8b3eba07391fe727370f6af7360a83c75a7e\""
Jan 30 13:56:10.036896 kubelet[2515]: E0130 13:56:10.036794 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:10.479653 kubelet[2515]: E0130 13:56:10.479174 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
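The recurring dns.go:153 entries reflect the resolver's classic three-nameserver ceiling (glibc's MAXNS): when the nameserver list kubelet applies to a pod's resolv.conf would exceed three entries, it keeps the first three and logs that some were omitted. The duplicated 67.207.67.3 in the applied line suggests the source list itself contains repeats. A hedged sketch of that trimming; applyNameserverLimit is an illustrative helper, not kubelet's API, and the fourth entry below is a stand-in for whatever was actually omitted:

    package main

    import "fmt"

    // maxNameservers mirrors the resolver limit of three entries.
    const maxNameservers = 3

    // applyNameserverLimit keeps the first three nameservers, matching the
    // "applied nameserver line" shown in the log. Illustrative only.
    func applyNameserverLimit(servers []string) (applied []string, omitted bool) {
    	if len(servers) <= maxNameservers {
    		return servers, false
    	}
    	return servers[:maxNameservers], true
    }

    func main() {
    	// 198.51.100.1 is a documentation address standing in for the omitted entry.
    	src := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "198.51.100.1"}
    	applied, omitted := applyNameserverLimit(src)
    	fmt.Println(applied, "omitted:", omitted)
    	// [67.207.67.3 67.207.67.2 67.207.67.3] omitted: true
    }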
Jan 30 13:56:11.317350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2814909204.mount: Deactivated successfully.
Jan 30 13:56:11.347314 kubelet[2515]: E0130 13:56:11.346562 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dshrw" podUID="16ce8e60-3b3d-4b79-86f3-2473807ac6e1"
Jan 30 13:56:11.515889 containerd[1461]: time="2025-01-30T13:56:11.514405908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:56:11.515889 containerd[1461]: time="2025-01-30T13:56:11.515735173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 30 13:56:11.515889 containerd[1461]: time="2025-01-30T13:56:11.515807867Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:56:11.518928 containerd[1461]: time="2025-01-30T13:56:11.518868391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:56:11.520280 containerd[1461]: time="2025-01-30T13:56:11.520196824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.523108221s"
Jan 30 13:56:11.520280 containerd[1461]: time="2025-01-30T13:56:11.520282257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 30 13:56:11.522142 containerd[1461]: time="2025-01-30T13:56:11.522085563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 30 13:56:11.531907 containerd[1461]: time="2025-01-30T13:56:11.531830006Z" level=info msg="CreateContainer within sandbox \"33e15d5809ec96e0433e00bd7d1169d23257b28714a8269eeff88f0ee0abce20\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 30 13:56:11.551105 containerd[1461]: time="2025-01-30T13:56:11.549491392Z" level=info msg="CreateContainer within sandbox \"33e15d5809ec96e0433e00bd7d1169d23257b28714a8269eeff88f0ee0abce20\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c090c3975271c79f214dbd1708ead013b77b14c49cd3a52b734e4da451764940\""
Jan 30 13:56:11.552965 containerd[1461]: time="2025-01-30T13:56:11.551437552Z" level=info msg="StartContainer for \"c090c3975271c79f214dbd1708ead013b77b14c49cd3a52b734e4da451764940\""
Jan 30 13:56:11.614647 systemd[1]: Started cri-containerd-c090c3975271c79f214dbd1708ead013b77b14c49cd3a52b734e4da451764940.scope - libcontainer container c090c3975271c79f214dbd1708ead013b77b14c49cd3a52b734e4da451764940.
Jan 30 13:56:11.653432 containerd[1461]: time="2025-01-30T13:56:11.653253855Z" level=info msg="StartContainer for \"c090c3975271c79f214dbd1708ead013b77b14c49cd3a52b734e4da451764940\" returns successfully"
Jan 30 13:56:11.685939 systemd[1]: cri-containerd-c090c3975271c79f214dbd1708ead013b77b14c49cd3a52b734e4da451764940.scope: Deactivated successfully.
Jan 30 13:56:11.739711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c090c3975271c79f214dbd1708ead013b77b14c49cd3a52b734e4da451764940-rootfs.mount: Deactivated successfully.
Jan 30 13:56:11.754706 containerd[1461]: time="2025-01-30T13:56:11.750347218Z" level=info msg="shim disconnected" id=c090c3975271c79f214dbd1708ead013b77b14c49cd3a52b734e4da451764940 namespace=k8s.io
Jan 30 13:56:11.754706 containerd[1461]: time="2025-01-30T13:56:11.754211224Z" level=warning msg="cleaning up after shim disconnected" id=c090c3975271c79f214dbd1708ead013b77b14c49cd3a52b734e4da451764940 namespace=k8s.io
Jan 30 13:56:11.754706 containerd[1461]: time="2025-01-30T13:56:11.754238991Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:56:11.790142 containerd[1461]: time="2025-01-30T13:56:11.788087763Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:56:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:56:12.493316 kubelet[2515]: E0130 13:56:12.493004 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:13.349632 kubelet[2515]: E0130 13:56:13.349413 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dshrw" podUID="16ce8e60-3b3d-4b79-86f3-2473807ac6e1"
Jan 30 13:56:14.300310 containerd[1461]: time="2025-01-30T13:56:14.300225724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:56:14.308146 containerd[1461]: time="2025-01-30T13:56:14.308053579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Jan 30 13:56:14.311391 containerd[1461]: time="2025-01-30T13:56:14.311304798Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:56:14.318192 containerd[1461]: time="2025-01-30T13:56:14.318123353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:56:14.320550 containerd[1461]: time="2025-01-30T13:56:14.320479457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.797896903s"
Jan 30 13:56:14.320878 containerd[1461]: time="2025-01-30T13:56:14.320738031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 30 13:56:14.326609 containerd[1461]: time="2025-01-30T13:56:14.326148456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 30 13:56:14.358395 containerd[1461]: time="2025-01-30T13:56:14.358243384Z" level=info msg="CreateContainer within sandbox \"12682d788a28d1f98a32f96a74ff8b3eba07391fe727370f6af7360a83c75a7e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 30 13:56:14.395717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2483383419.mount: Deactivated successfully.
Jan 30 13:56:14.408446 containerd[1461]: time="2025-01-30T13:56:14.408382273Z" level=info msg="CreateContainer within sandbox \"12682d788a28d1f98a32f96a74ff8b3eba07391fe727370f6af7360a83c75a7e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9b2d46d1be4d83dd10ee75aab0d79fab4ef1de35cbd38e7e24b147a81f0f580e\""
Jan 30 13:56:14.410016 containerd[1461]: time="2025-01-30T13:56:14.409957165Z" level=info msg="StartContainer for \"9b2d46d1be4d83dd10ee75aab0d79fab4ef1de35cbd38e7e24b147a81f0f580e\""
Jan 30 13:56:14.471641 systemd[1]: Started cri-containerd-9b2d46d1be4d83dd10ee75aab0d79fab4ef1de35cbd38e7e24b147a81f0f580e.scope - libcontainer container 9b2d46d1be4d83dd10ee75aab0d79fab4ef1de35cbd38e7e24b147a81f0f580e.
Jan 30 13:56:14.564732 containerd[1461]: time="2025-01-30T13:56:14.563184651Z" level=info msg="StartContainer for \"9b2d46d1be4d83dd10ee75aab0d79fab4ef1de35cbd38e7e24b147a81f0f580e\" returns successfully"
Jan 30 13:56:15.347657 kubelet[2515]: E0130 13:56:15.345852 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dshrw" podUID="16ce8e60-3b3d-4b79-86f3-2473807ac6e1"
Jan 30 13:56:15.515994 kubelet[2515]: E0130 13:56:15.515944 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:15.536659 kubelet[2515]: I0130 13:56:15.536516 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bd87fd744-ttks9" podStartSLOduration=3.250554088 podStartE2EDuration="7.53649806s" podCreationTimestamp="2025-01-30 13:56:08 +0000 UTC" firstStartedPulling="2025-01-30 13:56:10.038463094 +0000 UTC m=+12.876306024" lastFinishedPulling="2025-01-30 13:56:14.324407055 +0000 UTC m=+17.162249996" observedRunningTime="2025-01-30 13:56:15.535791611 +0000 UTC m=+18.373634542" watchObservedRunningTime="2025-01-30 13:56:15.53649806 +0000 UTC m=+18.374341006"
Jan 30 13:56:16.519159 kubelet[2515]: I0130 13:56:16.518987 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:56:16.521382 kubelet[2515]: E0130 13:56:16.521330 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:17.347443 kubelet[2515]: E0130 13:56:17.346847 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dshrw" podUID="16ce8e60-3b3d-4b79-86f3-2473807ac6e1"
pod="calico-system/csi-node-driver-dshrw" podUID="16ce8e60-3b3d-4b79-86f3-2473807ac6e1" Jan 30 13:56:18.630015 containerd[1461]: time="2025-01-30T13:56:18.629930004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:18.631312 containerd[1461]: time="2025-01-30T13:56:18.630925891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:56:18.632389 containerd[1461]: time="2025-01-30T13:56:18.631855623Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:18.634656 containerd[1461]: time="2025-01-30T13:56:18.634599357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:18.635745 containerd[1461]: time="2025-01-30T13:56:18.635696616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.30949989s" Jan 30 13:56:18.635745 containerd[1461]: time="2025-01-30T13:56:18.635741109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:56:18.640893 containerd[1461]: time="2025-01-30T13:56:18.640832565Z" level=info msg="CreateContainer within sandbox \"33e15d5809ec96e0433e00bd7d1169d23257b28714a8269eeff88f0ee0abce20\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:56:18.664404 containerd[1461]: time="2025-01-30T13:56:18.664338026Z" level=info msg="CreateContainer within sandbox \"33e15d5809ec96e0433e00bd7d1169d23257b28714a8269eeff88f0ee0abce20\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"46b8f441d5bd5a480a186304fb4148e4235b56c116b82f0daaebc77705abd483\"" Jan 30 13:56:18.665455 containerd[1461]: time="2025-01-30T13:56:18.665386484Z" level=info msg="StartContainer for \"46b8f441d5bd5a480a186304fb4148e4235b56c116b82f0daaebc77705abd483\"" Jan 30 13:56:18.766626 systemd[1]: Started cri-containerd-46b8f441d5bd5a480a186304fb4148e4235b56c116b82f0daaebc77705abd483.scope - libcontainer container 46b8f441d5bd5a480a186304fb4148e4235b56c116b82f0daaebc77705abd483. Jan 30 13:56:18.820034 containerd[1461]: time="2025-01-30T13:56:18.819966313Z" level=info msg="StartContainer for \"46b8f441d5bd5a480a186304fb4148e4235b56c116b82f0daaebc77705abd483\" returns successfully" Jan 30 13:56:19.348774 kubelet[2515]: E0130 13:56:19.348700 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dshrw" podUID="16ce8e60-3b3d-4b79-86f3-2473807ac6e1" Jan 30 13:56:19.529036 systemd[1]: cri-containerd-46b8f441d5bd5a480a186304fb4148e4235b56c116b82f0daaebc77705abd483.scope: Deactivated successfully. 
Jan 30 13:56:19.544413 kubelet[2515]: E0130 13:56:19.542116 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:19.603978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46b8f441d5bd5a480a186304fb4148e4235b56c116b82f0daaebc77705abd483-rootfs.mount: Deactivated successfully.
Jan 30 13:56:19.625284 containerd[1461]: time="2025-01-30T13:56:19.625155100Z" level=info msg="shim disconnected" id=46b8f441d5bd5a480a186304fb4148e4235b56c116b82f0daaebc77705abd483 namespace=k8s.io
Jan 30 13:56:19.625967 containerd[1461]: time="2025-01-30T13:56:19.625665735Z" level=warning msg="cleaning up after shim disconnected" id=46b8f441d5bd5a480a186304fb4148e4235b56c116b82f0daaebc77705abd483 namespace=k8s.io
Jan 30 13:56:19.625967 containerd[1461]: time="2025-01-30T13:56:19.625696492Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:56:19.649331 kubelet[2515]: I0130 13:56:19.649170 2515 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 30 13:56:19.714624 systemd[1]: Created slice kubepods-burstable-pod3564606d_2944_426a_9353_f6fcadcd5c0d.slice - libcontainer container kubepods-burstable-pod3564606d_2944_426a_9353_f6fcadcd5c0d.slice.
Jan 30 13:56:19.730046 systemd[1]: Created slice kubepods-besteffort-pod744872b6_19c7_43d9_a52d_0bde01815327.slice - libcontainer container kubepods-besteffort-pod744872b6_19c7_43d9_a52d_0bde01815327.slice.
Jan 30 13:56:19.742995 systemd[1]: Created slice kubepods-besteffort-pode9049fb6_9111_4307_b692_0bfdfb1f5bc6.slice - libcontainer container kubepods-besteffort-pode9049fb6_9111_4307_b692_0bfdfb1f5bc6.slice.
Jan 30 13:56:19.772077 systemd[1]: Created slice kubepods-besteffort-pod35e04996_ffc7_4ea4_8fe9_3fe75da55979.slice - libcontainer container kubepods-besteffort-pod35e04996_ffc7_4ea4_8fe9_3fe75da55979.slice.
Jan 30 13:56:19.782698 systemd[1]: Created slice kubepods-burstable-pod054a7459_c704_4251_80d1_8bd9fe52159e.slice - libcontainer container kubepods-burstable-pod054a7459_c704_4251_80d1_8bd9fe52159e.slice.
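The slice names above are the per-pod cgroup units kubelet asks systemd to create: QoS class plus the pod UID with dashes replaced by underscores, since systemd unit names use "-" as a hierarchy separator (compare UID 054a7459-c704-4251-80d1-8bd9fe52159e in the volume entries below with kubepods-burstable-pod054a7459_c704_4251_80d1_8bd9fe52159e.slice above). A sketch of the mapping; podSliceName is an illustrative helper, not kubelet's API:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName reproduces the naming visible in the systemd entries above.
    func podSliceName(qosClass, podUID string) string {
    	return "kubepods-" + qosClass + "-pod" +
    		strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
    	fmt.Println(podSliceName("burstable", "054a7459-c704-4251-80d1-8bd9fe52159e"))
    	// kubepods-burstable-pod054a7459_c704_4251_80d1_8bd9fe52159e.slice
    }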
Jan 30 13:56:19.798671 kubelet[2515]: I0130 13:56:19.798111 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgmtg\" (UniqueName: \"kubernetes.io/projected/35e04996-ffc7-4ea4-8fe9-3fe75da55979-kube-api-access-qgmtg\") pod \"calico-apiserver-5dfd54899b-rdf7t\" (UID: \"35e04996-ffc7-4ea4-8fe9-3fe75da55979\") " pod="calico-apiserver/calico-apiserver-5dfd54899b-rdf7t"
Jan 30 13:56:19.798671 kubelet[2515]: I0130 13:56:19.798176 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/054a7459-c704-4251-80d1-8bd9fe52159e-config-volume\") pod \"coredns-6f6b679f8f-zd2pv\" (UID: \"054a7459-c704-4251-80d1-8bd9fe52159e\") " pod="kube-system/coredns-6f6b679f8f-zd2pv"
Jan 30 13:56:19.798671 kubelet[2515]: I0130 13:56:19.798210 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/744872b6-19c7-43d9-a52d-0bde01815327-tigera-ca-bundle\") pod \"calico-kube-controllers-7d8c6dbb4c-cq6nk\" (UID: \"744872b6-19c7-43d9-a52d-0bde01815327\") " pod="calico-system/calico-kube-controllers-7d8c6dbb4c-cq6nk"
Jan 30 13:56:19.798671 kubelet[2515]: I0130 13:56:19.798311 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3564606d-2944-426a-9353-f6fcadcd5c0d-config-volume\") pod \"coredns-6f6b679f8f-d7hjc\" (UID: \"3564606d-2944-426a-9353-f6fcadcd5c0d\") " pod="kube-system/coredns-6f6b679f8f-d7hjc"
Jan 30 13:56:19.798671 kubelet[2515]: I0130 13:56:19.798414 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ks7j\" (UniqueName: \"kubernetes.io/projected/3564606d-2944-426a-9353-f6fcadcd5c0d-kube-api-access-7ks7j\") pod \"coredns-6f6b679f8f-d7hjc\" (UID: \"3564606d-2944-426a-9353-f6fcadcd5c0d\") " pod="kube-system/coredns-6f6b679f8f-d7hjc"
Jan 30 13:56:19.799021 kubelet[2515]: I0130 13:56:19.798473 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/35e04996-ffc7-4ea4-8fe9-3fe75da55979-calico-apiserver-certs\") pod \"calico-apiserver-5dfd54899b-rdf7t\" (UID: \"35e04996-ffc7-4ea4-8fe9-3fe75da55979\") " pod="calico-apiserver/calico-apiserver-5dfd54899b-rdf7t"
Jan 30 13:56:19.799021 kubelet[2515]: I0130 13:56:19.798494 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btbbp\" (UniqueName: \"kubernetes.io/projected/744872b6-19c7-43d9-a52d-0bde01815327-kube-api-access-btbbp\") pod \"calico-kube-controllers-7d8c6dbb4c-cq6nk\" (UID: \"744872b6-19c7-43d9-a52d-0bde01815327\") " pod="calico-system/calico-kube-controllers-7d8c6dbb4c-cq6nk"
Jan 30 13:56:19.799021 kubelet[2515]: I0130 13:56:19.798519 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnhmp\" (UniqueName: \"kubernetes.io/projected/e9049fb6-9111-4307-b692-0bfdfb1f5bc6-kube-api-access-cnhmp\") pod \"calico-apiserver-5dfd54899b-bpksv\" (UID: \"e9049fb6-9111-4307-b692-0bfdfb1f5bc6\") " pod="calico-apiserver/calico-apiserver-5dfd54899b-bpksv"
Jan 30 13:56:19.799021 kubelet[2515]: I0130 13:56:19.798541 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gsgz\" (UniqueName: \"kubernetes.io/projected/054a7459-c704-4251-80d1-8bd9fe52159e-kube-api-access-6gsgz\") pod \"coredns-6f6b679f8f-zd2pv\" (UID: \"054a7459-c704-4251-80d1-8bd9fe52159e\") " pod="kube-system/coredns-6f6b679f8f-zd2pv"
Jan 30 13:56:19.799583 kubelet[2515]: I0130 13:56:19.798560 2515 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e9049fb6-9111-4307-b692-0bfdfb1f5bc6-calico-apiserver-certs\") pod \"calico-apiserver-5dfd54899b-bpksv\" (UID: \"e9049fb6-9111-4307-b692-0bfdfb1f5bc6\") " pod="calico-apiserver/calico-apiserver-5dfd54899b-bpksv"
Jan 30 13:56:20.021455 kubelet[2515]: E0130 13:56:20.021394 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:20.022287 containerd[1461]: time="2025-01-30T13:56:20.022226214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d7hjc,Uid:3564606d-2944-426a-9353-f6fcadcd5c0d,Namespace:kube-system,Attempt:0,}"
Jan 30 13:56:20.039832 containerd[1461]: time="2025-01-30T13:56:20.039560519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8c6dbb4c-cq6nk,Uid:744872b6-19c7-43d9-a52d-0bde01815327,Namespace:calico-system,Attempt:0,}"
Jan 30 13:56:20.052088 containerd[1461]: time="2025-01-30T13:56:20.052027505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfd54899b-bpksv,Uid:e9049fb6-9111-4307-b692-0bfdfb1f5bc6,Namespace:calico-apiserver,Attempt:0,}"
Jan 30 13:56:20.089918 kubelet[2515]: E0130 13:56:20.089129 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:56:20.099867 containerd[1461]: time="2025-01-30T13:56:20.099803300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zd2pv,Uid:054a7459-c704-4251-80d1-8bd9fe52159e,Namespace:kube-system,Attempt:0,}"
Jan 30 13:56:20.102545 containerd[1461]: time="2025-01-30T13:56:20.102231883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfd54899b-rdf7t,Uid:35e04996-ffc7-4ea4-8fe9-3fe75da55979,Namespace:calico-apiserver,Attempt:0,}"
Jan 30 13:56:20.511633 containerd[1461]: time="2025-01-30T13:56:20.511464097Z" level=error msg="Failed to destroy network for sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:56:20.520233 containerd[1461]: time="2025-01-30T13:56:20.520135575Z" level=error msg="Failed to destroy network for sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:56:20.523471 containerd[1461]: time="2025-01-30T13:56:20.523387684Z" level=error msg="encountered an error cleaning up failed sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.524554 containerd[1461]: time="2025-01-30T13:56:20.524498456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfd54899b-rdf7t,Uid:35e04996-ffc7-4ea4-8fe9-3fe75da55979,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.525937 containerd[1461]: time="2025-01-30T13:56:20.525334468Z" level=error msg="encountered an error cleaning up failed sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.525937 containerd[1461]: time="2025-01-30T13:56:20.525447811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d7hjc,Uid:3564606d-2944-426a-9353-f6fcadcd5c0d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.534962 kubelet[2515]: E0130 13:56:20.533581 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.534962 kubelet[2515]: E0130 13:56:20.533665 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-d7hjc" Jan 30 13:56:20.534962 kubelet[2515]: E0130 13:56:20.533689 2515 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-d7hjc" Jan 30 13:56:20.536106 kubelet[2515]: E0130 13:56:20.533740 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-d7hjc_kube-system(3564606d-2944-426a-9353-f6fcadcd5c0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-d7hjc_kube-system(3564606d-2944-426a-9353-f6fcadcd5c0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-d7hjc" podUID="3564606d-2944-426a-9353-f6fcadcd5c0d" Jan 30 13:56:20.536106 kubelet[2515]: E0130 13:56:20.535521 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.536106 kubelet[2515]: E0130 13:56:20.535618 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dfd54899b-rdf7t" Jan 30 13:56:20.537644 kubelet[2515]: E0130 13:56:20.535647 2515 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dfd54899b-rdf7t" Jan 30 13:56:20.537644 kubelet[2515]: E0130 13:56:20.535705 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dfd54899b-rdf7t_calico-apiserver(35e04996-ffc7-4ea4-8fe9-3fe75da55979)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dfd54899b-rdf7t_calico-apiserver(35e04996-ffc7-4ea4-8fe9-3fe75da55979)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dfd54899b-rdf7t" podUID="35e04996-ffc7-4ea4-8fe9-3fe75da55979" Jan 30 13:56:20.552582 containerd[1461]: time="2025-01-30T13:56:20.551685527Z" level=error msg="Failed to destroy network for sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.552582 containerd[1461]: time="2025-01-30T13:56:20.552177596Z" level=error msg="encountered an error cleaning up failed sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.552582 containerd[1461]: time="2025-01-30T13:56:20.552258391Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zd2pv,Uid:054a7459-c704-4251-80d1-8bd9fe52159e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.555788 kubelet[2515]: E0130 13:56:20.554715 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.555788 kubelet[2515]: E0130 13:56:20.554801 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-zd2pv" Jan 30 13:56:20.555788 kubelet[2515]: E0130 13:56:20.554837 2515 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-zd2pv" Jan 30 13:56:20.556076 kubelet[2515]: E0130 13:56:20.554896 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-zd2pv_kube-system(054a7459-c704-4251-80d1-8bd9fe52159e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-zd2pv_kube-system(054a7459-c704-4251-80d1-8bd9fe52159e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-zd2pv" podUID="054a7459-c704-4251-80d1-8bd9fe52159e" Jan 30 13:56:20.561934 kubelet[2515]: I0130 13:56:20.560468 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:20.566410 containerd[1461]: time="2025-01-30T13:56:20.564624403Z" level=error msg="Failed to destroy network for sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.566410 containerd[1461]: time="2025-01-30T13:56:20.565246913Z" level=error msg="encountered an error cleaning up failed sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.566410 containerd[1461]: time="2025-01-30T13:56:20.565357116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfd54899b-bpksv,Uid:e9049fb6-9111-4307-b692-0bfdfb1f5bc6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.571033 kubelet[2515]: E0130 13:56:20.570606 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.571033 kubelet[2515]: E0130 13:56:20.570694 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dfd54899b-bpksv" Jan 30 13:56:20.571033 kubelet[2515]: E0130 13:56:20.570730 2515 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dfd54899b-bpksv" Jan 30 13:56:20.571345 kubelet[2515]: E0130 13:56:20.570787 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dfd54899b-bpksv_calico-apiserver(e9049fb6-9111-4307-b692-0bfdfb1f5bc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dfd54899b-bpksv_calico-apiserver(e9049fb6-9111-4307-b692-0bfdfb1f5bc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dfd54899b-bpksv" podUID="e9049fb6-9111-4307-b692-0bfdfb1f5bc6" Jan 30 13:56:20.572670 containerd[1461]: time="2025-01-30T13:56:20.572612656Z" level=info msg="StopPodSandbox for \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\"" Jan 30 13:56:20.573608 containerd[1461]: time="2025-01-30T13:56:20.573471988Z" level=error msg="Failed to destroy network for sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.575633 kubelet[2515]: E0130 13:56:20.575533 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:20.576589 containerd[1461]: time="2025-01-30T13:56:20.574256060Z" level=error msg="encountered an error cleaning up failed sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.576589 containerd[1461]: time="2025-01-30T13:56:20.575914778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8c6dbb4c-cq6nk,Uid:744872b6-19c7-43d9-a52d-0bde01815327,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.578229 kubelet[2515]: E0130 13:56:20.577232 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.578229 kubelet[2515]: E0130 13:56:20.577391 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d8c6dbb4c-cq6nk" Jan 30 13:56:20.578229 kubelet[2515]: E0130 13:56:20.577427 2515 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d8c6dbb4c-cq6nk" Jan 30 13:56:20.579707 kubelet[2515]: E0130 13:56:20.577643 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d8c6dbb4c-cq6nk_calico-system(744872b6-19c7-43d9-a52d-0bde01815327)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d8c6dbb4c-cq6nk_calico-system(744872b6-19c7-43d9-a52d-0bde01815327)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-7d8c6dbb4c-cq6nk" podUID="744872b6-19c7-43d9-a52d-0bde01815327" Jan 30 13:56:20.579799 containerd[1461]: time="2025-01-30T13:56:20.578658036Z" level=info msg="Ensure that sandbox 122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e in task-service has been cleanup successfully" Jan 30 13:56:20.583707 containerd[1461]: time="2025-01-30T13:56:20.583657039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:56:20.591229 kubelet[2515]: I0130 13:56:20.591169 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:20.592588 containerd[1461]: time="2025-01-30T13:56:20.592541225Z" level=info msg="StopPodSandbox for \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\"" Jan 30 13:56:20.593251 containerd[1461]: time="2025-01-30T13:56:20.592935783Z" level=info msg="Ensure that sandbox 3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4 in task-service has been cleanup successfully" Jan 30 13:56:20.661676 containerd[1461]: time="2025-01-30T13:56:20.661599476Z" level=error msg="StopPodSandbox for \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\" failed" error="failed to destroy network for sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.662337 kubelet[2515]: E0130 13:56:20.662052 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:20.662337 kubelet[2515]: E0130 13:56:20.662135 2515 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e"} Jan 30 13:56:20.662337 kubelet[2515]: E0130 13:56:20.662223 2515 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3564606d-2944-426a-9353-f6fcadcd5c0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:20.662337 kubelet[2515]: E0130 13:56:20.662258 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3564606d-2944-426a-9353-f6fcadcd5c0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-d7hjc" podUID="3564606d-2944-426a-9353-f6fcadcd5c0d" 
Jan 30 13:56:20.678342 containerd[1461]: time="2025-01-30T13:56:20.678138262Z" level=error msg="StopPodSandbox for \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\" failed" error="failed to destroy network for sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:20.678591 kubelet[2515]: E0130 13:56:20.678505 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:20.678591 kubelet[2515]: E0130 13:56:20.678574 2515 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4"} Jan 30 13:56:20.678718 kubelet[2515]: E0130 13:56:20.678634 2515 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35e04996-ffc7-4ea4-8fe9-3fe75da55979\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:20.678718 kubelet[2515]: E0130 13:56:20.678665 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35e04996-ffc7-4ea4-8fe9-3fe75da55979\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dfd54899b-rdf7t" podUID="35e04996-ffc7-4ea4-8fe9-3fe75da55979" Jan 30 13:56:21.361056 systemd[1]: Created slice kubepods-besteffort-pod16ce8e60_3b3d_4b79_86f3_2473807ac6e1.slice - libcontainer container kubepods-besteffort-pod16ce8e60_3b3d_4b79_86f3_2473807ac6e1.slice. 
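The "Created slice" records show how kubelet's systemd cgroup driver names pod cgroups: the QoS class plus the pod UID with its dashes swapped for underscores. A small illustration of that convention; the helper below is hypothetical and verified only against the names in this log:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the naming visible in the "Created slice" records:
// kubepods-<qosClass>-pod<uid with dashes replaced by underscores>.slice.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// csi-node-driver-dshrw's UID from the record above:
	fmt.Println(podSliceName("besteffort", "16ce8e60-3b3d-4b79-86f3-2473807ac6e1"))
	// -> kubepods-besteffort-pod16ce8e60_3b3d_4b79_86f3_2473807ac6e1.slice
}
```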
Jan 30 13:56:21.369350 containerd[1461]: time="2025-01-30T13:56:21.369139012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dshrw,Uid:16ce8e60-3b3d-4b79-86f3-2473807ac6e1,Namespace:calico-system,Attempt:0,}" Jan 30 13:56:21.458883 containerd[1461]: time="2025-01-30T13:56:21.458804055Z" level=error msg="Failed to destroy network for sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:21.460847 containerd[1461]: time="2025-01-30T13:56:21.459431187Z" level=error msg="encountered an error cleaning up failed sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:21.460847 containerd[1461]: time="2025-01-30T13:56:21.459532214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dshrw,Uid:16ce8e60-3b3d-4b79-86f3-2473807ac6e1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:21.462821 kubelet[2515]: E0130 13:56:21.461724 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:21.462821 kubelet[2515]: E0130 13:56:21.461808 2515 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dshrw" Jan 30 13:56:21.462821 kubelet[2515]: E0130 13:56:21.461836 2515 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dshrw" Jan 30 13:56:21.463103 kubelet[2515]: E0130 13:56:21.461876 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dshrw_calico-system(16ce8e60-3b3d-4b79-86f3-2473807ac6e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dshrw_calico-system(16ce8e60-3b3d-4b79-86f3-2473807ac6e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dshrw" podUID="16ce8e60-3b3d-4b79-86f3-2473807ac6e1" Jan 30 13:56:21.464714 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768-shm.mount: Deactivated successfully. Jan 30 13:56:21.596455 kubelet[2515]: I0130 13:56:21.595933 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:21.597877 containerd[1461]: time="2025-01-30T13:56:21.597332819Z" level=info msg="StopPodSandbox for \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\"" Jan 30 13:56:21.597877 containerd[1461]: time="2025-01-30T13:56:21.597592711Z" level=info msg="Ensure that sandbox fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d in task-service has been cleanup successfully" Jan 30 13:56:21.602436 kubelet[2515]: I0130 13:56:21.601947 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:21.605779 containerd[1461]: time="2025-01-30T13:56:21.605020429Z" level=info msg="StopPodSandbox for \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\"" Jan 30 13:56:21.606438 containerd[1461]: time="2025-01-30T13:56:21.606377718Z" level=info msg="Ensure that sandbox 82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c in task-service has been cleanup successfully" Jan 30 13:56:21.610310 kubelet[2515]: I0130 13:56:21.610187 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:21.615823 containerd[1461]: time="2025-01-30T13:56:21.613609274Z" level=info msg="StopPodSandbox for \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\"" Jan 30 13:56:21.615823 containerd[1461]: time="2025-01-30T13:56:21.613876795Z" level=info msg="Ensure that sandbox d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768 in task-service has been cleanup successfully" Jan 30 13:56:21.623704 kubelet[2515]: I0130 13:56:21.623639 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Jan 30 13:56:21.629220 containerd[1461]: time="2025-01-30T13:56:21.628653616Z" level=info msg="StopPodSandbox for \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\"" Jan 30 13:56:21.629220 containerd[1461]: time="2025-01-30T13:56:21.628944597Z" level=info msg="Ensure that sandbox b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68 in task-service has been cleanup successfully" Jan 30 13:56:21.710215 containerd[1461]: time="2025-01-30T13:56:21.710083871Z" level=error msg="StopPodSandbox for \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\" failed" error="failed to destroy network for sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:21.710889 
kubelet[2515]: E0130 13:56:21.710834 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:21.711304 kubelet[2515]: E0130 13:56:21.711158 2515 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c"} Jan 30 13:56:21.711452 kubelet[2515]: E0130 13:56:21.711430 2515 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9049fb6-9111-4307-b692-0bfdfb1f5bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:21.711646 kubelet[2515]: E0130 13:56:21.711611 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9049fb6-9111-4307-b692-0bfdfb1f5bc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dfd54899b-bpksv" podUID="e9049fb6-9111-4307-b692-0bfdfb1f5bc6" Jan 30 13:56:21.715226 containerd[1461]: time="2025-01-30T13:56:21.714971475Z" level=error msg="StopPodSandbox for \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\" failed" error="failed to destroy network for sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:21.717966 kubelet[2515]: E0130 13:56:21.716934 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:21.717966 kubelet[2515]: E0130 13:56:21.717002 2515 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768"} Jan 30 13:56:21.717966 kubelet[2515]: E0130 13:56:21.717054 2515 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16ce8e60-3b3d-4b79-86f3-2473807ac6e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:21.717966 kubelet[2515]: E0130 13:56:21.717088 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16ce8e60-3b3d-4b79-86f3-2473807ac6e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dshrw" podUID="16ce8e60-3b3d-4b79-86f3-2473807ac6e1" Jan 30 13:56:21.719028 containerd[1461]: time="2025-01-30T13:56:21.717743744Z" level=error msg="StopPodSandbox for \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\" failed" error="failed to destroy network for sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:21.721136 kubelet[2515]: E0130 13:56:21.720877 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:21.721136 kubelet[2515]: E0130 13:56:21.720928 2515 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d"} Jan 30 13:56:21.721136 kubelet[2515]: E0130 13:56:21.720971 2515 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"744872b6-19c7-43d9-a52d-0bde01815327\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:21.721136 kubelet[2515]: E0130 13:56:21.721004 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"744872b6-19c7-43d9-a52d-0bde01815327\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d8c6dbb4c-cq6nk" podUID="744872b6-19c7-43d9-a52d-0bde01815327" Jan 30 13:56:21.721739 containerd[1461]: time="2025-01-30T13:56:21.721692872Z" level=error msg="StopPodSandbox for \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\" 
failed" error="failed to destroy network for sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:56:21.721991 kubelet[2515]: E0130 13:56:21.721952 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Jan 30 13:56:21.722045 kubelet[2515]: E0130 13:56:21.722005 2515 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68"} Jan 30 13:56:21.722045 kubelet[2515]: E0130 13:56:21.722038 2515 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"054a7459-c704-4251-80d1-8bd9fe52159e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:56:21.722160 kubelet[2515]: E0130 13:56:21.722063 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"054a7459-c704-4251-80d1-8bd9fe52159e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-zd2pv" podUID="054a7459-c704-4251-80d1-8bd9fe52159e" Jan 30 13:56:27.134797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324374291.mount: Deactivated successfully. 
Jan 30 13:56:27.255974 containerd[1461]: time="2025-01-30T13:56:27.223819471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:56:27.255974 containerd[1461]: time="2025-01-30T13:56:27.246974093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:27.289646 containerd[1461]: time="2025-01-30T13:56:27.289573791Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:27.294218 containerd[1461]: time="2025-01-30T13:56:27.294143950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:27.299711 containerd[1461]: time="2025-01-30T13:56:27.299485426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.711149675s" Jan 30 13:56:27.299711 containerd[1461]: time="2025-01-30T13:56:27.299552886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:56:27.403731 containerd[1461]: time="2025-01-30T13:56:27.403626920Z" level=info msg="CreateContainer within sandbox \"33e15d5809ec96e0433e00bd7d1169d23257b28714a8269eeff88f0ee0abce20\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:56:27.538607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3869053798.mount: Deactivated successfully. Jan 30 13:56:27.557227 containerd[1461]: time="2025-01-30T13:56:27.557070755Z" level=info msg="CreateContainer within sandbox \"33e15d5809ec96e0433e00bd7d1169d23257b28714a8269eeff88f0ee0abce20\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c9d2c5deed4c4487d8bf06f79d3adbe88fdb70997ef07c01813f6e64e2a36607\"" Jan 30 13:56:27.567307 containerd[1461]: time="2025-01-30T13:56:27.563572474Z" level=info msg="StartContainer for \"c9d2c5deed4c4487d8bf06f79d3adbe88fdb70997ef07c01813f6e64e2a36607\"" Jan 30 13:56:27.735629 systemd[1]: Started cri-containerd-c9d2c5deed4c4487d8bf06f79d3adbe88fdb70997ef07c01813f6e64e2a36607.scope - libcontainer container c9d2c5deed4c4487d8bf06f79d3adbe88fdb70997ef07c01813f6e64e2a36607. Jan 30 13:56:27.808629 containerd[1461]: time="2025-01-30T13:56:27.808545896Z" level=info msg="StartContainer for \"c9d2c5deed4c4487d8bf06f79d3adbe88fdb70997ef07c01813f6e64e2a36607\" returns successfully" Jan 30 13:56:27.905830 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:56:27.905984 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
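Worked out from the "Pulled image" record above: a reported size of 142,741,872 bytes in 6.711149675s is roughly 21.3 MB/s, so the calico/node image pull alone accounts for about 6.7s of the node's startup delay. The arithmetic:

```go
package main

import "fmt"

func main() {
	const bytes = 142741872     // size reported in the "Pulled image" record
	const seconds = 6.711149675 // pull duration from the same record
	rate := float64(bytes) / seconds
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
	// ~21.3 MB/s (~20.3 MiB/s)
}
```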
Jan 30 13:56:28.472493 kubelet[2515]: I0130 13:56:28.471316 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:56:28.472493 kubelet[2515]: E0130 13:56:28.471878 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:28.711429 kubelet[2515]: E0130 13:56:28.710946 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:28.711429 kubelet[2515]: E0130 13:56:28.711037 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:29.714138 kubelet[2515]: E0130 13:56:29.713737 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:30.718178 kubelet[2515]: E0130 13:56:30.718125 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:31.031328 kernel: bpftool[3824]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:56:31.441409 systemd-networkd[1361]: vxlan.calico: Link UP Jan 30 13:56:31.441426 systemd-networkd[1361]: vxlan.calico: Gained carrier Jan 30 13:56:32.348708 containerd[1461]: time="2025-01-30T13:56:32.347435667Z" level=info msg="StopPodSandbox for \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\"" Jan 30 13:56:32.348708 containerd[1461]: time="2025-01-30T13:56:32.348194763Z" level=info msg="StopPodSandbox for \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\"" Jan 30 13:56:32.480864 kubelet[2515]: I0130 13:56:32.480586 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jr8hk" podStartSLOduration=6.163044384 podStartE2EDuration="23.472595344s" podCreationTimestamp="2025-01-30 13:56:09 +0000 UTC" firstStartedPulling="2025-01-30 13:56:09.996417553 +0000 UTC m=+12.834260495" lastFinishedPulling="2025-01-30 13:56:27.305968529 +0000 UTC m=+30.143811455" observedRunningTime="2025-01-30 13:56:28.73589328 +0000 UTC m=+31.573736228" watchObservedRunningTime="2025-01-30 13:56:32.472595344 +0000 UTC m=+35.310438284" Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.473 [INFO][3924] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.475 [INFO][3924] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" iface="eth0" netns="/var/run/netns/cni-c6e9fc95-00e4-2ffe-f154-b9c644e0eb38" Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.476 [INFO][3924] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" iface="eth0" netns="/var/run/netns/cni-c6e9fc95-00e4-2ffe-f154-b9c644e0eb38" Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.478 [INFO][3924] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" iface="eth0" netns="/var/run/netns/cni-c6e9fc95-00e4-2ffe-f154-b9c644e0eb38" Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.478 [INFO][3924] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.478 [INFO][3924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.657 [INFO][3938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" HandleID="k8s-pod-network.3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.659 [INFO][3938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.660 [INFO][3938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.677 [WARNING][3938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" HandleID="k8s-pod-network.3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.677 [INFO][3938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" HandleID="k8s-pod-network.3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.680 [INFO][3938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:32.690137 containerd[1461]: 2025-01-30 13:56:32.684 [INFO][3924] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:32.695203 containerd[1461]: time="2025-01-30T13:56:32.691599580Z" level=info msg="TearDown network for sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\" successfully" Jan 30 13:56:32.695203 containerd[1461]: time="2025-01-30T13:56:32.691674051Z" level=info msg="StopPodSandbox for \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\" returns successfully" Jan 30 13:56:32.697222 systemd[1]: run-netns-cni\x2dc6e9fc95\x2d00e4\x2d2ffe\x2df154\x2db9c644e0eb38.mount: Deactivated successfully. 
Jan 30 13:56:32.705814 containerd[1461]: time="2025-01-30T13:56:32.705749910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfd54899b-rdf7t,Uid:35e04996-ffc7-4ea4-8fe9-3fe75da55979,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.486 [INFO][3925] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.486 [INFO][3925] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" iface="eth0" netns="/var/run/netns/cni-bb184ca4-c16a-6b9d-fd35-60bb6c0c62ed" Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.488 [INFO][3925] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" iface="eth0" netns="/var/run/netns/cni-bb184ca4-c16a-6b9d-fd35-60bb6c0c62ed" Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.491 [INFO][3925] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" iface="eth0" netns="/var/run/netns/cni-bb184ca4-c16a-6b9d-fd35-60bb6c0c62ed" Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.491 [INFO][3925] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.491 [INFO][3925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.657 [INFO][3940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" HandleID="k8s-pod-network.122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.659 [INFO][3940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.680 [INFO][3940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.708 [WARNING][3940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" HandleID="k8s-pod-network.122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.708 [INFO][3940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" HandleID="k8s-pod-network.122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.712 [INFO][3940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:32.721740 containerd[1461]: 2025-01-30 13:56:32.717 [INFO][3925] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:32.723202 containerd[1461]: time="2025-01-30T13:56:32.722917893Z" level=info msg="TearDown network for sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\" successfully" Jan 30 13:56:32.723202 containerd[1461]: time="2025-01-30T13:56:32.722986025Z" level=info msg="StopPodSandbox for \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\" returns successfully" Jan 30 13:56:32.723755 kubelet[2515]: E0130 13:56:32.723680 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:32.728303 containerd[1461]: time="2025-01-30T13:56:32.726949541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d7hjc,Uid:3564606d-2944-426a-9353-f6fcadcd5c0d,Namespace:kube-system,Attempt:1,}" Jan 30 13:56:32.732543 systemd[1]: run-netns-cni\x2dbb184ca4\x2dc16a\x2d6b9d\x2dfd35\x2d60bb6c0c62ed.mount: Deactivated successfully. Jan 30 13:56:33.012439 systemd-networkd[1361]: cali3d4c847a3aa: Link UP Jan 30 13:56:33.012804 systemd-networkd[1361]: cali3d4c847a3aa: Gained carrier Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.843 [INFO][3953] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0 coredns-6f6b679f8f- kube-system 3564606d-2944-426a-9353-f6fcadcd5c0d 772 0 2025-01-30 13:56:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-b-c9e031af59 coredns-6f6b679f8f-d7hjc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3d4c847a3aa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" Namespace="kube-system" Pod="coredns-6f6b679f8f-d7hjc" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.843 [INFO][3953] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" Namespace="kube-system" Pod="coredns-6f6b679f8f-d7hjc" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.912 [INFO][3976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" HandleID="k8s-pod-network.c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.934 [INFO][3976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" HandleID="k8s-pod-network.c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bcca0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-b-c9e031af59", "pod":"coredns-6f6b679f8f-d7hjc", "timestamp":"2025-01-30 
13:56:32.912305755 +0000 UTC"}, Hostname:"ci-4081.3.0-b-c9e031af59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.934 [INFO][3976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.934 [INFO][3976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.934 [INFO][3976] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-b-c9e031af59' Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.939 [INFO][3976] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.951 [INFO][3976] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.962 [INFO][3976] ipam/ipam.go 489: Trying affinity for 192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.967 [INFO][3976] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.970 [INFO][3976] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.970 [INFO][3976] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.973 [INFO][3976] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.980 [INFO][3976] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.997 [INFO][3976] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.1/26] block=192.168.31.0/26 handle="k8s-pod-network.c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.997 [INFO][3976] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.1/26] handle="k8s-pod-network.c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.997 [INFO][3976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
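The assignment above is the block-affinity path: look up the node's affine blocks, confirm and load 192.168.31.0/26, then claim the first free address in it (192.168.31.1 here). A rough sketch of the first-free selection step, assuming a plain in-memory set where Calico really uses a per-block allocation bitmap in its datastore; firstFree and next are illustrative helpers:

package main

import (
	"fmt"
	"net"
)

// firstFree scans a block (e.g. 192.168.31.0/26) and returns the first
// address not yet allocated, or nil if the block is exhausted.
func firstFree(block *net.IPNet, allocated map[string]bool) net.IP {
	for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = next(ip) {
		if !allocated[ip.String()] {
			return ip
		}
	}
	return nil
}

// next returns ip+1, carrying across byte boundaries.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.31.0/26")
	// .0 is assumed already reserved (e.g. for the node itself); the first
	// pod allocation then lands on .1, as in the log above.
	allocated := map[string]bool{"192.168.31.0": true}
	fmt.Println(firstFree(block, allocated)) // 192.168.31.1
}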
Jan 30 13:56:33.041666 containerd[1461]: 2025-01-30 13:56:32.997 [INFO][3976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.1/26] IPv6=[] ContainerID="c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" HandleID="k8s-pod-network.c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:33.044740 containerd[1461]: 2025-01-30 13:56:33.002 [INFO][3953] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" Namespace="kube-system" Pod="coredns-6f6b679f8f-d7hjc" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3564606d-2944-426a-9353-f6fcadcd5c0d", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"", Pod:"coredns-6f6b679f8f-d7hjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d4c847a3aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:33.044740 containerd[1461]: 2025-01-30 13:56:33.002 [INFO][3953] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.1/32] ContainerID="c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" Namespace="kube-system" Pod="coredns-6f6b679f8f-d7hjc" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:33.044740 containerd[1461]: 2025-01-30 13:56:33.002 [INFO][3953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d4c847a3aa ContainerID="c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" Namespace="kube-system" Pod="coredns-6f6b679f8f-d7hjc" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:33.044740 containerd[1461]: 2025-01-30 13:56:33.011 [INFO][3953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" Namespace="kube-system" Pod="coredns-6f6b679f8f-d7hjc" 
WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:33.044740 containerd[1461]: 2025-01-30 13:56:33.012 [INFO][3953] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" Namespace="kube-system" Pod="coredns-6f6b679f8f-d7hjc" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3564606d-2944-426a-9353-f6fcadcd5c0d", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d", Pod:"coredns-6f6b679f8f-d7hjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d4c847a3aa", MAC:"32:dd:4c:62:1a:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:33.044740 containerd[1461]: 2025-01-30 13:56:33.037 [INFO][3953] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d" Namespace="kube-system" Pod="coredns-6f6b679f8f-d7hjc" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:33.101180 containerd[1461]: time="2025-01-30T13:56:33.100939528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:33.101180 containerd[1461]: time="2025-01-30T13:56:33.101043052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:33.101180 containerd[1461]: time="2025-01-30T13:56:33.101076817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:33.102797 containerd[1461]: time="2025-01-30T13:56:33.102103304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:33.148343 systemd-networkd[1361]: cali5a2b3de1a4f: Link UP Jan 30 13:56:33.150555 systemd[1]: Started cri-containerd-c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d.scope - libcontainer container c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d. Jan 30 13:56:33.151129 systemd-networkd[1361]: cali5a2b3de1a4f: Gained carrier Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:32.868 [INFO][3954] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0 calico-apiserver-5dfd54899b- calico-apiserver 35e04996-ffc7-4ea4-8fe9-3fe75da55979 771 0 2025-01-30 13:56:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dfd54899b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-b-c9e031af59 calico-apiserver-5dfd54899b-rdf7t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5a2b3de1a4f [] []}} ContainerID="a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-rdf7t" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:32.870 [INFO][3954] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-rdf7t" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:32.954 [INFO][3982] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" HandleID="k8s-pod-network.a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:32.969 [INFO][3982] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" HandleID="k8s-pod-network.a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039ab00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-b-c9e031af59", "pod":"calico-apiserver-5dfd54899b-rdf7t", "timestamp":"2025-01-30 13:56:32.954603914 +0000 UTC"}, Hostname:"ci-4081.3.0-b-c9e031af59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:32.969 [INFO][3982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:32.998 [INFO][3982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:32.998 [INFO][3982] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-b-c9e031af59' Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.040 [INFO][3982] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.055 [INFO][3982] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.071 [INFO][3982] ipam/ipam.go 489: Trying affinity for 192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.075 [INFO][3982] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.083 [INFO][3982] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.083 [INFO][3982] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.091 [INFO][3982] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.106 [INFO][3982] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.124 [INFO][3982] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.2/26] block=192.168.31.0/26 handle="k8s-pod-network.a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.124 [INFO][3982] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.2/26] handle="k8s-pod-network.a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.124 [INFO][3982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
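Both sandboxes draw from the same host-affine block, so the addresses come out sequentially: .1 for coredns-d7hjc above, .2 for the apiserver here, and .3/.4 for the coredns and csi-node-driver pods further down. A /26 block gives the node 64 addresses before a second block must be claimed; checking that arithmetic:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, err := net.ParseCIDR("192.168.31.0/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("%s holds %d addresses\n", block, 1<<(bits-ones)) // 64
}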
Jan 30 13:56:33.206497 containerd[1461]: 2025-01-30 13:56:33.124 [INFO][3982] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.2/26] IPv6=[] ContainerID="a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" HandleID="k8s-pod-network.a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:33.209566 containerd[1461]: 2025-01-30 13:56:33.129 [INFO][3954] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-rdf7t" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0", GenerateName:"calico-apiserver-5dfd54899b-", Namespace:"calico-apiserver", SelfLink:"", UID:"35e04996-ffc7-4ea4-8fe9-3fe75da55979", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfd54899b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"", Pod:"calico-apiserver-5dfd54899b-rdf7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a2b3de1a4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:33.209566 containerd[1461]: 2025-01-30 13:56:33.129 [INFO][3954] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.2/32] ContainerID="a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-rdf7t" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:33.209566 containerd[1461]: 2025-01-30 13:56:33.129 [INFO][3954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a2b3de1a4f ContainerID="a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-rdf7t" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:33.209566 containerd[1461]: 2025-01-30 13:56:33.158 [INFO][3954] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-rdf7t" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:33.209566 containerd[1461]: 2025-01-30 13:56:33.160 [INFO][3954] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-rdf7t" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0", GenerateName:"calico-apiserver-5dfd54899b-", Namespace:"calico-apiserver", SelfLink:"", UID:"35e04996-ffc7-4ea4-8fe9-3fe75da55979", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfd54899b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c", Pod:"calico-apiserver-5dfd54899b-rdf7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a2b3de1a4f", MAC:"16:99:08:fb:7f:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:33.209566 containerd[1461]: 2025-01-30 13:56:33.188 [INFO][3954] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-rdf7t" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:33.285708 containerd[1461]: time="2025-01-30T13:56:33.284969171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:33.285708 containerd[1461]: time="2025-01-30T13:56:33.285066760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:33.285708 containerd[1461]: time="2025-01-30T13:56:33.285091094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:33.285708 containerd[1461]: time="2025-01-30T13:56:33.285240531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:33.306807 containerd[1461]: time="2025-01-30T13:56:33.306445615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-d7hjc,Uid:3564606d-2944-426a-9353-f6fcadcd5c0d,Namespace:kube-system,Attempt:1,} returns sandbox id \"c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d\"" Jan 30 13:56:33.322489 kubelet[2515]: E0130 13:56:33.322446 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:33.330603 systemd[1]: Started cri-containerd-a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c.scope - libcontainer container a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c. Jan 30 13:56:33.360380 containerd[1461]: time="2025-01-30T13:56:33.360167905Z" level=info msg="CreateContainer within sandbox \"c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:56:33.361585 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL Jan 30 13:56:33.419934 containerd[1461]: time="2025-01-30T13:56:33.419320134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfd54899b-rdf7t,Uid:35e04996-ffc7-4ea4-8fe9-3fe75da55979,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c\"" Jan 30 13:56:33.422565 containerd[1461]: time="2025-01-30T13:56:33.422519602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:56:33.427673 containerd[1461]: time="2025-01-30T13:56:33.427499464Z" level=info msg="CreateContainer within sandbox \"c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"955fd451333bf0786eb0c1a56b1de9f3ded1621d4d32f0074e2a6300f22911b8\"" Jan 30 13:56:33.432310 containerd[1461]: time="2025-01-30T13:56:33.430129390Z" level=info msg="StartContainer for \"955fd451333bf0786eb0c1a56b1de9f3ded1621d4d32f0074e2a6300f22911b8\"" Jan 30 13:56:33.518750 systemd[1]: Started cri-containerd-955fd451333bf0786eb0c1a56b1de9f3ded1621d4d32f0074e2a6300f22911b8.scope - libcontainer container 955fd451333bf0786eb0c1a56b1de9f3ded1621d4d32f0074e2a6300f22911b8. 
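The recurring kubelet "Nameserver limits exceeded" error is about the droplet's /etc/resolv.conf, not the cluster: the classic glibc resolver honours only the first three nameserver entries (MAXNS = 3), so kubelet truncates the host list before handing it to pods, and the surviving line "67.207.67.3 67.207.67.2 67.207.67.3" still contains a duplicate — showing the list is cut, not deduplicated. A hedged sketch of that truncation, not kubelet's actual dns.go code; the fourth host entry is hypothetical:

package main

import "fmt"

// maxNS mirrors the traditional glibc resolver limit of three nameservers.
const maxNS = 3

// capNameservers keeps only the first maxNS entries, duplicates and all,
// matching the behaviour visible in the log line.
func capNameservers(ns []string) []string {
	if len(ns) <= maxNS {
		return ns
	}
	return ns[:maxNS]
}

func main() {
	// Assumed host resolv.conf; only the first three entries survive.
	host := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "8.8.8.8"}
	fmt.Println(capNameservers(host)) // [67.207.67.3 67.207.67.2 67.207.67.3]
}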
Jan 30 13:56:33.623959 containerd[1461]: time="2025-01-30T13:56:33.623809807Z" level=info msg="StartContainer for \"955fd451333bf0786eb0c1a56b1de9f3ded1621d4d32f0074e2a6300f22911b8\" returns successfully" Jan 30 13:56:33.764462 kubelet[2515]: E0130 13:56:33.764412 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:34.347501 containerd[1461]: time="2025-01-30T13:56:34.347441480Z" level=info msg="StopPodSandbox for \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\"" Jan 30 13:56:34.350311 containerd[1461]: time="2025-01-30T13:56:34.349195924Z" level=info msg="StopPodSandbox for \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\"" Jan 30 13:56:34.458116 kubelet[2515]: I0130 13:56:34.458014 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-d7hjc" podStartSLOduration=32.457986192 podStartE2EDuration="32.457986192s" podCreationTimestamp="2025-01-30 13:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:33.852168797 +0000 UTC m=+36.690011744" watchObservedRunningTime="2025-01-30 13:56:34.457986192 +0000 UTC m=+37.295829138" Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.459 [INFO][4164] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.459 [INFO][4164] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" iface="eth0" netns="/var/run/netns/cni-4b3fcef8-dfc3-7698-b45d-634e654c9f9e" Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.462 [INFO][4164] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" iface="eth0" netns="/var/run/netns/cni-4b3fcef8-dfc3-7698-b45d-634e654c9f9e" Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.462 [INFO][4164] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" iface="eth0" netns="/var/run/netns/cni-4b3fcef8-dfc3-7698-b45d-634e654c9f9e" Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.462 [INFO][4164] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.462 [INFO][4164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.525 [INFO][4181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" HandleID="k8s-pod-network.b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.525 [INFO][4181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
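The pod_startup_latency_tracker entry is plain clock arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling timestamps mean image pull time is excluded here. Reproducing the 32.457986192s figure from the timestamps in the log:

package main

import (
	"fmt"
	"time"
)

func mustParse(layout, s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created := mustParse(layout, "2025-01-30 13:56:02 +0000 UTC")
	running := mustParse(layout, "2025-01-30 13:56:34.457986192 +0000 UTC")
	fmt.Println(running.Sub(created)) // 32.457986192s, the logged podStartSLOduration
}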
Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.525 [INFO][4181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.548 [WARNING][4181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" HandleID="k8s-pod-network.b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.548 [INFO][4181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" HandleID="k8s-pod-network.b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.552 [INFO][4181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:34.561304 containerd[1461]: 2025-01-30 13:56:34.557 [INFO][4164] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Jan 30 13:56:34.563394 containerd[1461]: time="2025-01-30T13:56:34.562489931Z" level=info msg="TearDown network for sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\" successfully" Jan 30 13:56:34.563394 containerd[1461]: time="2025-01-30T13:56:34.562542624Z" level=info msg="StopPodSandbox for \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\" returns successfully" Jan 30 13:56:34.568529 kubelet[2515]: E0130 13:56:34.566085 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:34.568694 containerd[1461]: time="2025-01-30T13:56:34.567593006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zd2pv,Uid:054a7459-c704-4251-80d1-8bd9fe52159e,Namespace:kube-system,Attempt:1,}" Jan 30 13:56:34.571889 systemd[1]: run-netns-cni\x2d4b3fcef8\x2ddfc3\x2d7698\x2db45d\x2d634e654c9f9e.mount: Deactivated successfully. Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.482 [INFO][4168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.486 [INFO][4168] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" iface="eth0" netns="/var/run/netns/cni-73b76b0e-44ae-ff68-3514-0d674235fc8c" Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.487 [INFO][4168] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" iface="eth0" netns="/var/run/netns/cni-73b76b0e-44ae-ff68-3514-0d674235fc8c" Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.492 [INFO][4168] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" iface="eth0" netns="/var/run/netns/cni-73b76b0e-44ae-ff68-3514-0d674235fc8c" Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.493 [INFO][4168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.493 [INFO][4168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.576 [INFO][4186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" HandleID="k8s-pod-network.d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.576 [INFO][4186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.576 [INFO][4186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.586 [WARNING][4186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" HandleID="k8s-pod-network.d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.586 [INFO][4186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" HandleID="k8s-pod-network.d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.589 [INFO][4186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:34.603584 containerd[1461]: 2025-01-30 13:56:34.592 [INFO][4168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:34.608661 containerd[1461]: time="2025-01-30T13:56:34.603812896Z" level=info msg="TearDown network for sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\" successfully" Jan 30 13:56:34.608661 containerd[1461]: time="2025-01-30T13:56:34.608655788Z" level=info msg="StopPodSandbox for \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\" returns successfully" Jan 30 13:56:34.617972 systemd[1]: run-netns-cni\x2d73b76b0e\x2d44ae\x2dff68\x2d3514\x2d0d674235fc8c.mount: Deactivated successfully. 
Jan 30 13:56:34.632727 containerd[1461]: time="2025-01-30T13:56:34.632605175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dshrw,Uid:16ce8e60-3b3d-4b79-86f3-2473807ac6e1,Namespace:calico-system,Attempt:1,}" Jan 30 13:56:34.707227 systemd-networkd[1361]: cali5a2b3de1a4f: Gained IPv6LL Jan 30 13:56:34.769241 kubelet[2515]: E0130 13:56:34.768745 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:35.027109 systemd-networkd[1361]: calibb089eb00e3: Link UP Jan 30 13:56:35.035591 systemd-networkd[1361]: calibb089eb00e3: Gained carrier Jan 30 13:56:35.090994 systemd-networkd[1361]: cali3d4c847a3aa: Gained IPv6LL Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.688 [INFO][4195] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0 coredns-6f6b679f8f- kube-system 054a7459-c704-4251-80d1-8bd9fe52159e 795 0 2025-01-30 13:56:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-b-c9e031af59 coredns-6f6b679f8f-zd2pv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibb089eb00e3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" Namespace="kube-system" Pod="coredns-6f6b679f8f-zd2pv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.688 [INFO][4195] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" Namespace="kube-system" Pod="coredns-6f6b679f8f-zd2pv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.793 [INFO][4216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" HandleID="k8s-pod-network.0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.913 [INFO][4216] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" HandleID="k8s-pod-network.0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334d60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-b-c9e031af59", "pod":"coredns-6f6b679f8f-zd2pv", "timestamp":"2025-01-30 13:56:34.793063936 +0000 UTC"}, Hostname:"ci-4081.3.0-b-c9e031af59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.915 [INFO][4216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.915 [INFO][4216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.915 [INFO][4216] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-b-c9e031af59' Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.929 [INFO][4216] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.938 [INFO][4216] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.949 [INFO][4216] ipam/ipam.go 489: Trying affinity for 192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.953 [INFO][4216] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.957 [INFO][4216] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.958 [INFO][4216] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.960 [INFO][4216] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793 Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:34.967 [INFO][4216] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:35.010 [INFO][4216] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.3/26] block=192.168.31.0/26 handle="k8s-pod-network.0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:35.012 [INFO][4216] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.3/26] handle="k8s-pod-network.0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:35.012 [INFO][4216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:35.120138 containerd[1461]: 2025-01-30 13:56:35.012 [INFO][4216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.3/26] IPv6=[] ContainerID="0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" HandleID="k8s-pod-network.0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:35.133545 containerd[1461]: 2025-01-30 13:56:35.019 [INFO][4195] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" Namespace="kube-system" Pod="coredns-6f6b679f8f-zd2pv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"054a7459-c704-4251-80d1-8bd9fe52159e", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"", Pod:"coredns-6f6b679f8f-zd2pv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb089eb00e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:35.133545 containerd[1461]: 2025-01-30 13:56:35.019 [INFO][4195] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.3/32] ContainerID="0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" Namespace="kube-system" Pod="coredns-6f6b679f8f-zd2pv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:35.133545 containerd[1461]: 2025-01-30 13:56:35.019 [INFO][4195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb089eb00e3 ContainerID="0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" Namespace="kube-system" Pod="coredns-6f6b679f8f-zd2pv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:35.133545 containerd[1461]: 2025-01-30 13:56:35.033 [INFO][4195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" Namespace="kube-system" Pod="coredns-6f6b679f8f-zd2pv" 
WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:35.133545 containerd[1461]: 2025-01-30 13:56:35.036 [INFO][4195] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" Namespace="kube-system" Pod="coredns-6f6b679f8f-zd2pv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"054a7459-c704-4251-80d1-8bd9fe52159e", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793", Pod:"coredns-6f6b679f8f-zd2pv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb089eb00e3", MAC:"5a:42:15:e1:47:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:35.133545 containerd[1461]: 2025-01-30 13:56:35.108 [INFO][4195] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793" Namespace="kube-system" Pod="coredns-6f6b679f8f-zd2pv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:35.228813 systemd-networkd[1361]: cali78028eb8545: Link UP Jan 30 13:56:35.230200 systemd-networkd[1361]: cali78028eb8545: Gained carrier Jan 30 13:56:35.242277 containerd[1461]: time="2025-01-30T13:56:35.241934585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:35.242277 containerd[1461]: time="2025-01-30T13:56:35.242028598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:35.242277 containerd[1461]: time="2025-01-30T13:56:35.242050558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:35.243797 containerd[1461]: time="2025-01-30T13:56:35.243398586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:34.743 [INFO][4205] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0 csi-node-driver- calico-system 16ce8e60-3b3d-4b79-86f3-2473807ac6e1 796 0 2025-01-30 13:56:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-b-c9e031af59 csi-node-driver-dshrw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali78028eb8545 [] []}} ContainerID="675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" Namespace="calico-system" Pod="csi-node-driver-dshrw" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:34.744 [INFO][4205] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" Namespace="calico-system" Pod="csi-node-driver-dshrw" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:34.856 [INFO][4222] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" HandleID="k8s-pod-network.675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:34.925 [INFO][4222] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" HandleID="k8s-pod-network.675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031afa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-b-c9e031af59", "pod":"csi-node-driver-dshrw", "timestamp":"2025-01-30 13:56:34.856024756 +0000 UTC"}, Hostname:"ci-4081.3.0-b-c9e031af59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:34.925 [INFO][4222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.012 [INFO][4222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.013 [INFO][4222] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-b-c9e031af59' Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.028 [INFO][4222] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.052 [INFO][4222] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.135 [INFO][4222] ipam/ipam.go 489: Trying affinity for 192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.154 [INFO][4222] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.161 [INFO][4222] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.161 [INFO][4222] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.170 [INFO][4222] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.185 [INFO][4222] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.203 [INFO][4222] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.4/26] block=192.168.31.0/26 handle="k8s-pod-network.675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.207 [INFO][4222] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.4/26] handle="k8s-pod-network.675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.208 [INFO][4222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
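One readability note on the endpoint dumps above: WorkloadEndpointPort values print as Go hex literals, so Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the coredns metrics port). Decoding them:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	for _, h := range []string{"0x35", "0x23c1"} {
		v, err := strconv.ParseUint(h, 0, 16) // base 0 honours the 0x prefix
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s = %d\n", h, v) // 0x35 = 53, 0x23c1 = 9153
	}
}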
Jan 30 13:56:35.293843 containerd[1461]: 2025-01-30 13:56:35.209 [INFO][4222] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.4/26] IPv6=[] ContainerID="675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" HandleID="k8s-pod-network.675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:35.297985 containerd[1461]: 2025-01-30 13:56:35.214 [INFO][4205] cni-plugin/k8s.go 386: Populated endpoint ContainerID="675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" Namespace="calico-system" Pod="csi-node-driver-dshrw" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16ce8e60-3b3d-4b79-86f3-2473807ac6e1", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"", Pod:"csi-node-driver-dshrw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali78028eb8545", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:35.297985 containerd[1461]: 2025-01-30 13:56:35.216 [INFO][4205] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.4/32] ContainerID="675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" Namespace="calico-system" Pod="csi-node-driver-dshrw" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:35.297985 containerd[1461]: 2025-01-30 13:56:35.216 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78028eb8545 ContainerID="675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" Namespace="calico-system" Pod="csi-node-driver-dshrw" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:35.297985 containerd[1461]: 2025-01-30 13:56:35.233 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" Namespace="calico-system" Pod="csi-node-driver-dshrw" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:35.297985 containerd[1461]: 2025-01-30 13:56:35.236 [INFO][4205] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" 
Namespace="calico-system" Pod="csi-node-driver-dshrw" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16ce8e60-3b3d-4b79-86f3-2473807ac6e1", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a", Pod:"csi-node-driver-dshrw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali78028eb8545", MAC:"3e:b6:05:2e:9f:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:35.297985 containerd[1461]: 2025-01-30 13:56:35.275 [INFO][4205] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a" Namespace="calico-system" Pod="csi-node-driver-dshrw" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:35.301793 systemd[1]: Started cri-containerd-0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793.scope - libcontainer container 0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793. Jan 30 13:56:35.352260 containerd[1461]: time="2025-01-30T13:56:35.351784789Z" level=info msg="StopPodSandbox for \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\"" Jan 30 13:56:35.385402 containerd[1461]: time="2025-01-30T13:56:35.384346862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:35.385402 containerd[1461]: time="2025-01-30T13:56:35.384595925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:35.385402 containerd[1461]: time="2025-01-30T13:56:35.384628356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:35.388109 containerd[1461]: time="2025-01-30T13:56:35.387615463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:35.457317 containerd[1461]: time="2025-01-30T13:56:35.456427887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zd2pv,Uid:054a7459-c704-4251-80d1-8bd9fe52159e,Namespace:kube-system,Attempt:1,} returns sandbox id \"0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793\"" Jan 30 13:56:35.466214 kubelet[2515]: E0130 13:56:35.463213 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:35.473005 containerd[1461]: time="2025-01-30T13:56:35.471805677Z" level=info msg="CreateContainer within sandbox \"0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:56:35.508627 systemd[1]: Started cri-containerd-675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a.scope - libcontainer container 675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a. Jan 30 13:56:35.537553 containerd[1461]: time="2025-01-30T13:56:35.537478450Z" level=info msg="CreateContainer within sandbox \"0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0935a35431db8568b3ed8d14efa43d52d818f0c1919145b4e10e2f980eed872\"" Jan 30 13:56:35.545968 containerd[1461]: time="2025-01-30T13:56:35.544889246Z" level=info msg="StartContainer for \"a0935a35431db8568b3ed8d14efa43d52d818f0c1919145b4e10e2f980eed872\"" Jan 30 13:56:35.649031 systemd[1]: Started cri-containerd-a0935a35431db8568b3ed8d14efa43d52d818f0c1919145b4e10e2f980eed872.scope - libcontainer container a0935a35431db8568b3ed8d14efa43d52d818f0c1919145b4e10e2f980eed872. Jan 30 13:56:35.794793 containerd[1461]: time="2025-01-30T13:56:35.794643060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dshrw,Uid:16ce8e60-3b3d-4b79-86f3-2473807ac6e1,Namespace:calico-system,Attempt:1,} returns sandbox id \"675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a\"" Jan 30 13:56:35.797990 containerd[1461]: time="2025-01-30T13:56:35.796710420Z" level=info msg="StartContainer for \"a0935a35431db8568b3ed8d14efa43d52d818f0c1919145b4e10e2f980eed872\" returns successfully" Jan 30 13:56:35.801110 kubelet[2515]: E0130 13:56:35.800774 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.618 [INFO][4323] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.618 [INFO][4323] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" iface="eth0" netns="/var/run/netns/cni-b90dc727-3584-e92e-0a4a-dde22a3db30f" Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.619 [INFO][4323] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" iface="eth0" netns="/var/run/netns/cni-b90dc727-3584-e92e-0a4a-dde22a3db30f" Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.619 [INFO][4323] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" iface="eth0" netns="/var/run/netns/cni-b90dc727-3584-e92e-0a4a-dde22a3db30f" Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.620 [INFO][4323] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.620 [INFO][4323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.765 [INFO][4377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" HandleID="k8s-pod-network.82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.766 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.766 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.789 [WARNING][4377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" HandleID="k8s-pod-network.82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.789 [INFO][4377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" HandleID="k8s-pod-network.82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.801 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:35.816487 containerd[1461]: 2025-01-30 13:56:35.808 [INFO][4323] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:35.821302 containerd[1461]: time="2025-01-30T13:56:35.817141693Z" level=info msg="TearDown network for sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\" successfully" Jan 30 13:56:35.821302 containerd[1461]: time="2025-01-30T13:56:35.817178911Z" level=info msg="StopPodSandbox for \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\" returns successfully" Jan 30 13:56:35.821302 containerd[1461]: time="2025-01-30T13:56:35.820408000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfd54899b-bpksv,Uid:e9049fb6-9111-4307-b692-0bfdfb1f5bc6,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:56:35.828507 systemd[1]: run-netns-cni\x2db90dc727\x2d3584\x2de92e\x2d0a4a\x2ddde22a3db30f.mount: Deactivated successfully. 
Jan 30 13:56:36.113936 systemd-networkd[1361]: calibb089eb00e3: Gained IPv6LL Jan 30 13:56:36.163677 systemd-networkd[1361]: cali8be0f824b26: Link UP Jan 30 13:56:36.163947 systemd-networkd[1361]: cali8be0f824b26: Gained carrier Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:35.979 [INFO][4409] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0 calico-apiserver-5dfd54899b- calico-apiserver e9049fb6-9111-4307-b692-0bfdfb1f5bc6 816 0 2025-01-30 13:56:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dfd54899b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-b-c9e031af59 calico-apiserver-5dfd54899b-bpksv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8be0f824b26 [] []}} ContainerID="b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-bpksv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:35.979 [INFO][4409] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-bpksv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.042 [INFO][4423] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" HandleID="k8s-pod-network.b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.060 [INFO][4423] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" HandleID="k8s-pod-network.b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efa70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-b-c9e031af59", "pod":"calico-apiserver-5dfd54899b-bpksv", "timestamp":"2025-01-30 13:56:36.042490357 +0000 UTC"}, Hostname:"ci-4081.3.0-b-c9e031af59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.060 [INFO][4423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.061 [INFO][4423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.061 [INFO][4423] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-b-c9e031af59' Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.066 [INFO][4423] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.075 [INFO][4423] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.086 [INFO][4423] ipam/ipam.go 489: Trying affinity for 192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.091 [INFO][4423] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.098 [INFO][4423] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.098 [INFO][4423] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.102 [INFO][4423] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8 Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.122 [INFO][4423] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.149 [INFO][4423] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.5/26] block=192.168.31.0/26 handle="k8s-pod-network.b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.149 [INFO][4423] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.5/26] handle="k8s-pod-network.b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.149 [INFO][4423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:36.197982 containerd[1461]: 2025-01-30 13:56:36.149 [INFO][4423] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.5/26] IPv6=[] ContainerID="b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" HandleID="k8s-pod-network.b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:36.198949 containerd[1461]: 2025-01-30 13:56:36.156 [INFO][4409] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-bpksv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0", GenerateName:"calico-apiserver-5dfd54899b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9049fb6-9111-4307-b692-0bfdfb1f5bc6", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfd54899b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"", Pod:"calico-apiserver-5dfd54899b-bpksv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8be0f824b26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:36.198949 containerd[1461]: 2025-01-30 13:56:36.156 [INFO][4409] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.5/32] ContainerID="b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-bpksv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:36.198949 containerd[1461]: 2025-01-30 13:56:36.156 [INFO][4409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8be0f824b26 ContainerID="b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-bpksv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:36.198949 containerd[1461]: 2025-01-30 13:56:36.160 [INFO][4409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-bpksv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:36.198949 containerd[1461]: 2025-01-30 13:56:36.161 [INFO][4409] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-bpksv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0", GenerateName:"calico-apiserver-5dfd54899b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9049fb6-9111-4307-b692-0bfdfb1f5bc6", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfd54899b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8", Pod:"calico-apiserver-5dfd54899b-bpksv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8be0f824b26", MAC:"46:c4:22:c3:b9:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:36.198949 containerd[1461]: 2025-01-30 13:56:36.185 [INFO][4409] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8" Namespace="calico-apiserver" Pod="calico-apiserver-5dfd54899b-bpksv" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:36.281613 containerd[1461]: time="2025-01-30T13:56:36.281379920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:36.281845 containerd[1461]: time="2025-01-30T13:56:36.281723253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:36.281845 containerd[1461]: time="2025-01-30T13:56:36.281767434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:36.282034 containerd[1461]: time="2025-01-30T13:56:36.281961924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:36.323587 systemd[1]: Started cri-containerd-b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8.scope - libcontainer container b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8. 
Jan 30 13:56:36.348872 containerd[1461]: time="2025-01-30T13:56:36.348806488Z" level=info msg="StopPodSandbox for \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\"" Jan 30 13:56:36.369536 systemd-networkd[1361]: cali78028eb8545: Gained IPv6LL Jan 30 13:56:36.505320 containerd[1461]: time="2025-01-30T13:56:36.505242583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfd54899b-bpksv,Uid:e9049fb6-9111-4307-b692-0bfdfb1f5bc6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8\"" Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.573 [INFO][4488] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.574 [INFO][4488] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" iface="eth0" netns="/var/run/netns/cni-c9dad6a1-7e8d-4880-ee1a-ebd0fa2b2fea" Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.575 [INFO][4488] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" iface="eth0" netns="/var/run/netns/cni-c9dad6a1-7e8d-4880-ee1a-ebd0fa2b2fea" Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.576 [INFO][4488] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" iface="eth0" netns="/var/run/netns/cni-c9dad6a1-7e8d-4880-ee1a-ebd0fa2b2fea" Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.576 [INFO][4488] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.576 [INFO][4488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.637 [INFO][4501] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" HandleID="k8s-pod-network.fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.639 [INFO][4501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.640 [INFO][4501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.693 [WARNING][4501] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" HandleID="k8s-pod-network.fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.693 [INFO][4501] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" HandleID="k8s-pod-network.fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.724 [INFO][4501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:36.734864 containerd[1461]: 2025-01-30 13:56:36.731 [INFO][4488] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:36.737060 containerd[1461]: time="2025-01-30T13:56:36.736159348Z" level=info msg="TearDown network for sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\" successfully" Jan 30 13:56:36.737060 containerd[1461]: time="2025-01-30T13:56:36.736214726Z" level=info msg="StopPodSandbox for \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\" returns successfully" Jan 30 13:56:36.742181 containerd[1461]: time="2025-01-30T13:56:36.740376080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8c6dbb4c-cq6nk,Uid:744872b6-19c7-43d9-a52d-0bde01815327,Namespace:calico-system,Attempt:1,}" Jan 30 13:56:36.743651 systemd[1]: run-netns-cni\x2dc9dad6a1\x2d7e8d\x2d4880\x2dee1a\x2debd0fa2b2fea.mount: Deactivated successfully. Jan 30 13:56:36.845117 kubelet[2515]: E0130 13:56:36.844374 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:37.043898 systemd[1]: Started sshd@9-64.23.157.134:22-147.75.109.163:44702.service - OpenSSH per-connection server daemon (147.75.109.163:44702). Jan 30 13:56:37.067762 kubelet[2515]: I0130 13:56:37.067180 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zd2pv" podStartSLOduration=35.067146734 podStartE2EDuration="35.067146734s" podCreationTimestamp="2025-01-30 13:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:36.968717418 +0000 UTC m=+39.806560367" watchObservedRunningTime="2025-01-30 13:56:37.067146734 +0000 UTC m=+39.904989686" Jan 30 13:56:37.295395 sshd[4521]: Accepted publickey for core from 147.75.109.163 port 44702 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:37.313942 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:37.330503 systemd-logind[1449]: New session 10 of user core. Jan 30 13:56:37.338734 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 30 13:56:37.544641 systemd-networkd[1361]: cali00d626f24e0: Link UP Jan 30 13:56:37.548354 systemd-networkd[1361]: cali00d626f24e0: Gained carrier Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.106 [INFO][4507] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0 calico-kube-controllers-7d8c6dbb4c- calico-system 744872b6-19c7-43d9-a52d-0bde01815327 834 0 2025-01-30 13:56:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d8c6dbb4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-b-c9e031af59 calico-kube-controllers-7d8c6dbb4c-cq6nk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali00d626f24e0 [] []}} ContainerID="9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" Namespace="calico-system" Pod="calico-kube-controllers-7d8c6dbb4c-cq6nk" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.108 [INFO][4507] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" Namespace="calico-system" Pod="calico-kube-controllers-7d8c6dbb4c-cq6nk" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.318 [INFO][4528] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" HandleID="k8s-pod-network.9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.371 [INFO][4528] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" HandleID="k8s-pod-network.9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-b-c9e031af59", "pod":"calico-kube-controllers-7d8c6dbb4c-cq6nk", "timestamp":"2025-01-30 13:56:37.318881147 +0000 UTC"}, Hostname:"ci-4081.3.0-b-c9e031af59", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.371 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.371 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.371 [INFO][4528] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-b-c9e031af59' Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.378 [INFO][4528] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.392 [INFO][4528] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.419 [INFO][4528] ipam/ipam.go 489: Trying affinity for 192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.432 [INFO][4528] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.445 [INFO][4528] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.445 [INFO][4528] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.452 [INFO][4528] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5 Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.477 [INFO][4528] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.518 [INFO][4528] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.6/26] block=192.168.31.0/26 handle="k8s-pod-network.9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.518 [INFO][4528] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.6/26] handle="k8s-pod-network.9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" host="ci-4081.3.0-b-c9e031af59" Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.518 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:56:37.597210 containerd[1461]: 2025-01-30 13:56:37.518 [INFO][4528] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.6/26] IPv6=[] ContainerID="9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" HandleID="k8s-pod-network.9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:37.607762 containerd[1461]: 2025-01-30 13:56:37.529 [INFO][4507] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" Namespace="calico-system" Pod="calico-kube-controllers-7d8c6dbb4c-cq6nk" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0", GenerateName:"calico-kube-controllers-7d8c6dbb4c-", Namespace:"calico-system", SelfLink:"", UID:"744872b6-19c7-43d9-a52d-0bde01815327", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d8c6dbb4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"", Pod:"calico-kube-controllers-7d8c6dbb4c-cq6nk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00d626f24e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:37.607762 containerd[1461]: 2025-01-30 13:56:37.530 [INFO][4507] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.6/32] ContainerID="9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" Namespace="calico-system" Pod="calico-kube-controllers-7d8c6dbb4c-cq6nk" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:37.607762 containerd[1461]: 2025-01-30 13:56:37.530 [INFO][4507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00d626f24e0 ContainerID="9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" Namespace="calico-system" Pod="calico-kube-controllers-7d8c6dbb4c-cq6nk" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:37.607762 containerd[1461]: 2025-01-30 13:56:37.553 [INFO][4507] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" Namespace="calico-system" Pod="calico-kube-controllers-7d8c6dbb4c-cq6nk" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:37.607762 
containerd[1461]: 2025-01-30 13:56:37.557 [INFO][4507] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" Namespace="calico-system" Pod="calico-kube-controllers-7d8c6dbb4c-cq6nk" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0", GenerateName:"calico-kube-controllers-7d8c6dbb4c-", Namespace:"calico-system", SelfLink:"", UID:"744872b6-19c7-43d9-a52d-0bde01815327", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d8c6dbb4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5", Pod:"calico-kube-controllers-7d8c6dbb4c-cq6nk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00d626f24e0", MAC:"76:4c:06:60:33:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:37.607762 containerd[1461]: 2025-01-30 13:56:37.587 [INFO][4507] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5" Namespace="calico-system" Pod="calico-kube-controllers-7d8c6dbb4c-cq6nk" WorkloadEndpoint="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:37.715091 systemd-networkd[1361]: cali8be0f824b26: Gained IPv6LL Jan 30 13:56:37.723578 containerd[1461]: time="2025-01-30T13:56:37.723432900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:37.723817 containerd[1461]: time="2025-01-30T13:56:37.723772923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:37.724067 containerd[1461]: time="2025-01-30T13:56:37.723963774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:37.724531 containerd[1461]: time="2025-01-30T13:56:37.724439004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:37.855015 kubelet[2515]: E0130 13:56:37.851970 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:37.949596 systemd[1]: Started cri-containerd-9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5.scope - libcontainer container 9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5. Jan 30 13:56:38.130602 sshd[4521]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:38.146206 systemd[1]: sshd@9-64.23.157.134:22-147.75.109.163:44702.service: Deactivated successfully. Jan 30 13:56:38.149179 containerd[1461]: time="2025-01-30T13:56:38.148177860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8c6dbb4c-cq6nk,Uid:744872b6-19c7-43d9-a52d-0bde01815327,Namespace:calico-system,Attempt:1,} returns sandbox id \"9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5\"" Jan 30 13:56:38.154997 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:56:38.160928 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:56:38.165909 systemd-logind[1449]: Removed session 10. Jan 30 13:56:38.325918 containerd[1461]: time="2025-01-30T13:56:38.325824148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:38.327792 containerd[1461]: time="2025-01-30T13:56:38.327709345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:56:38.328720 containerd[1461]: time="2025-01-30T13:56:38.328631523Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:38.332300 containerd[1461]: time="2025-01-30T13:56:38.331946074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:38.334863 containerd[1461]: time="2025-01-30T13:56:38.334702296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.912123367s" Jan 30 13:56:38.335626 containerd[1461]: time="2025-01-30T13:56:38.335436266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:56:38.337467 containerd[1461]: time="2025-01-30T13:56:38.337073730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:56:38.341028 containerd[1461]: time="2025-01-30T13:56:38.340751776Z" level=info msg="CreateContainer within sandbox \"a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:56:38.361541 containerd[1461]: time="2025-01-30T13:56:38.361477222Z" level=info msg="CreateContainer within sandbox 
\"a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"39dee72fed5d888a28ff62f90ac124a7535c7fac67fce6c88ed35787e289ed85\"" Jan 30 13:56:38.362780 containerd[1461]: time="2025-01-30T13:56:38.362637027Z" level=info msg="StartContainer for \"39dee72fed5d888a28ff62f90ac124a7535c7fac67fce6c88ed35787e289ed85\"" Jan 30 13:56:38.432565 systemd[1]: Started cri-containerd-39dee72fed5d888a28ff62f90ac124a7535c7fac67fce6c88ed35787e289ed85.scope - libcontainer container 39dee72fed5d888a28ff62f90ac124a7535c7fac67fce6c88ed35787e289ed85. Jan 30 13:56:38.527377 containerd[1461]: time="2025-01-30T13:56:38.526814535Z" level=info msg="StartContainer for \"39dee72fed5d888a28ff62f90ac124a7535c7fac67fce6c88ed35787e289ed85\" returns successfully" Jan 30 13:56:38.738487 systemd-networkd[1361]: cali00d626f24e0: Gained IPv6LL Jan 30 13:56:38.860089 kubelet[2515]: E0130 13:56:38.859328 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:56:38.898241 kubelet[2515]: I0130 13:56:38.896173 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dfd54899b-rdf7t" podStartSLOduration=25.981629673 podStartE2EDuration="30.896154186s" podCreationTimestamp="2025-01-30 13:56:08 +0000 UTC" firstStartedPulling="2025-01-30 13:56:33.422007538 +0000 UTC m=+36.259850465" lastFinishedPulling="2025-01-30 13:56:38.33653203 +0000 UTC m=+41.174374978" observedRunningTime="2025-01-30 13:56:38.895544031 +0000 UTC m=+41.733386982" watchObservedRunningTime="2025-01-30 13:56:38.896154186 +0000 UTC m=+41.733997132" Jan 30 13:56:39.832118 containerd[1461]: time="2025-01-30T13:56:39.832052297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:39.834921 containerd[1461]: time="2025-01-30T13:56:39.834346019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:56:39.835567 containerd[1461]: time="2025-01-30T13:56:39.835494637Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:39.841186 containerd[1461]: time="2025-01-30T13:56:39.841112671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:39.842923 containerd[1461]: time="2025-01-30T13:56:39.842858765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.505741393s" Jan 30 13:56:39.842923 containerd[1461]: time="2025-01-30T13:56:39.842920644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:56:39.844951 containerd[1461]: time="2025-01-30T13:56:39.844732460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" 
Jan 30 13:56:39.846745 containerd[1461]: time="2025-01-30T13:56:39.846702368Z" level=info msg="CreateContainer within sandbox \"675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:56:39.897629 containerd[1461]: time="2025-01-30T13:56:39.897118240Z" level=info msg="CreateContainer within sandbox \"675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"681bde88049bdd4fec220eb5e0ed8af79cbb6b753b09590bd1a6529bd5c02055\"" Jan 30 13:56:39.898405 containerd[1461]: time="2025-01-30T13:56:39.898047485Z" level=info msg="StartContainer for \"681bde88049bdd4fec220eb5e0ed8af79cbb6b753b09590bd1a6529bd5c02055\"" Jan 30 13:56:39.993262 systemd[1]: Started cri-containerd-681bde88049bdd4fec220eb5e0ed8af79cbb6b753b09590bd1a6529bd5c02055.scope - libcontainer container 681bde88049bdd4fec220eb5e0ed8af79cbb6b753b09590bd1a6529bd5c02055. Jan 30 13:56:40.053063 containerd[1461]: time="2025-01-30T13:56:40.052988994Z" level=info msg="StartContainer for \"681bde88049bdd4fec220eb5e0ed8af79cbb6b753b09590bd1a6529bd5c02055\" returns successfully" Jan 30 13:56:40.430316 containerd[1461]: time="2025-01-30T13:56:40.427097794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:56:40.430316 containerd[1461]: time="2025-01-30T13:56:40.428973124Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:40.433344 containerd[1461]: time="2025-01-30T13:56:40.433227647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 588.436024ms" Jan 30 13:56:40.433632 containerd[1461]: time="2025-01-30T13:56:40.433604797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:56:40.436911 containerd[1461]: time="2025-01-30T13:56:40.436859696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:56:40.441876 containerd[1461]: time="2025-01-30T13:56:40.441825227Z" level=info msg="CreateContainer within sandbox \"b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:56:40.468730 containerd[1461]: time="2025-01-30T13:56:40.468660909Z" level=info msg="CreateContainer within sandbox \"b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9bdb1e20435e21a82353f34c85714ad1e0486a520eeb94118295038f7ea1a942\"" Jan 30 13:56:40.469645 containerd[1461]: time="2025-01-30T13:56:40.469606248Z" level=info msg="StartContainer for \"9bdb1e20435e21a82353f34c85714ad1e0486a520eeb94118295038f7ea1a942\"" Jan 30 13:56:40.566623 systemd[1]: Started cri-containerd-9bdb1e20435e21a82353f34c85714ad1e0486a520eeb94118295038f7ea1a942.scope - libcontainer container 9bdb1e20435e21a82353f34c85714ad1e0486a520eeb94118295038f7ea1a942. 
Jan 30 13:56:40.802013 containerd[1461]: time="2025-01-30T13:56:40.801137617Z" level=info msg="StartContainer for \"9bdb1e20435e21a82353f34c85714ad1e0486a520eeb94118295038f7ea1a942\" returns successfully" Jan 30 13:56:43.160171 systemd[1]: Started sshd@10-64.23.157.134:22-147.75.109.163:43526.service - OpenSSH per-connection server daemon (147.75.109.163:43526). Jan 30 13:56:43.323885 kubelet[2515]: I0130 13:56:43.322843 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dfd54899b-bpksv" podStartSLOduration=31.396066711 podStartE2EDuration="35.322464953s" podCreationTimestamp="2025-01-30 13:56:08 +0000 UTC" firstStartedPulling="2025-01-30 13:56:36.510141114 +0000 UTC m=+39.347984055" lastFinishedPulling="2025-01-30 13:56:40.436539356 +0000 UTC m=+43.274382297" observedRunningTime="2025-01-30 13:56:40.95542531 +0000 UTC m=+43.793268257" watchObservedRunningTime="2025-01-30 13:56:43.322464953 +0000 UTC m=+46.160307894" Jan 30 13:56:43.422248 sshd[4753]: Accepted publickey for core from 147.75.109.163 port 43526 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:43.432805 sshd[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:43.463885 systemd-logind[1449]: New session 11 of user core. Jan 30 13:56:43.474060 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:56:43.647908 containerd[1461]: time="2025-01-30T13:56:43.646756931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:43.652338 containerd[1461]: time="2025-01-30T13:56:43.651481646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:56:43.656128 containerd[1461]: time="2025-01-30T13:56:43.653809622Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:43.657126 containerd[1461]: time="2025-01-30T13:56:43.657065143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:43.662581 containerd[1461]: time="2025-01-30T13:56:43.661204385Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.223687874s" Jan 30 13:56:43.662581 containerd[1461]: time="2025-01-30T13:56:43.661306221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:56:43.668754 containerd[1461]: time="2025-01-30T13:56:43.668699896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:56:43.737252 containerd[1461]: time="2025-01-30T13:56:43.736918831Z" level=info msg="CreateContainer within sandbox \"9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:56:43.782403 containerd[1461]: time="2025-01-30T13:56:43.781485327Z" level=info msg="CreateContainer within sandbox \"9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"080f6159928395893499482accd062d41d867c221e68e8004ed318182f9f58d3\"" Jan 30 13:56:43.784306 containerd[1461]: time="2025-01-30T13:56:43.783947554Z" level=info msg="StartContainer for \"080f6159928395893499482accd062d41d867c221e68e8004ed318182f9f58d3\"" Jan 30 13:56:43.946781 systemd[1]: Started cri-containerd-080f6159928395893499482accd062d41d867c221e68e8004ed318182f9f58d3.scope - libcontainer container 080f6159928395893499482accd062d41d867c221e68e8004ed318182f9f58d3. Jan 30 13:56:44.165260 containerd[1461]: time="2025-01-30T13:56:44.164847609Z" level=info msg="StartContainer for \"080f6159928395893499482accd062d41d867c221e68e8004ed318182f9f58d3\" returns successfully" Jan 30 13:56:44.358674 sshd[4753]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:44.365122 systemd[1]: sshd@10-64.23.157.134:22-147.75.109.163:43526.service: Deactivated successfully. Jan 30 13:56:44.370437 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:56:44.373219 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:56:44.377025 systemd-logind[1449]: Removed session 11. Jan 30 13:56:45.019565 kubelet[2515]: I0130 13:56:45.018867 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d8c6dbb4c-cq6nk" podStartSLOduration=30.510837937 podStartE2EDuration="36.018843561s" podCreationTimestamp="2025-01-30 13:56:09 +0000 UTC" firstStartedPulling="2025-01-30 13:56:38.156103457 +0000 UTC m=+40.993946381" lastFinishedPulling="2025-01-30 13:56:43.664109064 +0000 UTC m=+46.501952005" observedRunningTime="2025-01-30 13:56:45.013198022 +0000 UTC m=+47.851040966" watchObservedRunningTime="2025-01-30 13:56:45.018843561 +0000 UTC m=+47.856686509" Jan 30 13:56:45.628930 containerd[1461]: time="2025-01-30T13:56:45.628839904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:45.630908 containerd[1461]: time="2025-01-30T13:56:45.630818585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:56:45.636259 containerd[1461]: time="2025-01-30T13:56:45.636164049Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:45.640793 containerd[1461]: time="2025-01-30T13:56:45.640697410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:45.642840 containerd[1461]: time="2025-01-30T13:56:45.641999136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", 
size \"11994117\" in 1.972909903s" Jan 30 13:56:45.642840 containerd[1461]: time="2025-01-30T13:56:45.642060985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:56:45.647245 containerd[1461]: time="2025-01-30T13:56:45.647177829Z" level=info msg="CreateContainer within sandbox \"675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:56:45.685877 containerd[1461]: time="2025-01-30T13:56:45.685684879Z" level=info msg="CreateContainer within sandbox \"675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"00fbc084a6e0f492c8f217ff8461f6658c0512c83e34c56efa37c99045861a94\"" Jan 30 13:56:45.687359 containerd[1461]: time="2025-01-30T13:56:45.686445601Z" level=info msg="StartContainer for \"00fbc084a6e0f492c8f217ff8461f6658c0512c83e34c56efa37c99045861a94\"" Jan 30 13:56:45.740858 systemd[1]: Started cri-containerd-00fbc084a6e0f492c8f217ff8461f6658c0512c83e34c56efa37c99045861a94.scope - libcontainer container 00fbc084a6e0f492c8f217ff8461f6658c0512c83e34c56efa37c99045861a94. Jan 30 13:56:45.783320 containerd[1461]: time="2025-01-30T13:56:45.782072944Z" level=info msg="StartContainer for \"00fbc084a6e0f492c8f217ff8461f6658c0512c83e34c56efa37c99045861a94\" returns successfully" Jan 30 13:56:46.648211 kubelet[2515]: I0130 13:56:46.648019 2515 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:56:46.653414 kubelet[2515]: I0130 13:56:46.653340 2515 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:56:49.378832 systemd[1]: Started sshd@11-64.23.157.134:22-147.75.109.163:38362.service - OpenSSH per-connection server daemon (147.75.109.163:38362). Jan 30 13:56:49.490838 sshd[4869]: Accepted publickey for core from 147.75.109.163 port 38362 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:49.493897 sshd[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:49.502752 systemd-logind[1449]: New session 12 of user core. Jan 30 13:56:49.507674 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:56:49.931389 sshd[4869]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:49.942171 systemd[1]: sshd@11-64.23.157.134:22-147.75.109.163:38362.service: Deactivated successfully. Jan 30 13:56:49.945819 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:56:49.949673 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:56:49.957921 systemd[1]: Started sshd@12-64.23.157.134:22-147.75.109.163:38376.service - OpenSSH per-connection server daemon (147.75.109.163:38376). Jan 30 13:56:49.960604 systemd-logind[1449]: Removed session 12. Jan 30 13:56:50.032872 sshd[4883]: Accepted publickey for core from 147.75.109.163 port 38376 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:50.034800 sshd[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:50.042419 systemd-logind[1449]: New session 13 of user core. 
Jan 30 13:56:50.048577 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:56:50.325522 sshd[4883]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:50.338564 systemd[1]: sshd@12-64.23.157.134:22-147.75.109.163:38376.service: Deactivated successfully. Jan 30 13:56:50.346091 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:56:50.348690 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:56:50.360439 systemd[1]: Started sshd@13-64.23.157.134:22-147.75.109.163:38380.service - OpenSSH per-connection server daemon (147.75.109.163:38380). Jan 30 13:56:50.366715 systemd-logind[1449]: Removed session 13. Jan 30 13:56:50.427441 sshd[4893]: Accepted publickey for core from 147.75.109.163 port 38380 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:50.429634 sshd[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:50.435785 systemd-logind[1449]: New session 14 of user core. Jan 30 13:56:50.447797 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:56:50.645513 sshd[4893]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:50.649876 systemd[1]: sshd@13-64.23.157.134:22-147.75.109.163:38380.service: Deactivated successfully. Jan 30 13:56:50.653564 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:56:50.656462 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:56:50.658069 systemd-logind[1449]: Removed session 14. Jan 30 13:56:55.674752 systemd[1]: Started sshd@14-64.23.157.134:22-147.75.109.163:38396.service - OpenSSH per-connection server daemon (147.75.109.163:38396). Jan 30 13:56:55.796449 sshd[4921]: Accepted publickey for core from 147.75.109.163 port 38396 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:56:55.798197 sshd[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:55.805826 systemd-logind[1449]: New session 15 of user core. Jan 30 13:56:55.810194 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:56:55.989748 sshd[4921]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:55.994058 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:56:55.995911 systemd[1]: sshd@14-64.23.157.134:22-147.75.109.163:38396.service: Deactivated successfully. Jan 30 13:56:55.998691 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:56:56.001357 systemd-logind[1449]: Removed session 15. Jan 30 13:56:57.344109 containerd[1461]: time="2025-01-30T13:56:57.344050647Z" level=info msg="StopPodSandbox for \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\"" Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.540 [WARNING][4967] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0", GenerateName:"calico-apiserver-5dfd54899b-", Namespace:"calico-apiserver", SelfLink:"", UID:"35e04996-ffc7-4ea4-8fe9-3fe75da55979", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfd54899b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c", Pod:"calico-apiserver-5dfd54899b-rdf7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a2b3de1a4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.542 [INFO][4967] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.542 [INFO][4967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" iface="eth0" netns="" Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.542 [INFO][4967] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.542 [INFO][4967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.571 [INFO][4974] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" HandleID="k8s-pod-network.3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.571 [INFO][4974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.571 [INFO][4974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.579 [WARNING][4974] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" HandleID="k8s-pod-network.3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.579 [INFO][4974] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" HandleID="k8s-pod-network.3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.582 [INFO][4974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:57.589597 containerd[1461]: 2025-01-30 13:56:57.585 [INFO][4967] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:57.589597 containerd[1461]: time="2025-01-30T13:56:57.588543590Z" level=info msg="TearDown network for sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\" successfully" Jan 30 13:56:57.589597 containerd[1461]: time="2025-01-30T13:56:57.588580342Z" level=info msg="StopPodSandbox for \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\" returns successfully" Jan 30 13:56:57.618677 containerd[1461]: time="2025-01-30T13:56:57.618466407Z" level=info msg="RemovePodSandbox for \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\"" Jan 30 13:56:57.622732 containerd[1461]: time="2025-01-30T13:56:57.622607422Z" level=info msg="Forcibly stopping sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\"" Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.688 [WARNING][4992] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0", GenerateName:"calico-apiserver-5dfd54899b-", Namespace:"calico-apiserver", SelfLink:"", UID:"35e04996-ffc7-4ea4-8fe9-3fe75da55979", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfd54899b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"a46205ac53a89d0b9c3446a559e1b85d33b5234cc20589275f536643ffdffd0c", Pod:"calico-apiserver-5dfd54899b-rdf7t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5a2b3de1a4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.689 [INFO][4992] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.689 [INFO][4992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" iface="eth0" netns="" Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.689 [INFO][4992] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.689 [INFO][4992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.732 [INFO][5000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" HandleID="k8s-pod-network.3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.732 [INFO][5000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.732 [INFO][5000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.742 [WARNING][5000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" HandleID="k8s-pod-network.3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.742 [INFO][5000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" HandleID="k8s-pod-network.3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--rdf7t-eth0" Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.745 [INFO][5000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:57.750631 containerd[1461]: 2025-01-30 13:56:57.748 [INFO][4992] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4" Jan 30 13:56:57.751749 containerd[1461]: time="2025-01-30T13:56:57.750651980Z" level=info msg="TearDown network for sandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\" successfully" Jan 30 13:56:57.765577 containerd[1461]: time="2025-01-30T13:56:57.765493817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:57.776678 containerd[1461]: time="2025-01-30T13:56:57.776586521Z" level=info msg="RemovePodSandbox \"3fa5a0ee01c29699696165bf03c068360bf06c49c1672f15048ad86e37967bb4\" returns successfully" Jan 30 13:56:57.778452 containerd[1461]: time="2025-01-30T13:56:57.777782064Z" level=info msg="StopPodSandbox for \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\"" Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.834 [WARNING][5018] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3564606d-2944-426a-9353-f6fcadcd5c0d", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d", Pod:"coredns-6f6b679f8f-d7hjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d4c847a3aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.835 [INFO][5018] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.835 [INFO][5018] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" iface="eth0" netns="" Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.835 [INFO][5018] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.835 [INFO][5018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.875 [INFO][5024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" HandleID="k8s-pod-network.122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.875 [INFO][5024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.875 [INFO][5024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.887 [WARNING][5024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" HandleID="k8s-pod-network.122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.887 [INFO][5024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" HandleID="k8s-pod-network.122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.891 [INFO][5024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:57.896147 containerd[1461]: 2025-01-30 13:56:57.893 [INFO][5018] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:57.896147 containerd[1461]: time="2025-01-30T13:56:57.896108754Z" level=info msg="TearDown network for sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\" successfully" Jan 30 13:56:57.896147 containerd[1461]: time="2025-01-30T13:56:57.896146083Z" level=info msg="StopPodSandbox for \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\" returns successfully" Jan 30 13:56:57.898860 containerd[1461]: time="2025-01-30T13:56:57.897393640Z" level=info msg="RemovePodSandbox for \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\"" Jan 30 13:56:57.898860 containerd[1461]: time="2025-01-30T13:56:57.897445459Z" level=info msg="Forcibly stopping sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\"" Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:57.984 [WARNING][5042] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"3564606d-2944-426a-9353-f6fcadcd5c0d", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"c580a3c6d50134b84542dc86ab5151a75aa335080361ae97a060d5ebe5a6506d", Pod:"coredns-6f6b679f8f-d7hjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d4c847a3aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:57.984 [INFO][5042] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:57.984 [INFO][5042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" iface="eth0" netns="" Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:57.985 [INFO][5042] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:57.985 [INFO][5042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:58.020 [INFO][5048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" HandleID="k8s-pod-network.122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:58.020 [INFO][5048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:58.020 [INFO][5048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:58.030 [WARNING][5048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" HandleID="k8s-pod-network.122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:58.030 [INFO][5048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" HandleID="k8s-pod-network.122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--d7hjc-eth0" Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:58.033 [INFO][5048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:58.037993 containerd[1461]: 2025-01-30 13:56:58.035 [INFO][5042] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e" Jan 30 13:56:58.038933 containerd[1461]: time="2025-01-30T13:56:58.038097882Z" level=info msg="TearDown network for sandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\" successfully" Jan 30 13:56:58.044014 containerd[1461]: time="2025-01-30T13:56:58.043898047Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:58.044328 containerd[1461]: time="2025-01-30T13:56:58.044038273Z" level=info msg="RemovePodSandbox \"122497b4d466486fc1acc44229542d228608f9a6fca82ec6c4742413a14fdc0e\" returns successfully" Jan 30 13:56:58.045757 containerd[1461]: time="2025-01-30T13:56:58.045709836Z" level=info msg="StopPodSandbox for \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\"" Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.148 [WARNING][5067] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0", GenerateName:"calico-apiserver-5dfd54899b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9049fb6-9111-4307-b692-0bfdfb1f5bc6", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfd54899b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8", Pod:"calico-apiserver-5dfd54899b-bpksv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8be0f824b26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.148 [INFO][5067] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.148 [INFO][5067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" iface="eth0" netns="" Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.148 [INFO][5067] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.148 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.182 [INFO][5073] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" HandleID="k8s-pod-network.82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.183 [INFO][5073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.183 [INFO][5073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.193 [WARNING][5073] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" HandleID="k8s-pod-network.82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.194 [INFO][5073] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" HandleID="k8s-pod-network.82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.197 [INFO][5073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:58.204057 containerd[1461]: 2025-01-30 13:56:58.200 [INFO][5067] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:58.207518 containerd[1461]: time="2025-01-30T13:56:58.204611854Z" level=info msg="TearDown network for sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\" successfully" Jan 30 13:56:58.207518 containerd[1461]: time="2025-01-30T13:56:58.204662366Z" level=info msg="StopPodSandbox for \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\" returns successfully" Jan 30 13:56:58.207518 containerd[1461]: time="2025-01-30T13:56:58.205415934Z" level=info msg="RemovePodSandbox for \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\"" Jan 30 13:56:58.207518 containerd[1461]: time="2025-01-30T13:56:58.205469634Z" level=info msg="Forcibly stopping sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\"" Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.270 [WARNING][5091] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0", GenerateName:"calico-apiserver-5dfd54899b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e9049fb6-9111-4307-b692-0bfdfb1f5bc6", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfd54899b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"b9c125d83964d809d22ae1176e4fd89541830f8bfadf927b92c5182ecfc95ca8", Pod:"calico-apiserver-5dfd54899b-bpksv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8be0f824b26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.271 [INFO][5091] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.271 [INFO][5091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" iface="eth0" netns="" Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.271 [INFO][5091] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.271 [INFO][5091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.301 [INFO][5097] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" HandleID="k8s-pod-network.82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.301 [INFO][5097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.301 [INFO][5097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.318 [WARNING][5097] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" HandleID="k8s-pod-network.82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.318 [INFO][5097] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" HandleID="k8s-pod-network.82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--apiserver--5dfd54899b--bpksv-eth0" Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.321 [INFO][5097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:58.326852 containerd[1461]: 2025-01-30 13:56:58.324 [INFO][5091] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c" Jan 30 13:56:58.328412 containerd[1461]: time="2025-01-30T13:56:58.327197290Z" level=info msg="TearDown network for sandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\" successfully" Jan 30 13:56:58.339507 containerd[1461]: time="2025-01-30T13:56:58.339428224Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:58.339894 containerd[1461]: time="2025-01-30T13:56:58.339540231Z" level=info msg="RemovePodSandbox \"82425350cad8d9369eeb67b3008a9e4467189959a81ed20960643ca9d22e153c\" returns successfully" Jan 30 13:56:58.341045 containerd[1461]: time="2025-01-30T13:56:58.340992706Z" level=info msg="StopPodSandbox for \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\"" Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.412 [WARNING][5115] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0", GenerateName:"calico-kube-controllers-7d8c6dbb4c-", Namespace:"calico-system", SelfLink:"", UID:"744872b6-19c7-43d9-a52d-0bde01815327", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d8c6dbb4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5", Pod:"calico-kube-controllers-7d8c6dbb4c-cq6nk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00d626f24e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.412 [INFO][5115] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.412 [INFO][5115] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" iface="eth0" netns="" Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.412 [INFO][5115] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.412 [INFO][5115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.449 [INFO][5121] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" HandleID="k8s-pod-network.fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.449 [INFO][5121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.450 [INFO][5121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.457 [WARNING][5121] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" HandleID="k8s-pod-network.fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.458 [INFO][5121] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" HandleID="k8s-pod-network.fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.461 [INFO][5121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:58.466519 containerd[1461]: 2025-01-30 13:56:58.464 [INFO][5115] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:58.466519 containerd[1461]: time="2025-01-30T13:56:58.466081880Z" level=info msg="TearDown network for sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\" successfully" Jan 30 13:56:58.466519 containerd[1461]: time="2025-01-30T13:56:58.466117498Z" level=info msg="StopPodSandbox for \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\" returns successfully" Jan 30 13:56:58.469396 containerd[1461]: time="2025-01-30T13:56:58.468461578Z" level=info msg="RemovePodSandbox for \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\"" Jan 30 13:56:58.469396 containerd[1461]: time="2025-01-30T13:56:58.468514342Z" level=info msg="Forcibly stopping sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\"" Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.550 [WARNING][5139] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0", GenerateName:"calico-kube-controllers-7d8c6dbb4c-", Namespace:"calico-system", SelfLink:"", UID:"744872b6-19c7-43d9-a52d-0bde01815327", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d8c6dbb4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"9e5fd28801ae52cae67bddb412e48f7fb55fb906a82281a45b04766c188910a5", Pod:"calico-kube-controllers-7d8c6dbb4c-cq6nk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali00d626f24e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.550 [INFO][5139] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.550 [INFO][5139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" iface="eth0" netns="" Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.551 [INFO][5139] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.551 [INFO][5139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.586 [INFO][5145] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" HandleID="k8s-pod-network.fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.586 [INFO][5145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.587 [INFO][5145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.596 [WARNING][5145] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" HandleID="k8s-pod-network.fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.596 [INFO][5145] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" HandleID="k8s-pod-network.fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Workload="ci--4081.3.0--b--c9e031af59-k8s-calico--kube--controllers--7d8c6dbb4c--cq6nk-eth0" Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.600 [INFO][5145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:58.606147 containerd[1461]: 2025-01-30 13:56:58.603 [INFO][5139] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d" Jan 30 13:56:58.606719 containerd[1461]: time="2025-01-30T13:56:58.606328791Z" level=info msg="TearDown network for sandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\" successfully" Jan 30 13:56:58.617411 containerd[1461]: time="2025-01-30T13:56:58.617320423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:58.617607 containerd[1461]: time="2025-01-30T13:56:58.617481960Z" level=info msg="RemovePodSandbox \"fa7130d278ca981ee9d25e777ac172a27d1da76dfb2c5510d73a226de8fa004d\" returns successfully" Jan 30 13:56:58.618723 containerd[1461]: time="2025-01-30T13:56:58.618219123Z" level=info msg="StopPodSandbox for \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\"" Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.703 [WARNING][5163] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"054a7459-c704-4251-80d1-8bd9fe52159e", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793", Pod:"coredns-6f6b679f8f-zd2pv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb089eb00e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.703 [INFO][5163] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.703 [INFO][5163] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" iface="eth0" netns="" Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.703 [INFO][5163] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.703 [INFO][5163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.748 [INFO][5169] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" HandleID="k8s-pod-network.b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.748 [INFO][5169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.748 [INFO][5169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.758 [WARNING][5169] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" HandleID="k8s-pod-network.b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0"
Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.758 [INFO][5169] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" HandleID="k8s-pod-network.b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0"
Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.761 [INFO][5169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 13:56:58.766733 containerd[1461]: 2025-01-30 13:56:58.764 [INFO][5163] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68"
Jan 30 13:56:58.766733 containerd[1461]: time="2025-01-30T13:56:58.766596800Z" level=info msg="TearDown network for sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\" successfully"
Jan 30 13:56:58.766733 containerd[1461]: time="2025-01-30T13:56:58.766668050Z" level=info msg="StopPodSandbox for \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\" returns successfully"
Jan 30 13:56:58.768831 containerd[1461]: time="2025-01-30T13:56:58.768620234Z" level=info msg="RemovePodSandbox for \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\""
Jan 30 13:56:58.768831 containerd[1461]: time="2025-01-30T13:56:58.768745307Z" level=info msg="Forcibly stopping sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\""
Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.834 [WARNING][5187] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"054a7459-c704-4251-80d1-8bd9fe52159e", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"0ebf9bf0d201195d10e445a9882860cd5851e9ce2391e8dfd66f53d757f8c793", Pod:"coredns-6f6b679f8f-zd2pv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb089eb00e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.835 [INFO][5187] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68"
Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.835 [INFO][5187] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" iface="eth0" netns=""
Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.835 [INFO][5187] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68"
Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.835 [INFO][5187] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68"
Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.869 [INFO][5194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" HandleID="k8s-pod-network.b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0"
Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.870 [INFO][5194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.870 [INFO][5194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.881 [WARNING][5194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" HandleID="k8s-pod-network.b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.881 [INFO][5194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" HandleID="k8s-pod-network.b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Workload="ci--4081.3.0--b--c9e031af59-k8s-coredns--6f6b679f8f--zd2pv-eth0" Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.884 [INFO][5194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:58.892380 containerd[1461]: 2025-01-30 13:56:58.887 [INFO][5187] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68" Jan 30 13:56:58.892380 containerd[1461]: time="2025-01-30T13:56:58.891461256Z" level=info msg="TearDown network for sandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\" successfully" Jan 30 13:56:58.900339 containerd[1461]: time="2025-01-30T13:56:58.899165088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:58.900339 containerd[1461]: time="2025-01-30T13:56:58.899357958Z" level=info msg="RemovePodSandbox \"b793f1604c2389eadee25a1b1c9a58f716aa2dea45758feedb562d848ebc3c68\" returns successfully" Jan 30 13:56:58.900606 containerd[1461]: time="2025-01-30T13:56:58.900565138Z" level=info msg="StopPodSandbox for \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\"" Jan 30 13:56:58.939825 systemd[1]: run-containerd-runc-k8s.io-080f6159928395893499482accd062d41d867c221e68e8004ed318182f9f58d3-runc.GVG8JE.mount: Deactivated successfully. Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.017 [WARNING][5221] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16ce8e60-3b3d-4b79-86f3-2473807ac6e1", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a", Pod:"csi-node-driver-dshrw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali78028eb8545", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.017 [INFO][5221] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.017 [INFO][5221] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" iface="eth0" netns="" Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.017 [INFO][5221] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.017 [INFO][5221] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.058 [INFO][5237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" HandleID="k8s-pod-network.d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.059 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.059 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.068 [WARNING][5237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" HandleID="k8s-pod-network.d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.068 [INFO][5237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" HandleID="k8s-pod-network.d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.071 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.080022 containerd[1461]: 2025-01-30 13:56:59.075 [INFO][5221] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:59.080022 containerd[1461]: time="2025-01-30T13:56:59.079840544Z" level=info msg="TearDown network for sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\" successfully" Jan 30 13:56:59.080022 containerd[1461]: time="2025-01-30T13:56:59.079876958Z" level=info msg="StopPodSandbox for \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\" returns successfully" Jan 30 13:56:59.083710 containerd[1461]: time="2025-01-30T13:56:59.083187583Z" level=info msg="RemovePodSandbox for \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\"" Jan 30 13:56:59.083710 containerd[1461]: time="2025-01-30T13:56:59.083248295Z" level=info msg="Forcibly stopping sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\"" Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.168 [WARNING][5255] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16ce8e60-3b3d-4b79-86f3-2473807ac6e1", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-b-c9e031af59", ContainerID:"675f6f15030ced15c44ec5fffb7c112a72dda63d61afb068e2951eb6e7667d8a", Pod:"csi-node-driver-dshrw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali78028eb8545", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.169 [INFO][5255] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.169 [INFO][5255] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" iface="eth0" netns="" Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.169 [INFO][5255] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.169 [INFO][5255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.210 [INFO][5261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" HandleID="k8s-pod-network.d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.211 [INFO][5261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.211 [INFO][5261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.221 [WARNING][5261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" HandleID="k8s-pod-network.d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.221 [INFO][5261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" HandleID="k8s-pod-network.d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Workload="ci--4081.3.0--b--c9e031af59-k8s-csi--node--driver--dshrw-eth0" Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.223 [INFO][5261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:56:59.229281 containerd[1461]: 2025-01-30 13:56:59.226 [INFO][5255] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768" Jan 30 13:56:59.229900 containerd[1461]: time="2025-01-30T13:56:59.229319450Z" level=info msg="TearDown network for sandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\" successfully" Jan 30 13:56:59.233776 containerd[1461]: time="2025-01-30T13:56:59.233671037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:56:59.233776 containerd[1461]: time="2025-01-30T13:56:59.233780070Z" level=info msg="RemovePodSandbox \"d4aee6d7c67e1e357c052dcab7570887215b4f284941b84d6a1f1343e65a2768\" returns successfully" Jan 30 13:57:01.016924 systemd[1]: Started sshd@15-64.23.157.134:22-147.75.109.163:52442.service - OpenSSH per-connection server daemon (147.75.109.163:52442). Jan 30 13:57:01.200784 sshd[5268]: Accepted publickey for core from 147.75.109.163 port 52442 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:01.202870 sshd[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:01.214470 systemd-logind[1449]: New session 16 of user core. Jan 30 13:57:01.224466 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:57:02.102941 sshd[5268]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:02.111001 systemd[1]: sshd@15-64.23.157.134:22-147.75.109.163:52442.service: Deactivated successfully. Jan 30 13:57:02.119776 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:57:02.126576 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:57:02.130090 systemd-logind[1449]: Removed session 16. Jan 30 13:57:07.121817 systemd[1]: Started sshd@16-64.23.157.134:22-147.75.109.163:52458.service - OpenSSH per-connection server daemon (147.75.109.163:52458). Jan 30 13:57:07.182759 sshd[5288]: Accepted publickey for core from 147.75.109.163 port 52458 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:57:07.185701 sshd[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:07.192629 systemd-logind[1449]: New session 17 of user core. Jan 30 13:57:07.197575 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:57:07.441050 sshd[5288]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:07.445689 systemd[1]: sshd@16-64.23.157.134:22-147.75.109.163:52458.service: Deactivated successfully. 
Jan 30 13:57:07.450570 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:57:07.453338 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:57:07.455351 systemd-logind[1449]: Removed session 17.
Jan 30 13:57:12.461748 systemd[1]: Started sshd@17-64.23.157.134:22-147.75.109.163:45658.service - OpenSSH per-connection server daemon (147.75.109.163:45658).
Jan 30 13:57:12.568073 sshd[5308]: Accepted publickey for core from 147.75.109.163 port 45658 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:12.570637 sshd[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:12.577371 systemd-logind[1449]: New session 18 of user core.
Jan 30 13:57:12.587651 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:57:13.175170 sshd[5308]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:13.187719 systemd[1]: sshd@17-64.23.157.134:22-147.75.109.163:45658.service: Deactivated successfully.
Jan 30 13:57:13.191599 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:57:13.195416 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:57:13.202721 systemd[1]: Started sshd@18-64.23.157.134:22-147.75.109.163:45674.service - OpenSSH per-connection server daemon (147.75.109.163:45674).
Jan 30 13:57:13.206042 systemd-logind[1449]: Removed session 18.
Jan 30 13:57:13.282346 sshd[5321]: Accepted publickey for core from 147.75.109.163 port 45674 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:13.284577 sshd[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:13.291983 systemd-logind[1449]: New session 19 of user core.
Jan 30 13:57:13.298657 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:57:13.709174 sshd[5321]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:13.719648 systemd[1]: sshd@18-64.23.157.134:22-147.75.109.163:45674.service: Deactivated successfully.
Jan 30 13:57:13.723053 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:57:13.725758 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:57:13.734747 systemd[1]: Started sshd@19-64.23.157.134:22-147.75.109.163:45688.service - OpenSSH per-connection server daemon (147.75.109.163:45688).
Jan 30 13:57:13.740426 systemd-logind[1449]: Removed session 19.
Jan 30 13:57:13.822262 sshd[5332]: Accepted publickey for core from 147.75.109.163 port 45688 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:13.824868 sshd[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:13.830520 systemd-logind[1449]: New session 20 of user core.
Jan 30 13:57:13.839711 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:57:16.154204 sshd[5332]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:16.173692 systemd[1]: Started sshd@20-64.23.157.134:22-147.75.109.163:45702.service - OpenSSH per-connection server daemon (147.75.109.163:45702).
Jan 30 13:57:16.176052 systemd[1]: sshd@19-64.23.157.134:22-147.75.109.163:45688.service: Deactivated successfully.
Jan 30 13:57:16.183053 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:57:16.188098 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:57:16.193140 systemd-logind[1449]: Removed session 20.
Jan 30 13:57:16.262836 sshd[5348]: Accepted publickey for core from 147.75.109.163 port 45702 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:16.264669 sshd[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:16.273456 systemd-logind[1449]: New session 21 of user core.
Jan 30 13:57:16.277598 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:57:17.109862 sshd[5348]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:17.123145 systemd[1]: sshd@20-64.23.157.134:22-147.75.109.163:45702.service: Deactivated successfully.
Jan 30 13:57:17.128543 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:57:17.133742 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:57:17.141978 systemd[1]: Started sshd@21-64.23.157.134:22-147.75.109.163:45706.service - OpenSSH per-connection server daemon (147.75.109.163:45706).
Jan 30 13:57:17.146703 systemd-logind[1449]: Removed session 21.
Jan 30 13:57:17.200456 sshd[5363]: Accepted publickey for core from 147.75.109.163 port 45706 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:17.203079 sshd[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:17.209202 systemd-logind[1449]: New session 22 of user core.
Jan 30 13:57:17.217570 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:57:17.373579 sshd[5363]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:17.379732 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:57:17.380469 systemd[1]: sshd@21-64.23.157.134:22-147.75.109.163:45706.service: Deactivated successfully.
Jan 30 13:57:17.384808 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:57:17.386960 systemd-logind[1449]: Removed session 22.
Jan 30 13:57:18.346741 kubelet[2515]: E0130 13:57:18.346553 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:57:20.345981 kubelet[2515]: E0130 13:57:20.345913 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:57:22.392727 systemd[1]: Started sshd@22-64.23.157.134:22-147.75.109.163:35328.service - OpenSSH per-connection server daemon (147.75.109.163:35328).
Jan 30 13:57:22.437651 sshd[5381]: Accepted publickey for core from 147.75.109.163 port 35328 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:22.438584 sshd[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:22.445555 systemd-logind[1449]: New session 23 of user core.
Jan 30 13:57:22.450872 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 13:57:22.599787 sshd[5381]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:22.604099 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Jan 30 13:57:22.604422 systemd[1]: sshd@22-64.23.157.134:22-147.75.109.163:35328.service: Deactivated successfully.
Jan 30 13:57:22.608295 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 13:57:22.612303 systemd-logind[1449]: Removed session 23.
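The recurring kubelet error from dns.go:153 means the resolv.conf assembled for a pod listed more nameservers than the limit of three that kubelet enforces (matching glibc's MAXNS), so it keeps the first three and drops the rest; note the applied line is not de-duplicated, which is why 67.207.67.3 appears twice. A sketch of that cap-and-warn behavior; the function names here are mine, not kubelet's:

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNS = 3 // glibc resolver limit (MAXNS); kubelet caps to the same

    // capNameservers keeps at most maxNS entries and reports whether any were
    // dropped. Kubelet's real logic lives elsewhere; this only sketches the
    // behavior visible in the log.
    func capNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNS {
            return ns, false
        }
        return ns[:maxNS], true
    }

    func main() {
        // Host resolv.conf entries merged with pod DNS config; the fourth
        // entry is hypothetical, added just to trigger the warning. No
        // de-duplication happens, so duplicates survive into the applied line.
        merged := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.0.0.10"}
        if applied, truncated := capNameservers(merged); truncated {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
                strings.Join(applied, " "))
        }
    }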
Jan 30 13:57:25.688902 systemd[1]: run-containerd-runc-k8s.io-c9d2c5deed4c4487d8bf06f79d3adbe88fdb70997ef07c01813f6e64e2a36607-runc.8Jf3Ke.mount: Deactivated successfully.
Jan 30 13:57:25.741449 kubelet[2515]: E0130 13:57:25.741048 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:57:25.793727 kubelet[2515]: I0130 13:57:25.792194 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dshrw" podStartSLOduration=66.939618267 podStartE2EDuration="1m16.779422143s" podCreationTimestamp="2025-01-30 13:56:09 +0000 UTC" firstStartedPulling="2025-01-30 13:56:35.803572156 +0000 UTC m=+38.641415080" lastFinishedPulling="2025-01-30 13:56:45.643376029 +0000 UTC m=+48.481218956" observedRunningTime="2025-01-30 13:56:46.008841991 +0000 UTC m=+48.846684939" watchObservedRunningTime="2025-01-30 13:57:25.779422143 +0000 UTC m=+88.617265072"
Jan 30 13:57:27.616686 systemd[1]: Started sshd@23-64.23.157.134:22-147.75.109.163:38930.service - OpenSSH per-connection server daemon (147.75.109.163:38930).
Jan 30 13:57:27.714851 sshd[5416]: Accepted publickey for core from 147.75.109.163 port 38930 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:27.717445 sshd[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:27.725791 systemd-logind[1449]: New session 24 of user core.
Jan 30 13:57:27.730586 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 13:57:27.970922 sshd[5416]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:27.975364 systemd[1]: sshd@23-64.23.157.134:22-147.75.109.163:38930.service: Deactivated successfully.
Jan 30 13:57:27.979170 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 13:57:27.981348 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit.
Jan 30 13:57:27.983745 systemd-logind[1449]: Removed session 24.
Jan 30 13:57:32.347566 kubelet[2515]: E0130 13:57:32.346814 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:57:32.990859 systemd[1]: Started sshd@24-64.23.157.134:22-147.75.109.163:38936.service - OpenSSH per-connection server daemon (147.75.109.163:38936).
Jan 30 13:57:33.043010 sshd[5448]: Accepted publickey for core from 147.75.109.163 port 38936 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:33.046455 sshd[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:33.054004 systemd-logind[1449]: New session 25 of user core.
Jan 30 13:57:33.064656 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:57:33.221799 sshd[5448]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:33.229554 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:57:33.231742 systemd[1]: sshd@24-64.23.157.134:22-147.75.109.163:38936.service: Deactivated successfully.
Jan 30 13:57:33.235308 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:57:33.238201 systemd-logind[1449]: Removed session 25.
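The pod_startup_latency_tracker line for csi-node-driver-dshrw is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that span minus the image-pull window (lastFinishedPulling − firstStartedPulling), i.e. startup latency with pull time excluded. A quick Go check that recomputes both from the logged timestamps; this verifies the arithmetic only, it is not kubelet's implementation:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created   := mustParse("2025-01-30 13:56:09 +0000 UTC")
        firstPull := mustParse("2025-01-30 13:56:35.803572156 +0000 UTC")
        lastPull  := mustParse("2025-01-30 13:56:45.643376029 +0000 UTC")
        observed  := mustParse("2025-01-30 13:57:25.779422143 +0000 UTC")

        e2e := observed.Sub(created)         // 1m16.779422143s, as logged
        slo := e2e - lastPull.Sub(firstPull) // pull window excluded

        fmt.Println("podStartE2EDuration:", e2e)
        // Prints 66.93961827, within a few ns of the logged 66.939618267;
        // the exact logged value falls out if the monotonic (m=+...) clock
        // readings are used for the pull window instead of wall-clock times.
        fmt.Println("podStartSLOduration:", slo.Seconds())
    }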
Jan 30 13:57:34.348297 kubelet[2515]: E0130 13:57:34.346568 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:57:38.243774 systemd[1]: Started sshd@25-64.23.157.134:22-147.75.109.163:45096.service - OpenSSH per-connection server daemon (147.75.109.163:45096).
Jan 30 13:57:38.347600 sshd[5465]: Accepted publickey for core from 147.75.109.163 port 45096 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 13:57:38.350684 sshd[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:57:38.357935 systemd-logind[1449]: New session 26 of user core.
Jan 30 13:57:38.363506 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 13:57:38.832091 sshd[5465]: pam_unix(sshd:session): session closed for user core
Jan 30 13:57:38.837688 systemd[1]: sshd@25-64.23.157.134:22-147.75.109.163:45096.service: Deactivated successfully.
Jan 30 13:57:38.840813 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:57:38.842336 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:57:38.843989 systemd-logind[1449]: Removed session 26.