Jan 30 05:01:45.131109 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 05:01:45.131150 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:01:45.131171 kernel: BIOS-provided physical RAM map:
Jan 30 05:01:45.131184 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 05:01:45.131195 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 05:01:45.131207 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 05:01:45.131222 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 30 05:01:45.131235 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 30 05:01:45.131248 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 05:01:45.131265 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 05:01:45.131277 kernel: NX (Execute Disable) protection: active
Jan 30 05:01:45.131290 kernel: APIC: Static calls initialized
Jan 30 05:01:45.131309 kernel: SMBIOS 2.8 present.
Jan 30 05:01:45.131322 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 30 05:01:45.131339 kernel: Hypervisor detected: KVM
Jan 30 05:01:45.131357 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 05:01:45.131376 kernel: kvm-clock: using sched offset of 3724686492 cycles
Jan 30 05:01:45.131392 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 05:01:45.131407 kernel: tsc: Detected 2294.606 MHz processor
Jan 30 05:01:45.131421 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 05:01:45.131437 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 05:01:45.131453 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 30 05:01:45.131468 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 05:01:45.131483 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 05:01:45.131500 kernel: ACPI: Early table checksum verification disabled
Jan 30 05:01:45.131515 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 30 05:01:45.131529 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:45.131544 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:45.131559 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:45.131587 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 30 05:01:45.131602 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:45.131616 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:45.131631 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:45.131650 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 05:01:45.131665 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 30 05:01:45.131679 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 30 05:01:45.131694 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 30 05:01:45.131709 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 30 05:01:45.131724 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 30 05:01:45.131739 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 30 05:01:45.131763 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 30 05:01:45.131779 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 05:01:45.131794 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 05:01:45.131810 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 05:01:45.131843 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 05:01:45.131863 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 30 05:01:45.131879 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 30 05:01:45.131898 kernel: Zone ranges:
Jan 30 05:01:45.131914 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 05:01:45.131931 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 30 05:01:45.131947 kernel: Normal empty
Jan 30 05:01:45.131962 kernel: Movable zone start for each node
Jan 30 05:01:45.131977 kernel: Early memory node ranges
Jan 30 05:01:45.131992 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 05:01:45.132004 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 30 05:01:45.132019 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 30 05:01:45.132038 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 05:01:45.132054 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 05:01:45.132073 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 30 05:01:45.132089 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 05:01:45.132104 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 05:01:45.132121 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 05:01:45.132137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 05:01:45.132154 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 05:01:45.132174 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 05:01:45.132194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 05:01:45.132210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 05:01:45.132226 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 05:01:45.132241 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 05:01:45.132257 kernel: TSC deadline timer available
Jan 30 05:01:45.132274 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 05:01:45.132290 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 05:01:45.132306 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 30 05:01:45.132325 kernel: Booting paravirtualized kernel on KVM
Jan 30 05:01:45.132342 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 05:01:45.132361 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 05:01:45.132377 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 05:01:45.132393 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 05:01:45.132408 kernel: pcpu-alloc: [0] 0 1
Jan 30 05:01:45.132424 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 05:01:45.132442 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:01:45.132458 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 05:01:45.132477 kernel: random: crng init done
Jan 30 05:01:45.132493 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 05:01:45.132509 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 05:01:45.132525 kernel: Fallback order for Node 0: 0
Jan 30 05:01:45.132541 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 30 05:01:45.132557 kernel: Policy zone: DMA32
Jan 30 05:01:45.134679 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 05:01:45.134697 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Jan 30 05:01:45.134715 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 05:01:45.134739 kernel: Kernel/User page tables isolation: enabled
Jan 30 05:01:45.134754 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 05:01:45.134771 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 05:01:45.134788 kernel: Dynamic Preempt: voluntary
Jan 30 05:01:45.134803 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 05:01:45.134820 kernel: rcu: RCU event tracing is enabled.
Jan 30 05:01:45.134836 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 05:01:45.134852 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 05:01:45.134868 kernel: Rude variant of Tasks RCU enabled.
Jan 30 05:01:45.134883 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 05:01:45.134904 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 05:01:45.134921 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 05:01:45.134936 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 05:01:45.134952 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 05:01:45.134972 kernel: Console: colour VGA+ 80x25
Jan 30 05:01:45.135005 kernel: printk: console [tty0] enabled
Jan 30 05:01:45.135020 kernel: printk: console [ttyS0] enabled
Jan 30 05:01:45.135036 kernel: ACPI: Core revision 20230628
Jan 30 05:01:45.135053 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 05:01:45.135072 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 05:01:45.135088 kernel: x2apic enabled
Jan 30 05:01:45.135105 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 05:01:45.135121 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 05:01:45.135137 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns
Jan 30 05:01:45.135153 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294606)
Jan 30 05:01:45.135169 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 05:01:45.135186 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 05:01:45.135218 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 05:01:45.135236 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 05:01:45.135253 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 05:01:45.135272 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 05:01:45.135289 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 05:01:45.135307 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 05:01:45.135324 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 05:01:45.135341 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 05:01:45.135358 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 05:01:45.135382 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 05:01:45.135398 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 05:01:45.135416 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 05:01:45.135433 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 05:01:45.135451 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 05:01:45.135468 kernel: Freeing SMP alternatives memory: 32K
Jan 30 05:01:45.135486 kernel: pid_max: default: 32768 minimum: 301
Jan 30 05:01:45.135503 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 05:01:45.135536 kernel: landlock: Up and running.
Jan 30 05:01:45.135553 kernel: SELinux: Initializing.
Jan 30 05:01:45.135584 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:01:45.135601 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 05:01:45.135619 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 30 05:01:45.135636 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:01:45.135654 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:01:45.135671 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 05:01:45.135688 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 30 05:01:45.135710 kernel: signal: max sigframe size: 1776
Jan 30 05:01:45.135728 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 05:01:45.135744 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 05:01:45.135761 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 05:01:45.135778 kernel: smp: Bringing up secondary CPUs ...
Jan 30 05:01:45.135796 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 05:01:45.135813 kernel: .... node #0, CPUs: #1
Jan 30 05:01:45.135830 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 05:01:45.135851 kernel: smpboot: Max logical packages: 1
Jan 30 05:01:45.135871 kernel: smpboot: Total of 2 processors activated (9178.42 BogoMIPS)
Jan 30 05:01:45.135888 kernel: devtmpfs: initialized
Jan 30 05:01:45.135905 kernel: x86/mm: Memory block size: 128MB
Jan 30 05:01:45.135923 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 05:01:45.135939 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 05:01:45.135957 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 05:01:45.135974 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 05:01:45.135992 kernel: audit: initializing netlink subsys (disabled)
Jan 30 05:01:45.136009 kernel: audit: type=2000 audit(1738213303.415:1): state=initialized audit_enabled=0 res=1
Jan 30 05:01:45.136030 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 05:01:45.136047 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 05:01:45.136064 kernel: cpuidle: using governor menu
Jan 30 05:01:45.136081 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 05:01:45.136099 kernel: dca service started, version 1.12.1
Jan 30 05:01:45.136116 kernel: PCI: Using configuration type 1 for base access
Jan 30 05:01:45.136134 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 05:01:45.136151 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 05:01:45.136169 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 05:01:45.136189 kernel: ACPI: Added _OSI(Module Device)
Jan 30 05:01:45.136207 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 05:01:45.136224 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 05:01:45.136242 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 05:01:45.136259 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 05:01:45.136277 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 05:01:45.136294 kernel: ACPI: Interpreter enabled
Jan 30 05:01:45.136316 kernel: ACPI: PM: (supports S0 S5)
Jan 30 05:01:45.136334 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 05:01:45.136356 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 05:01:45.136375 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 05:01:45.136392 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 05:01:45.136409 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 05:01:45.136740 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 05:01:45.136927 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 05:01:45.137075 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 05:01:45.137100 kernel: acpiphp: Slot [3] registered
Jan 30 05:01:45.137113 kernel: acpiphp: Slot [4] registered
Jan 30 05:01:45.137132 kernel: acpiphp: Slot [5] registered
Jan 30 05:01:45.137145 kernel: acpiphp: Slot [6] registered
Jan 30 05:01:45.137159 kernel: acpiphp: Slot [7] registered
Jan 30 05:01:45.137187 kernel: acpiphp: Slot [8] registered
Jan 30 05:01:45.137200 kernel: acpiphp: Slot [9] registered
Jan 30 05:01:45.137215 kernel: acpiphp: Slot [10] registered
Jan 30 05:01:45.137230 kernel: acpiphp: Slot [11] registered
Jan 30 05:01:45.137246 kernel: acpiphp: Slot [12] registered
Jan 30 05:01:45.137268 kernel: acpiphp: Slot [13] registered
Jan 30 05:01:45.137285 kernel: acpiphp: Slot [14] registered
Jan 30 05:01:45.137303 kernel: acpiphp: Slot [15] registered
Jan 30 05:01:45.137320 kernel: acpiphp: Slot [16] registered
Jan 30 05:01:45.137349 kernel: acpiphp: Slot [17] registered
Jan 30 05:01:45.137367 kernel: acpiphp: Slot [18] registered
Jan 30 05:01:45.137385 kernel: acpiphp: Slot [19] registered
Jan 30 05:01:45.137403 kernel: acpiphp: Slot [20] registered
Jan 30 05:01:45.137420 kernel: acpiphp: Slot [21] registered
Jan 30 05:01:45.137441 kernel: acpiphp: Slot [22] registered
Jan 30 05:01:45.137459 kernel: acpiphp: Slot [23] registered
Jan 30 05:01:45.137474 kernel: acpiphp: Slot [24] registered
Jan 30 05:01:45.137489 kernel: acpiphp: Slot [25] registered
Jan 30 05:01:45.137506 kernel: acpiphp: Slot [26] registered
Jan 30 05:01:45.137523 kernel: acpiphp: Slot [27] registered
Jan 30 05:01:45.137541 kernel: acpiphp: Slot [28] registered
Jan 30 05:01:45.137558 kernel: acpiphp: Slot [29] registered
Jan 30 05:01:45.137589 kernel: acpiphp: Slot [30] registered
Jan 30 05:01:45.137606 kernel: acpiphp: Slot [31] registered
Jan 30 05:01:45.137627 kernel: PCI host bridge to bus 0000:00
Jan 30 05:01:45.139831 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 05:01:45.139985 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 05:01:45.140138 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 05:01:45.140278 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 05:01:45.140413 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 30 05:01:45.141671 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 05:01:45.141901 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 05:01:45.142091 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 05:01:45.142298 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 05:01:45.142505 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 30 05:01:45.142716 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 05:01:45.142888 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 05:01:45.143062 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 05:01:45.143220 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 05:01:45.143421 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 30 05:01:45.146711 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 30 05:01:45.146946 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 05:01:45.147124 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 05:01:45.147306 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 05:01:45.147527 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 05:01:45.147796 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 05:01:45.148004 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 30 05:01:45.148175 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 30 05:01:45.148367 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 05:01:45.148557 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 05:01:45.150821 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 05:01:45.151012 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 30 05:01:45.151181 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 30 05:01:45.151351 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 30 05:01:45.151643 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 05:01:45.151851 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 30 05:01:45.152020 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 30 05:01:45.154851 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 30 05:01:45.155064 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 30 05:01:45.155244 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 30 05:01:45.155418 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 30 05:01:45.155618 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 30 05:01:45.155810 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 30 05:01:45.155985 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 05:01:45.156180 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 30 05:01:45.156515 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 30 05:01:45.156832 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 30 05:01:45.157023 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 30 05:01:45.157191 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 30 05:01:45.157373 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 30 05:01:45.158688 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 05:01:45.158913 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 30 05:01:45.159088 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 30 05:01:45.159112 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 05:01:45.159129 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 05:01:45.159145 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 05:01:45.159168 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 05:01:45.159182 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 05:01:45.159206 kernel: iommu: Default domain type: Translated
Jan 30 05:01:45.159221 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 05:01:45.159236 kernel: PCI: Using ACPI for IRQ routing
Jan 30 05:01:45.159259 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 05:01:45.159284 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 05:01:45.159305 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 30 05:01:45.159486 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 05:01:45.160831 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 05:01:45.161046 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 05:01:45.161076 kernel: vgaarb: loaded
Jan 30 05:01:45.161098 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 05:01:45.161118 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 05:01:45.161132 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 05:01:45.161147 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 05:01:45.161162 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 05:01:45.161176 kernel: pnp: PnP ACPI init
Jan 30 05:01:45.161217 kernel: pnp: PnP ACPI: found 4 devices
Jan 30 05:01:45.161242 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 05:01:45.161258 kernel: NET: Registered PF_INET protocol family
Jan 30 05:01:45.161276 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 05:01:45.161293 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 05:01:45.161309 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 05:01:45.161330 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 05:01:45.161346 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 05:01:45.161364 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 05:01:45.161380 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:01:45.161401 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 05:01:45.161417 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 05:01:45.161433 kernel: NET: Registered PF_XDP protocol family
Jan 30 05:01:45.162687 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 05:01:45.162842 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 05:01:45.162986 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 05:01:45.163134 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 05:01:45.163281 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 30 05:01:45.163466 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 05:01:45.164770 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 05:01:45.164807 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 05:01:45.164979 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 45782 usecs
Jan 30 05:01:45.165003 kernel: PCI: CLS 0 bytes, default 64
Jan 30 05:01:45.165021 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 05:01:45.165040 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134dbeb26, max_idle_ns: 440795298546 ns
Jan 30 05:01:45.165057 kernel: Initialise system trusted keyrings
Jan 30 05:01:45.165071 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 05:01:45.165097 kernel: Key type asymmetric registered
Jan 30 05:01:45.165111 kernel: Asymmetric key parser 'x509' registered
Jan 30 05:01:45.165126 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 05:01:45.165141 kernel: io scheduler mq-deadline registered
Jan 30 05:01:45.165157 kernel: io scheduler kyber registered
Jan 30 05:01:45.165172 kernel: io scheduler bfq registered
Jan 30 05:01:45.165188 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 05:01:45.165205 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 05:01:45.165221 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 05:01:45.165242 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 05:01:45.165258 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 05:01:45.165274 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 05:01:45.165289 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 05:01:45.165305 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 05:01:45.165318 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 05:01:45.165578 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 05:01:45.167631 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 05:01:45.167912 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 05:01:45.168061 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T05:01:44 UTC (1738213304)
Jan 30 05:01:45.168213 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 05:01:45.168235 kernel: intel_pstate: CPU model not supported
Jan 30 05:01:45.168251 kernel: NET: Registered PF_INET6 protocol family
Jan 30 05:01:45.168267 kernel: Segment Routing with IPv6
Jan 30 05:01:45.168282 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 05:01:45.168298 kernel: NET: Registered PF_PACKET protocol family
Jan 30 05:01:45.168315 kernel: Key type dns_resolver registered
Jan 30 05:01:45.168340 kernel: IPI shorthand broadcast: enabled
Jan 30 05:01:45.168357 kernel: sched_clock: Marking stable (1240006233, 178032944)->(1454116093, -36076916)
Jan 30 05:01:45.168374 kernel: registered taskstats version 1
Jan 30 05:01:45.168390 kernel: Loading compiled-in X.509 certificates
Jan 30 05:01:45.168407 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 05:01:45.168423 kernel: Key type .fscrypt registered
Jan 30 05:01:45.168438 kernel: Key type fscrypt-provisioning registered
Jan 30 05:01:45.168453 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 05:01:45.168474 kernel: ima: Allocated hash algorithm: sha1
Jan 30 05:01:45.168490 kernel: ima: No architecture policies found
Jan 30 05:01:45.168505 kernel: clk: Disabling unused clocks
Jan 30 05:01:45.168521 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 05:01:45.168548 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 05:01:45.168654 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 05:01:45.168679 kernel: Run /init as init process
Jan 30 05:01:45.168702 kernel: with arguments:
Jan 30 05:01:45.168725 kernel: /init
Jan 30 05:01:45.168747 kernel: with environment:
Jan 30 05:01:45.168762 kernel: HOME=/
Jan 30 05:01:45.168776 kernel: TERM=linux
Jan 30 05:01:45.168791 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 05:01:45.168812 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 05:01:45.168832 systemd[1]: Detected virtualization kvm.
Jan 30 05:01:45.168849 systemd[1]: Detected architecture x86-64.
Jan 30 05:01:45.168865 systemd[1]: Running in initrd.
Jan 30 05:01:45.168890 systemd[1]: No hostname configured, using default hostname.
Jan 30 05:01:45.168914 systemd[1]: Hostname set to .
Jan 30 05:01:45.168931 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 05:01:45.168949 systemd[1]: Queued start job for default target initrd.target.
Jan 30 05:01:45.168966 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 05:01:45.168989 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 05:01:45.169009 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 05:01:45.169027 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 05:01:45.169049 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 05:01:45.169067 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 05:01:45.169087 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 05:01:45.169103 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 05:01:45.169119 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 05:01:45.169135 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 05:01:45.169151 systemd[1]: Reached target paths.target - Path Units.
Jan 30 05:01:45.169185 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 05:01:45.169210 systemd[1]: Reached target swap.target - Swaps.
Jan 30 05:01:45.169236 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 05:01:45.169260 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 05:01:45.169283 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 05:01:45.169310 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 05:01:45.169334 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 05:01:45.169357 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 05:01:45.169380 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 05:01:45.169403 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 05:01:45.169419 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 05:01:45.169435 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 05:01:45.169451 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 05:01:45.169469 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 05:01:45.169493 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 05:01:45.169510 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 05:01:45.169525 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 05:01:45.169542 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:01:45.171651 systemd-journald[183]: Collecting audit messages is disabled.
Jan 30 05:01:45.171744 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 05:01:45.171764 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 05:01:45.171781 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 05:01:45.171801 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 05:01:45.171825 systemd-journald[183]: Journal started
Jan 30 05:01:45.171861 systemd-journald[183]: Runtime Journal (/run/log/journal/65fa1db9036c4a778240cba45d996f51) is 4.9M, max 39.3M, 34.4M free.
Jan 30 05:01:45.154729 systemd-modules-load[184]: Inserted module 'overlay'
Jan 30 05:01:45.224466 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 05:01:45.224512 kernel: Bridge firewalling registered
Jan 30 05:01:45.224553 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 05:01:45.199239 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 30 05:01:45.223578 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 05:01:45.227214 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:01:45.231810 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:01:45.240952 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 05:01:45.254813 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 05:01:45.258176 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 05:01:45.277847 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 05:01:45.281669 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:01:45.282814 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 05:01:45.286853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 05:01:45.290815 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 05:01:45.300787 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 05:01:45.301715 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 05:01:45.325438 dracut-cmdline[216]: dracut-dracut-053
Jan 30 05:01:45.333696 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 05:01:45.366192 systemd-resolved[217]: Positive Trust Anchors:
Jan 30 05:01:45.366214 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 05:01:45.366300 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 05:01:45.376183 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jan 30 05:01:45.380600 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 05:01:45.381402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 05:01:45.463642 kernel: SCSI subsystem initialized
Jan 30 05:01:45.475644 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 05:01:45.490607 kernel: iscsi: registered transport (tcp)
Jan 30 05:01:45.520125 kernel: iscsi: registered transport (qla4xxx)
Jan 30 05:01:45.520228 kernel: QLogic iSCSI HBA Driver
Jan 30 05:01:45.582359 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 05:01:45.587847 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 05:01:45.639629 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 05:01:45.639721 kernel: device-mapper: uevent: version 1.0.3
Jan 30 05:01:45.641763 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 05:01:45.695629 kernel: raid6: avx2x4 gen() 15650 MB/s
Jan 30 05:01:45.713628 kernel: raid6: avx2x2 gen() 15476 MB/s
Jan 30 05:01:45.732317 kernel: raid6: avx2x1 gen() 12191 MB/s
Jan 30 05:01:45.732414 kernel: raid6: using algorithm avx2x4 gen() 15650 MB/s
Jan 30 05:01:45.751244 kernel: raid6: .... xor() 3978 MB/s, rmw enabled
Jan 30 05:01:45.751344 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 05:01:45.785609 kernel: xor: automatically using best checksumming function avx
Jan 30 05:01:46.030645 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 05:01:46.046324 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 05:01:46.052866 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 05:01:46.087415 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 30 05:01:46.096259 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 05:01:46.102769 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 05:01:46.131281 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Jan 30 05:01:46.180959 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 05:01:46.186888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 05:01:46.280169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 05:01:46.290796 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 05:01:46.316760 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 05:01:46.321591 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 05:01:46.323001 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 05:01:46.324844 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 05:01:46.333657 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 05:01:46.360693 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 05:01:46.407613 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 30 05:01:46.453094 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 05:01:46.453127 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 05:01:46.453309 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 05:01:46.453336 kernel: GPT:9289727 != 125829119
Jan 30 05:01:46.453361 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 05:01:46.453386 kernel: GPT:9289727 != 125829119
Jan 30 05:01:46.453411 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 05:01:46.453436 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:01:46.453470 kernel: scsi host0: Virtio SCSI HBA
Jan 30 05:01:46.459835 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 30 05:01:46.466531 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jan 30 05:01:46.466759 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 05:01:46.466783 kernel: AES CTR mode by8 optimization enabled
Jan 30 05:01:46.462015 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 05:01:46.462359 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:01:46.466839 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:01:46.471747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 05:01:46.472037 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:01:46.474483 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:01:46.488552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 05:01:46.520760 kernel: ACPI: bus type USB registered
Jan 30 05:01:46.520838 kernel: usbcore: registered new interface driver usbfs
Jan 30 05:01:46.523199 kernel: usbcore: registered new interface driver hub
Jan 30 05:01:46.527620 kernel: usbcore: registered new device driver usb
Jan 30 05:01:46.592602 kernel: libata version 3.00 loaded.
Jan 30 05:01:46.596112 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 30 05:01:46.601173 kernel: scsi host1: ata_piix
Jan 30 05:01:46.601461 kernel: scsi host2: ata_piix
Jan 30 05:01:46.603409 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 30 05:01:46.603440 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 30 05:01:46.624596 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451)
Jan 30 05:01:46.626593 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (450)
Jan 30 05:01:46.647062 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 05:01:46.672381 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 05:01:46.678735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 05:01:46.688587 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 05:01:46.689431 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 05:01:46.699814 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 05:01:46.705864 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 05:01:46.710821 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 05:01:46.724969 disk-uuid[542]: Primary Header is updated.
Jan 30 05:01:46.724969 disk-uuid[542]: Secondary Entries is updated.
Jan 30 05:01:46.724969 disk-uuid[542]: Secondary Header is updated.
Jan 30 05:01:46.736756 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:01:46.747251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:01:46.750288 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 05:01:46.764605 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:01:46.808678 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 30 05:01:46.824749 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 30 05:01:46.825020 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 30 05:01:46.825235 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 30 05:01:46.825463 kernel: hub 1-0:1.0: USB hub found
Jan 30 05:01:46.825759 kernel: hub 1-0:1.0: 2 ports detected
Jan 30 05:01:47.758675 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 05:01:47.759103 disk-uuid[544]: The operation has completed successfully.
Jan 30 05:01:47.834001 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 05:01:47.834201 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 05:01:47.859909 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 05:01:47.872299 sh[567]: Success
Jan 30 05:01:47.892609 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 05:01:47.959367 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 05:01:47.968701 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 05:01:47.971909 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 05:01:48.013639 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 05:01:48.013739 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:01:48.015932 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 05:01:48.019606 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 05:01:48.019698 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 05:01:48.033874 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 05:01:48.035622 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 05:01:48.040895 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 05:01:48.044528 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 05:01:48.062281 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:01:48.062358 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:01:48.065258 kernel: BTRFS info (device vda6): using free space tree
Jan 30 05:01:48.070598 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 05:01:48.085312 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 05:01:48.089019 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:01:48.100699 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 05:01:48.108945 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 05:01:48.280509 ignition[665]: Ignition 2.19.0
Jan 30 05:01:48.280524 ignition[665]: Stage: fetch-offline
Jan 30 05:01:48.280646 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:01:48.280663 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:01:48.284721 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 05:01:48.280886 ignition[665]: parsed url from cmdline: ""
Jan 30 05:01:48.286295 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 05:01:48.280892 ignition[665]: no config URL provided
Jan 30 05:01:48.280900 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 05:01:48.280913 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Jan 30 05:01:48.280922 ignition[665]: failed to fetch config: resource requires networking
Jan 30 05:01:48.281336 ignition[665]: Ignition finished successfully
Jan 30 05:01:48.296916 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 05:01:48.342542 systemd-networkd[756]: lo: Link UP
Jan 30 05:01:48.342556 systemd-networkd[756]: lo: Gained carrier
Jan 30 05:01:48.345538 systemd-networkd[756]: Enumeration completed
Jan 30 05:01:48.346118 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 05:01:48.346124 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 30 05:01:48.347081 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 05:01:48.347395 systemd-networkd[756]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:01:48.347402 systemd-networkd[756]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 05:01:48.348386 systemd-networkd[756]: eth0: Link UP
Jan 30 05:01:48.348392 systemd-networkd[756]: eth0: Gained carrier
Jan 30 05:01:48.348403 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 05:01:48.349025 systemd[1]: Reached target network.target - Network.
Jan 30 05:01:48.353057 systemd-networkd[756]: eth1: Link UP
Jan 30 05:01:48.353063 systemd-networkd[756]: eth1: Gained carrier
Jan 30 05:01:48.353079 systemd-networkd[756]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 05:01:48.355790 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 05:01:48.366667 systemd-networkd[756]: eth0: DHCPv4 address 137.184.120.173/20, gateway 137.184.112.1 acquired from 169.254.169.253
Jan 30 05:01:48.370694 systemd-networkd[756]: eth1: DHCPv4 address 10.124.0.22/20 acquired from 169.254.169.253
Jan 30 05:01:48.395671 ignition[758]: Ignition 2.19.0
Jan 30 05:01:48.395690 ignition[758]: Stage: fetch
Jan 30 05:01:48.396041 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:01:48.396062 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:01:48.396225 ignition[758]: parsed url from cmdline: ""
Jan 30 05:01:48.396232 ignition[758]: no config URL provided
Jan 30 05:01:48.396242 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 05:01:48.396259 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jan 30 05:01:48.396292 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 30 05:01:48.424580 ignition[758]: GET result: OK
Jan 30 05:01:48.425293 ignition[758]: parsing config with SHA512: 2be21073bcafdf3cf343592137b33804b2541bfa99885555c0ab94927450fc43d1fcfb985d16fa4b6ede1505084402902a4c2f30999b87038d292af88f27c622
Jan 30 05:01:48.432176 unknown[758]: fetched base config from "system"
Jan 30 05:01:48.432190 unknown[758]: fetched base config from "system"
Jan 30 05:01:48.432200 unknown[758]: fetched user config from "digitalocean"
Jan 30 05:01:48.434877 ignition[758]: fetch: fetch complete
Jan 30 05:01:48.434894 ignition[758]: fetch: fetch passed
Jan 30 05:01:48.434998 ignition[758]: Ignition finished successfully
Jan 30 05:01:48.437955 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 05:01:48.442834 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 05:01:48.491731 ignition[765]: Ignition 2.19.0
Jan 30 05:01:48.491754 ignition[765]: Stage: kargs
Jan 30 05:01:48.492249 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:01:48.492269 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:01:48.494787 ignition[765]: kargs: kargs passed
Jan 30 05:01:48.494891 ignition[765]: Ignition finished successfully
Jan 30 05:01:48.496470 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 05:01:48.500911 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 05:01:48.525934 ignition[771]: Ignition 2.19.0
Jan 30 05:01:48.525946 ignition[771]: Stage: disks
Jan 30 05:01:48.526261 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 30 05:01:48.526276 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 05:01:48.529772 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 05:01:48.527949 ignition[771]: disks: disks passed
Jan 30 05:01:48.531278 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 05:01:48.528020 ignition[771]: Ignition finished successfully
Jan 30 05:01:48.537600 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 05:01:48.539024 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 05:01:48.540078 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 05:01:48.541301 systemd[1]: Reached target basic.target - Basic System.
Jan 30 05:01:48.552898 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 05:01:48.575477 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 05:01:48.582807 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 05:01:48.589771 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 05:01:48.711593 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 05:01:48.712015 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 05:01:48.713160 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 05:01:48.720724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 05:01:48.723703 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 05:01:48.727436 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 30 05:01:48.736786 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 05:01:48.752626 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787)
Jan 30 05:01:48.752658 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 05:01:48.752672 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 05:01:48.752694 kernel: BTRFS info (device vda6): using free space tree
Jan 30 05:01:48.752706 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 05:01:48.737848 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 05:01:48.737894 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 05:01:48.766089 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 05:01:48.768318 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 05:01:48.775849 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 05:01:48.849292 coreos-metadata[789]: Jan 30 05:01:48.848 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:01:48.863587 coreos-metadata[789]: Jan 30 05:01:48.863 INFO Fetch successful Jan 30 05:01:48.867784 coreos-metadata[790]: Jan 30 05:01:48.867 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:01:48.873106 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 05:01:48.876037 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 30 05:01:48.876180 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 30 05:01:48.883610 coreos-metadata[790]: Jan 30 05:01:48.883 INFO Fetch successful Jan 30 05:01:48.885709 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory Jan 30 05:01:48.892610 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 05:01:48.894705 coreos-metadata[790]: Jan 30 05:01:48.892 INFO wrote hostname ci-4081.3.0-d-47de560844 to /sysroot/etc/hostname Jan 30 05:01:48.895843 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 05:01:48.903923 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 05:01:49.033956 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 05:01:49.039735 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 05:01:49.054974 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 05:01:49.066335 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 05:01:49.068428 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 05:01:49.099680 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 05:01:49.117728 ignition[908]: INFO : Ignition 2.19.0 Jan 30 05:01:49.117728 ignition[908]: INFO : Stage: mount Jan 30 05:01:49.119516 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:01:49.119516 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 05:01:49.121969 ignition[908]: INFO : mount: mount passed Jan 30 05:01:49.121969 ignition[908]: INFO : Ignition finished successfully Jan 30 05:01:49.121361 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 05:01:49.132740 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 05:01:49.156225 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 05:01:49.170606 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (919) Jan 30 05:01:49.176637 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 05:01:49.176733 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 05:01:49.176754 kernel: BTRFS info (device vda6): using free space tree Jan 30 05:01:49.181629 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 05:01:49.185907 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 05:01:49.219386 ignition[935]: INFO : Ignition 2.19.0 Jan 30 05:01:49.219386 ignition[935]: INFO : Stage: files Jan 30 05:01:49.221136 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:01:49.221136 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 05:01:49.223227 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Jan 30 05:01:49.224146 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 05:01:49.224146 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 05:01:49.228687 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 05:01:49.229956 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 05:01:49.229956 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 05:01:49.229603 unknown[935]: wrote ssh authorized keys file for user: core Jan 30 05:01:49.233229 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 05:01:49.233229 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 05:01:49.233229 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 05:01:49.233229 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 05:01:49.274595 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 05:01:49.402152 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 05:01:49.403691 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 05:01:49.403691 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 05:01:49.403691 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 05:01:49.403691 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 05:01:49.403691 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 05:01:49.403691 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 05:01:49.403691 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 05:01:49.403691 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 05:01:49.403691 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 05:01:49.413664 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 05:01:49.413664 
ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:01:49.413664 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:01:49.413664 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:01:49.413664 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 05:01:49.512826 systemd-networkd[756]: eth0: Gained IPv6LL Jan 30 05:01:49.704477 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 05:01:49.769335 systemd-networkd[756]: eth1: Gained IPv6LL Jan 30 05:01:50.029289 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 05:01:50.029289 ignition[935]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 30 05:01:50.031881 ignition[935]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 05:01:50.031881 ignition[935]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 05:01:50.031881 ignition[935]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 30 05:01:50.031881 ignition[935]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 30 05:01:50.031881 ignition[935]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 05:01:50.031881 ignition[935]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 05:01:50.031881 ignition[935]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 30 05:01:50.031881 ignition[935]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 30 05:01:50.031881 ignition[935]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 05:01:50.041380 ignition[935]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 05:01:50.041380 ignition[935]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 05:01:50.041380 ignition[935]: INFO : files: files passed Jan 30 05:01:50.041380 ignition[935]: INFO : Ignition finished successfully Jan 30 05:01:50.033802 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 05:01:50.041903 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 05:01:50.051865 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 05:01:50.060762 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 30 05:01:50.060937 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 05:01:50.068912 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:01:50.068912 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:01:50.073017 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 05:01:50.076690 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 05:01:50.078442 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 05:01:50.084839 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 05:01:50.133495 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 05:01:50.133707 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 05:01:50.135323 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 05:01:50.136541 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 05:01:50.137966 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 05:01:50.146868 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 05:01:50.167202 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 05:01:50.171826 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 05:01:50.196982 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:01:50.199009 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 05:01:50.200840 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 05:01:50.201769 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 05:01:50.202048 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 05:01:50.203737 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 05:01:50.204644 systemd[1]: Stopped target basic.target - Basic System. Jan 30 05:01:50.205939 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 05:01:50.207241 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 05:01:50.208591 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 05:01:50.210232 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 05:01:50.211727 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 05:01:50.213358 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 05:01:50.214907 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 05:01:50.216530 systemd[1]: Stopped target swap.target - Swaps. Jan 30 05:01:50.218005 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 05:01:50.218268 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 05:01:50.219891 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 05:01:50.220894 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 05:01:50.222154 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 30 05:01:50.222327 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 05:01:50.224505 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 05:01:50.224841 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 05:01:50.227003 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 05:01:50.227374 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 05:01:50.231216 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 05:01:50.231424 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 05:01:50.232833 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 05:01:50.233078 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 05:01:50.241068 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 05:01:50.243979 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 05:01:50.245674 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 05:01:50.245973 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 05:01:50.252929 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 05:01:50.253130 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 05:01:50.262977 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 05:01:50.263142 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 05:01:50.281251 ignition[989]: INFO : Ignition 2.19.0 Jan 30 05:01:50.281251 ignition[989]: INFO : Stage: umount Jan 30 05:01:50.283076 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 05:01:50.283076 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 05:01:50.286243 ignition[989]: INFO : umount: umount passed Jan 30 05:01:50.286243 ignition[989]: INFO : Ignition finished successfully Jan 30 05:01:50.290123 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 05:01:50.290286 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 05:01:50.293239 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 05:01:50.293360 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 05:01:50.294259 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 05:01:50.294344 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 05:01:50.295214 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 05:01:50.295289 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 05:01:50.295975 systemd[1]: Stopped target network.target - Network. Jan 30 05:01:50.297166 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 05:01:50.297269 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 05:01:50.300077 systemd[1]: Stopped target paths.target - Path Units. Jan 30 05:01:50.317323 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 05:01:50.320681 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 05:01:50.321493 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 05:01:50.322150 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 30 05:01:50.325985 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 05:01:50.326088 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 05:01:50.344694 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 05:01:50.344774 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 05:01:50.346026 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 05:01:50.346230 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 05:01:50.347555 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 05:01:50.347657 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 05:01:50.349381 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 05:01:50.350616 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 05:01:50.385793 systemd-networkd[756]: eth0: DHCPv6 lease lost Jan 30 05:01:50.389696 systemd-networkd[756]: eth1: DHCPv6 lease lost Jan 30 05:01:50.392920 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 05:01:50.394000 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 05:01:50.394208 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 05:01:50.398243 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 05:01:50.398422 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 05:01:50.400312 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 05:01:50.400451 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 05:01:50.404141 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 05:01:50.404226 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 05:01:50.405701 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 05:01:50.405791 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 05:01:50.412767 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 05:01:50.414406 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 05:01:50.415280 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 05:01:50.417383 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 05:01:50.417476 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:01:50.418325 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 05:01:50.418428 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 05:01:50.419232 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 05:01:50.419299 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:01:50.421050 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 05:01:50.437893 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 05:01:50.438199 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 05:01:50.439900 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 05:01:50.440004 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 05:01:50.442973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 05:01:50.443039 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 05:01:50.444260 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 05:01:50.444338 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 05:01:50.446385 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 05:01:50.446475 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 05:01:50.447806 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 05:01:50.447889 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 05:01:50.456907 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 05:01:50.457820 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 05:01:50.457925 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 05:01:50.460894 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 05:01:50.461000 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 05:01:50.461865 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 05:01:50.461944 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 05:01:50.464970 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:01:50.465054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:01:50.466604 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 05:01:50.467846 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 05:01:50.469902 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 05:01:50.470037 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 05:01:50.472367 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 05:01:50.478860 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 05:01:50.494394 systemd[1]: Switching root. Jan 30 05:01:50.580401 systemd-journald[183]: Journal stopped Jan 30 05:01:52.203413 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 30 05:01:52.203537 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 05:01:52.203587 kernel: SELinux: policy capability open_perms=1 Jan 30 05:01:52.203613 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 05:01:52.203641 kernel: SELinux: policy capability always_check_network=0 Jan 30 05:01:52.203659 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 05:01:52.203678 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 05:01:52.203702 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 05:01:52.203720 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 05:01:52.203743 kernel: audit: type=1403 audit(1738213310.838:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 05:01:52.203764 systemd[1]: Successfully loaded SELinux policy in 58.926ms. Jan 30 05:01:52.203792 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.036ms. 
Jan 30 05:01:52.203816 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 05:01:52.203842 systemd[1]: Detected virtualization kvm. Jan 30 05:01:52.203862 systemd[1]: Detected architecture x86-64. Jan 30 05:01:52.203883 systemd[1]: Detected first boot. Jan 30 05:01:52.203903 systemd[1]: Hostname set to <ci-4081.3.0-d-47de560844>. Jan 30 05:01:52.203924 systemd[1]: Initializing machine ID from VM UUID. Jan 30 05:01:52.203944 zram_generator::config[1049]: No configuration found. Jan 30 05:01:52.203965 systemd[1]: Populated /etc with preset unit settings. Jan 30 05:01:52.203985 systemd[1]: Queued start job for default target multi-user.target. Jan 30 05:01:52.204010 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 05:01:52.204033 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 05:01:52.204054 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 05:01:52.204080 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 05:01:52.204100 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 05:01:52.204121 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 05:01:52.204141 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 05:01:52.204162 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 05:01:52.204188 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 05:01:52.204210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 05:01:52.204231 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 05:01:52.204255 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 05:01:52.204277 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 05:01:52.204303 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 05:01:52.204326 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 05:01:52.204347 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 05:01:52.204368 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 05:01:52.204394 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 05:01:52.204416 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 05:01:52.204438 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 05:01:52.204460 systemd[1]: Reached target slices.target - Slice Units. Jan 30 05:01:52.204482 systemd[1]: Reached target swap.target - Swaps. Jan 30 05:01:52.204503 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 05:01:52.204524 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 05:01:52.204549 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 30 05:01:52.208467 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 05:01:52.208516 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 05:01:52.208542 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 05:01:52.208596 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 05:01:52.208624 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 05:01:52.208651 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 05:01:52.208676 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 05:01:52.208702 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 05:01:52.208737 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:52.208762 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 05:01:52.208788 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 05:01:52.208814 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 05:01:52.208839 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 05:01:52.208865 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:01:52.208889 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 05:01:52.208914 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 05:01:52.208937 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:01:52.208967 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 05:01:52.208993 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:01:52.209019 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 05:01:52.209043 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:01:52.209068 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 05:01:52.209094 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 05:01:52.209119 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 05:01:52.209144 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 05:01:52.209173 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 05:01:52.209198 kernel: loop: module loaded Jan 30 05:01:52.209226 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 05:01:52.209251 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 05:01:52.209277 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 05:01:52.209302 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:52.209327 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 30 05:01:52.209351 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 05:01:52.209380 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 05:01:52.209405 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 05:01:52.209431 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 05:01:52.209454 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 05:01:52.209479 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 05:01:52.209504 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 05:01:52.209528 kernel: fuse: init (API version 7.39) Jan 30 05:01:52.209552 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 05:01:52.209590 kernel: ACPI: bus type drm_connector registered Jan 30 05:01:52.209620 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:01:52.209645 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:01:52.209671 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:01:52.209696 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:01:52.209720 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 05:01:52.209750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 05:01:52.209776 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 05:01:52.209805 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 05:01:52.209830 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 05:01:52.209902 systemd-journald[1146]: Collecting audit messages is disabled. Jan 30 05:01:52.209955 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:01:52.209981 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:01:52.210022 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 05:01:52.210048 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 05:01:52.210073 systemd-journald[1146]: Journal started Jan 30 05:01:52.210119 systemd-journald[1146]: Runtime Journal (/run/log/journal/65fa1db9036c4a778240cba45d996f51) is 4.9M, max 39.3M, 34.4M free. Jan 30 05:01:52.214314 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 05:01:52.215481 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 05:01:52.237134 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 05:01:52.245791 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 05:01:52.256880 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 05:01:52.259396 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 05:01:52.272055 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 05:01:52.285798 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 05:01:52.286716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 30 05:01:52.298772 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 05:01:52.299700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:01:52.303736 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 05:01:52.319769 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 05:01:52.332892 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 05:01:52.334950 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 05:01:52.341359 systemd-journald[1146]: Time spent on flushing to /var/log/journal/65fa1db9036c4a778240cba45d996f51 is 40.310ms for 977 entries. Jan 30 05:01:52.341359 systemd-journald[1146]: System Journal (/var/log/journal/65fa1db9036c4a778240cba45d996f51) is 8.0M, max 195.6M, 187.6M free. Jan 30 05:01:52.434698 systemd-journald[1146]: Received client request to flush runtime journal. Jan 30 05:01:52.354219 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 05:01:52.369932 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 05:01:52.372324 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 05:01:52.381978 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 05:01:52.414145 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 05:01:52.419913 udevadm[1199]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 05:01:52.427636 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 30 05:01:52.427661 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 30 05:01:52.440985 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 05:01:52.446470 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 05:01:52.456023 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 05:01:52.501113 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 05:01:52.513966 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 05:01:52.536512 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Jan 30 05:01:52.537012 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Jan 30 05:01:52.544480 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 05:01:53.558188 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 05:01:53.567838 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 05:01:53.623937 systemd-udevd[1219]: Using default interface naming scheme 'v255'. Jan 30 05:01:53.659691 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 05:01:53.673775 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 05:01:53.710284 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 05:01:53.795192 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. 
Jan 30 05:01:53.809637 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1227) Jan 30 05:01:53.865120 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 05:01:53.908231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:53.909544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:01:53.916805 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:01:53.930779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:01:53.946792 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:01:53.947870 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 05:01:53.948046 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 05:01:53.948231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:53.953825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:01:53.954115 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:01:53.965209 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:01:53.965501 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:01:53.968976 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:01:53.976790 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:01:53.979864 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:01:53.986471 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:01:53.999595 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 05:01:54.041590 kernel: ACPI: button: Power Button [PWRF] Jan 30 05:01:54.056814 systemd-networkd[1224]: lo: Link UP Jan 30 05:01:54.058224 systemd-networkd[1224]: lo: Gained carrier Jan 30 05:01:54.062134 systemd-networkd[1224]: Enumeration completed Jan 30 05:01:54.063753 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 05:01:54.062910 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 05:01:54.065386 systemd-networkd[1224]: eth0: Configuring with /run/systemd/network/10-9a:86:85:fa:8c:82.network. Jan 30 05:01:54.070088 systemd-networkd[1224]: eth1: Configuring with /run/systemd/network/10-b6:2b:75:8a:ec:1b.network. Jan 30 05:01:54.070947 systemd-networkd[1224]: eth0: Link UP Jan 30 05:01:54.070954 systemd-networkd[1224]: eth0: Gained carrier Jan 30 05:01:54.073872 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 30 05:01:54.076072 systemd-networkd[1224]: eth1: Link UP Jan 30 05:01:54.076079 systemd-networkd[1224]: eth1: Gained carrier Jan 30 05:01:54.100813 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 05:01:54.165643 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 05:01:54.170936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 05:01:54.176952 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 05:01:54.186698 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 05:01:54.187410 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:01:54.205589 kernel: Console: switching to colour dummy device 80x25 Jan 30 05:01:54.205703 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 05:01:54.205743 kernel: [drm] features: -context_init Jan 30 05:01:54.214394 kernel: [drm] number of scanouts: 1 Jan 30 05:01:54.214479 kernel: [drm] number of cap sets: 0 Jan 30 05:01:54.219503 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 05:01:54.215546 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:01:54.215957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:01:54.229899 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:01:54.234327 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 05:01:54.234407 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 05:01:54.243730 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 05:01:54.263656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 05:01:54.264002 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:01:54.279098 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 05:01:54.416623 kernel: EDAC MC: Ver: 3.0.0 Jan 30 05:01:54.444044 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 05:01:54.454812 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 05:01:54.457328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 05:01:54.476803 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 05:01:54.517301 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 05:01:54.519225 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 05:01:54.526866 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 05:01:54.545251 lvm[1284]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 05:01:54.581350 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 05:01:54.582487 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 05:01:54.589782 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 30 05:01:54.590353 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 05:01:54.591181 systemd[1]: Reached target machines.target - Containers. 
Jan 30 05:01:54.593803 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 05:01:54.612588 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 05:01:54.616016 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 05:01:54.619225 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 05:01:54.623925 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 05:01:54.631887 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 05:01:54.638072 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 05:01:54.638402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:01:54.642646 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 05:01:54.652746 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 05:01:54.664641 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 05:01:54.666514 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 05:01:54.696298 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 05:01:54.720454 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 05:01:54.724579 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 05:01:54.744219 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 05:01:54.779617 kernel: loop1: detected capacity change from 0 to 142488 Jan 30 05:01:54.833603 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 05:01:54.881607 kernel: loop3: detected capacity change from 0 to 8 Jan 30 05:01:54.910727 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 05:01:54.947606 kernel: loop5: detected capacity change from 0 to 142488 Jan 30 05:01:54.981626 kernel: loop6: detected capacity change from 0 to 210664 Jan 30 05:01:55.008628 kernel: loop7: detected capacity change from 0 to 8 Jan 30 05:01:55.010329 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 30 05:01:55.013172 (sd-merge)[1310]: Merged extensions into '/usr'. Jan 30 05:01:55.022478 systemd[1]: Reloading requested from client PID 1298 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 05:01:55.022730 systemd[1]: Reloading... Jan 30 05:01:55.125833 zram_generator::config[1338]: No configuration found. Jan 30 05:01:55.400970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:01:55.504593 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 05:01:55.522830 systemd[1]: Reloading finished in 499 ms. Jan 30 05:01:55.528874 systemd-networkd[1224]: eth1: Gained IPv6LL Jan 30 05:01:55.544532 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 05:01:55.548345 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 05:01:55.549363 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 30 05:01:55.561942 systemd[1]: Starting ensure-sysext.service... Jan 30 05:01:55.567875 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 05:01:55.586887 systemd[1]: Reloading requested from client PID 1390 ('systemctl') (unit ensure-sysext.service)... Jan 30 05:01:55.586923 systemd[1]: Reloading... Jan 30 05:01:55.623254 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 05:01:55.625199 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 05:01:55.626643 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 05:01:55.627493 systemd-tmpfiles[1391]: ACLs are not supported, ignoring. Jan 30 05:01:55.627652 systemd-tmpfiles[1391]: ACLs are not supported, ignoring. Jan 30 05:01:55.633613 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 05:01:55.633781 systemd-tmpfiles[1391]: Skipping /boot Jan 30 05:01:55.650700 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 05:01:55.650854 systemd-tmpfiles[1391]: Skipping /boot Jan 30 05:01:55.702674 zram_generator::config[1415]: No configuration found. Jan 30 05:01:55.905452 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:01:55.912856 systemd-networkd[1224]: eth0: Gained IPv6LL Jan 30 05:01:55.995785 systemd[1]: Reloading finished in 408 ms. Jan 30 05:01:56.014168 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 05:01:56.026761 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 05:01:56.034707 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 05:01:56.038042 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 05:01:56.052791 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 05:01:56.070763 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 05:01:56.086745 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:56.087331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:01:56.090968 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:01:56.104723 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:01:56.119769 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:01:56.120449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:01:56.122198 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:56.143855 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:01:56.144168 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 05:01:56.148241 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 05:01:56.151870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:01:56.152055 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:01:56.158388 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:01:56.158589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:01:56.168733 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:56.171184 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:01:56.171727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:01:56.172075 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:01:56.172426 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:01:56.182253 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 05:01:56.185602 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:56.204546 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:56.205235 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 05:01:56.223821 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 05:01:56.237190 augenrules[1507]: No rules Jan 30 05:01:56.247005 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 05:01:56.272843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 05:01:56.280360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 05:01:56.282459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 05:01:56.287501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 05:01:56.291498 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 05:01:56.299106 systemd-resolved[1473]: Positive Trust Anchors: Jan 30 05:01:56.299124 systemd-resolved[1473]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 05:01:56.299164 systemd-resolved[1473]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 05:01:56.301938 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 05:01:56.303386 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 05:01:56.308378 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 05:01:56.313357 systemd-resolved[1473]: Using system hostname 'ci-4081.3.0-d-47de560844'. Jan 30 05:01:56.314530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 05:01:56.314942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 05:01:56.317219 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 05:01:56.319417 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 05:01:56.319674 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 05:01:56.321578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 05:01:56.321812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 05:01:56.323542 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 05:01:56.324434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 05:01:56.334850 systemd[1]: Finished ensure-sysext.service. Jan 30 05:01:56.343229 systemd[1]: Reached target network.target - Network. Jan 30 05:01:56.344879 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 05:01:56.345606 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 05:01:56.346337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 05:01:56.346418 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 05:01:56.353906 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 05:01:56.357817 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 05:01:56.447809 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 05:01:56.448919 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 05:01:56.451280 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 05:01:56.451944 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 05:01:56.452482 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 30 05:01:56.454365 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 05:01:56.454417 systemd[1]: Reached target paths.target - Path Units. Jan 30 05:01:56.455218 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 05:01:56.456128 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 05:01:56.457177 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 05:01:56.457991 systemd[1]: Reached target timers.target - Timer Units. Jan 30 05:01:56.459653 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 05:01:56.464410 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 05:01:56.469799 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 05:01:56.473571 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 05:01:56.474231 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 05:01:56.475662 systemd[1]: Reached target basic.target - Basic System. Jan 30 05:01:56.476413 systemd[1]: System is tainted: cgroupsv1 Jan 30 05:01:56.476478 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:01:56.476515 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 05:01:56.479276 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 05:01:56.484830 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 05:01:56.492833 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 05:01:56.505779 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 05:01:56.516866 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 05:01:56.520226 jq[1540]: false Jan 30 05:01:56.522686 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 05:01:56.531681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:01:56.544796 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 05:01:56.552183 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 05:01:56.563406 dbus-daemon[1538]: [system] SELinux support is enabled Jan 30 05:01:56.567972 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 05:01:56.585425 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 05:01:56.597840 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 30 05:01:56.609106 extend-filesystems[1541]: Found loop4 Jan 30 05:01:56.609106 extend-filesystems[1541]: Found loop5 Jan 30 05:01:56.609106 extend-filesystems[1541]: Found loop6 Jan 30 05:01:56.609106 extend-filesystems[1541]: Found loop7 Jan 30 05:01:56.609106 extend-filesystems[1541]: Found vda Jan 30 05:01:56.609106 extend-filesystems[1541]: Found vda1 Jan 30 05:01:56.609106 extend-filesystems[1541]: Found vda2 Jan 30 05:01:56.609106 extend-filesystems[1541]: Found vda3 Jan 30 05:01:56.609106 extend-filesystems[1541]: Found usr Jan 30 05:01:56.609106 extend-filesystems[1541]: Found vda4 Jan 30 05:01:56.609106 extend-filesystems[1541]: Found vda6 Jan 30 05:01:56.609106 extend-filesystems[1541]: Found vda7 Jan 30 05:01:56.609106 extend-filesystems[1541]: Found vda9 Jan 30 05:01:56.609106 extend-filesystems[1541]: Checking size of /dev/vda9 Jan 30 05:01:56.651138 coreos-metadata[1537]: Jan 30 05:01:56.610 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:01:56.651138 coreos-metadata[1537]: Jan 30 05:01:56.639 INFO Fetch successful Jan 30 05:01:56.618833 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 05:01:56.626706 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 05:01:56.649690 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 05:01:56.663895 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 05:01:56.665965 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 05:01:56.694124 extend-filesystems[1541]: Resized partition /dev/vda9 Jan 30 05:01:56.701418 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 05:01:56.709821 extend-filesystems[1574]: resize2fs 1.47.1 (20-May-2024) Jan 30 05:01:56.703078 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 05:01:56.733800 jq[1568]: true Jan 30 05:01:56.726924 systemd-timesyncd[1532]: Contacted time server 45.61.187.39:123 (0.flatcar.pool.ntp.org). Jan 30 05:01:56.745162 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 05:01:56.727025 systemd-timesyncd[1532]: Initial clock synchronization to Thu 2025-01-30 05:01:56.936381 UTC. Jan 30 05:01:56.727743 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 05:01:56.728133 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 05:01:56.741188 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 05:01:56.741512 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 05:01:56.778451 update_engine[1564]: I20250130 05:01:56.778340 1564 main.cc:92] Flatcar Update Engine starting Jan 30 05:01:56.794987 update_engine[1564]: I20250130 05:01:56.794101 1564 update_check_scheduler.cc:74] Next update check in 6m31s Jan 30 05:01:56.796435 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 05:01:56.814275 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 05:01:56.831055 jq[1584]: true Jan 30 05:01:56.852856 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 05:01:56.865009 tar[1582]: linux-amd64/helm Jan 30 05:01:56.872242 systemd[1]: Started update-engine.service - Update Engine. 
Jan 30 05:01:56.878135 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 05:01:56.878268 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 05:01:56.878299 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 05:01:56.882194 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 05:01:56.882293 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 30 05:01:56.882315 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 05:01:56.883460 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 05:01:56.902613 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 05:01:56.927432 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 05:01:56.950047 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1610) Jan 30 05:01:56.968278 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 05:01:56.968278 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 05:01:56.968278 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 05:01:56.989806 extend-filesystems[1541]: Resized filesystem in /dev/vda9 Jan 30 05:01:56.989806 extend-filesystems[1541]: Found vdb Jan 30 05:01:56.973302 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 05:01:56.976983 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 05:01:57.113360 systemd-logind[1561]: New seat seat0. Jan 30 05:01:57.116072 systemd-logind[1561]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 05:01:57.116102 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 05:01:57.125126 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 05:01:57.164143 bash[1632]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:01:57.158558 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 05:01:57.175030 systemd[1]: Starting sshkeys.service... Jan 30 05:01:57.229143 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 05:01:57.255101 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 30 05:01:57.333946 coreos-metadata[1644]: Jan 30 05:01:57.333 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 05:01:57.351679 coreos-metadata[1644]: Jan 30 05:01:57.351 INFO Fetch successful Jan 30 05:01:57.382055 unknown[1644]: wrote ssh authorized keys file for user: core Jan 30 05:01:57.439970 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 05:01:57.477620 update-ssh-keys[1653]: Updated "/home/core/.ssh/authorized_keys" Jan 30 05:01:57.472947 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 05:01:57.485358 systemd[1]: Finished sshkeys.service. Jan 30 05:01:57.531794 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 05:01:57.666471 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 05:01:57.676608 containerd[1585]: time="2025-01-30T05:01:57.676185495Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 05:01:57.690042 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 05:01:57.742411 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 05:01:57.744129 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 05:01:57.767088 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 05:01:57.784443 containerd[1585]: time="2025-01-30T05:01:57.784036826Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:57.792151 containerd[1585]: time="2025-01-30T05:01:57.792097136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:01:57.792151 containerd[1585]: time="2025-01-30T05:01:57.792143070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 05:01:57.792151 containerd[1585]: time="2025-01-30T05:01:57.792164643Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 05:01:57.794662 containerd[1585]: time="2025-01-30T05:01:57.792352338Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 05:01:57.794662 containerd[1585]: time="2025-01-30T05:01:57.792380524Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:57.794662 containerd[1585]: time="2025-01-30T05:01:57.792443853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:01:57.794662 containerd[1585]: time="2025-01-30T05:01:57.792457149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:57.794662 containerd[1585]: time="2025-01-30T05:01:57.794402348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:01:57.794662 containerd[1585]: time="2025-01-30T05:01:57.794442961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:57.794662 containerd[1585]: time="2025-01-30T05:01:57.794465435Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:01:57.798241 containerd[1585]: time="2025-01-30T05:01:57.796765663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:57.798241 containerd[1585]: time="2025-01-30T05:01:57.796973172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:57.798241 containerd[1585]: time="2025-01-30T05:01:57.797310461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 05:01:57.798241 containerd[1585]: time="2025-01-30T05:01:57.797639145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 05:01:57.798241 containerd[1585]: time="2025-01-30T05:01:57.797668143Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 05:01:57.798241 containerd[1585]: time="2025-01-30T05:01:57.797796029Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 05:01:57.798241 containerd[1585]: time="2025-01-30T05:01:57.797854151Z" level=info msg="metadata content store policy set" policy=shared Jan 30 05:01:57.807586 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.818752843Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.818863703Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.818890094Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.818965246Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.818996326Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.819242379Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.820236643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.820421923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.820445303Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.820466438Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.820486291Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.820505370Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.820526144Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 05:01:57.821630 containerd[1585]: time="2025-01-30T05:01:57.820545570Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820567068Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820607755Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820633830Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820657823Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820686025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820705698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820744732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820774307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820794124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820814768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820833043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820852708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820872723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 05:01:57.822563 containerd[1585]: time="2025-01-30T05:01:57.820896792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.820918636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.820943018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.820963623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.820994060Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.821033070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.821053215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.821069276Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.821132541Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.821161779Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.821182375Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.821208200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.821225264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.821245503Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 05:01:57.826015 containerd[1585]: time="2025-01-30T05:01:57.821267359Z" level=info msg="NRI interface is disabled by configuration." Jan 30 05:01:57.825318 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 05:01:57.832671 containerd[1585]: time="2025-01-30T05:01:57.821283389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 05:01:57.832767 containerd[1585]: time="2025-01-30T05:01:57.827933360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 05:01:57.832767 containerd[1585]: time="2025-01-30T05:01:57.828048819Z" level=info msg="Connect containerd service" Jan 30 05:01:57.832767 containerd[1585]: time="2025-01-30T05:01:57.828117587Z" level=info msg="using legacy CRI server" Jan 30 05:01:57.832767 containerd[1585]: time="2025-01-30T05:01:57.828128512Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 05:01:57.832767 containerd[1585]: time="2025-01-30T05:01:57.828306742Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 05:01:57.832767 containerd[1585]: time="2025-01-30T05:01:57.831177287Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 
05:01:57.835194 containerd[1585]: time="2025-01-30T05:01:57.835135706Z" level=info msg="Start subscribing containerd event" Jan 30 05:01:57.835515 containerd[1585]: time="2025-01-30T05:01:57.835482746Z" level=info msg="Start recovering state" Jan 30 05:01:57.838966 containerd[1585]: time="2025-01-30T05:01:57.835711905Z" level=info msg="Start event monitor" Jan 30 05:01:57.838966 containerd[1585]: time="2025-01-30T05:01:57.835729906Z" level=info msg="Start snapshots syncer" Jan 30 05:01:57.838966 containerd[1585]: time="2025-01-30T05:01:57.835740796Z" level=info msg="Start cni network conf syncer for default" Jan 30 05:01:57.838966 containerd[1585]: time="2025-01-30T05:01:57.835749038Z" level=info msg="Start streaming server" Jan 30 05:01:57.841464 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 05:01:57.842457 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 05:01:57.843736 containerd[1585]: time="2025-01-30T05:01:57.843698979Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 05:01:57.843975 containerd[1585]: time="2025-01-30T05:01:57.843951596Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 05:01:57.844240 containerd[1585]: time="2025-01-30T05:01:57.844223629Z" level=info msg="containerd successfully booted in 0.171612s" Jan 30 05:01:57.846794 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 05:01:58.161041 tar[1582]: linux-amd64/LICENSE Jan 30 05:01:58.161737 tar[1582]: linux-amd64/README.md Jan 30 05:01:58.185440 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 05:01:58.609492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:01:58.615615 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 05:01:58.619187 systemd[1]: Startup finished in 7.484s (kernel) + 7.835s (userspace) = 15.320s. Jan 30 05:01:58.625143 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:01:59.498167 kubelet[1699]: E0130 05:01:59.498027 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:01:59.501472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:01:59.503580 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:02:04.479017 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 05:02:04.491458 systemd[1]: Started sshd@0-137.184.120.173:22-147.75.109.163:54314.service - OpenSSH per-connection server daemon (147.75.109.163:54314). Jan 30 05:02:04.579362 sshd[1712]: Accepted publickey for core from 147.75.109.163 port 54314 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:04.582712 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:04.600989 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 05:02:04.607096 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 05:02:04.612098 systemd-logind[1561]: New session 1 of user core. 
Jan 30 05:02:04.637623 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 05:02:04.649350 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 05:02:04.657849 (systemd)[1718]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 05:02:04.836199 systemd[1718]: Queued start job for default target default.target. Jan 30 05:02:04.837024 systemd[1718]: Created slice app.slice - User Application Slice. Jan 30 05:02:04.837070 systemd[1718]: Reached target paths.target - Paths. Jan 30 05:02:04.837093 systemd[1718]: Reached target timers.target - Timers. Jan 30 05:02:04.847806 systemd[1718]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 05:02:04.859285 systemd[1718]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 05:02:04.859375 systemd[1718]: Reached target sockets.target - Sockets. Jan 30 05:02:04.859395 systemd[1718]: Reached target basic.target - Basic System. Jan 30 05:02:04.859464 systemd[1718]: Reached target default.target - Main User Target. Jan 30 05:02:04.859510 systemd[1718]: Startup finished in 190ms. Jan 30 05:02:04.860297 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 05:02:04.866157 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 05:02:04.939295 systemd[1]: Started sshd@1-137.184.120.173:22-147.75.109.163:54324.service - OpenSSH per-connection server daemon (147.75.109.163:54324). Jan 30 05:02:05.013394 sshd[1730]: Accepted publickey for core from 147.75.109.163 port 54324 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:05.015743 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:05.023924 systemd-logind[1561]: New session 2 of user core. Jan 30 05:02:05.030223 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 05:02:05.099967 sshd[1730]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:05.109125 systemd[1]: Started sshd@2-137.184.120.173:22-147.75.109.163:54334.service - OpenSSH per-connection server daemon (147.75.109.163:54334). Jan 30 05:02:05.110117 systemd[1]: sshd@1-137.184.120.173:22-147.75.109.163:54324.service: Deactivated successfully. Jan 30 05:02:05.119855 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 05:02:05.121545 systemd-logind[1561]: Session 2 logged out. Waiting for processes to exit. Jan 30 05:02:05.123233 systemd-logind[1561]: Removed session 2. Jan 30 05:02:05.172708 sshd[1735]: Accepted publickey for core from 147.75.109.163 port 54334 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:05.174749 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:05.180797 systemd-logind[1561]: New session 3 of user core. Jan 30 05:02:05.189245 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 05:02:05.252884 sshd[1735]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:05.266209 systemd[1]: Started sshd@3-137.184.120.173:22-147.75.109.163:54348.service - OpenSSH per-connection server daemon (147.75.109.163:54348). Jan 30 05:02:05.267148 systemd[1]: sshd@2-137.184.120.173:22-147.75.109.163:54334.service: Deactivated successfully. Jan 30 05:02:05.277053 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 05:02:05.280085 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit. Jan 30 05:02:05.282058 systemd-logind[1561]: Removed session 3. 
Jan 30 05:02:05.320644 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 54348 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:05.323638 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:05.331037 systemd-logind[1561]: New session 4 of user core. Jan 30 05:02:05.339163 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 05:02:05.407965 sshd[1743]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:05.418112 systemd[1]: Started sshd@4-137.184.120.173:22-147.75.109.163:54352.service - OpenSSH per-connection server daemon (147.75.109.163:54352). Jan 30 05:02:05.421104 systemd[1]: sshd@3-137.184.120.173:22-147.75.109.163:54348.service: Deactivated successfully. Jan 30 05:02:05.424278 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 05:02:05.427177 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit. Jan 30 05:02:05.431628 systemd-logind[1561]: Removed session 4. Jan 30 05:02:05.483269 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 54352 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:05.485391 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:05.492918 systemd-logind[1561]: New session 5 of user core. Jan 30 05:02:05.504128 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 05:02:05.588358 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 05:02:05.588908 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:02:05.609684 sudo[1758]: pam_unix(sudo:session): session closed for user root Jan 30 05:02:05.616029 sshd[1751]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:05.627177 systemd[1]: Started sshd@5-137.184.120.173:22-147.75.109.163:54368.service - OpenSSH per-connection server daemon (147.75.109.163:54368). Jan 30 05:02:05.628320 systemd[1]: sshd@4-137.184.120.173:22-147.75.109.163:54352.service: Deactivated successfully. Jan 30 05:02:05.632567 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 05:02:05.638011 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit. Jan 30 05:02:05.640207 systemd-logind[1561]: Removed session 5. Jan 30 05:02:05.686797 sshd[1760]: Accepted publickey for core from 147.75.109.163 port 54368 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:05.689087 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:05.697471 systemd-logind[1561]: New session 6 of user core. Jan 30 05:02:05.700097 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 05:02:05.765748 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 05:02:05.766318 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:02:05.773151 sudo[1768]: pam_unix(sudo:session): session closed for user root Jan 30 05:02:05.782285 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 05:02:05.783376 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:02:05.814178 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 30 05:02:05.816963 auditctl[1771]: No rules Jan 30 05:02:05.817844 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 05:02:05.818201 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 05:02:05.830278 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 05:02:05.869529 augenrules[1790]: No rules Jan 30 05:02:05.872342 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 05:02:05.876685 sudo[1767]: pam_unix(sudo:session): session closed for user root Jan 30 05:02:05.882787 sshd[1760]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:05.890015 systemd[1]: Started sshd@6-137.184.120.173:22-147.75.109.163:54380.service - OpenSSH per-connection server daemon (147.75.109.163:54380). Jan 30 05:02:05.890522 systemd[1]: sshd@5-137.184.120.173:22-147.75.109.163:54368.service: Deactivated successfully. Jan 30 05:02:05.895335 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 05:02:05.895748 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit. Jan 30 05:02:05.899969 systemd-logind[1561]: Removed session 6. Jan 30 05:02:05.959663 sshd[1796]: Accepted publickey for core from 147.75.109.163 port 54380 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:02:05.961747 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:02:05.970906 systemd-logind[1561]: New session 7 of user core. Jan 30 05:02:05.979058 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 05:02:06.042425 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 05:02:06.043085 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 05:02:06.725093 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 05:02:06.737505 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 05:02:07.353219 dockerd[1820]: time="2025-01-30T05:02:07.353112013Z" level=info msg="Starting up" Jan 30 05:02:07.529299 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3135654577-merged.mount: Deactivated successfully. Jan 30 05:02:07.663730 systemd[1]: var-lib-docker-metacopy\x2dcheck2324781923-merged.mount: Deactivated successfully. Jan 30 05:02:07.696151 dockerd[1820]: time="2025-01-30T05:02:07.696100592Z" level=info msg="Loading containers: start." Jan 30 05:02:07.883595 kernel: Initializing XFRM netlink socket Jan 30 05:02:07.995264 systemd-networkd[1224]: docker0: Link UP Jan 30 05:02:08.028869 dockerd[1820]: time="2025-01-30T05:02:08.028793143Z" level=info msg="Loading containers: done." 
Jan 30 05:02:08.097916 dockerd[1820]: time="2025-01-30T05:02:08.097846937Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 05:02:08.098144 dockerd[1820]: time="2025-01-30T05:02:08.098017334Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 05:02:08.098203 dockerd[1820]: time="2025-01-30T05:02:08.098177719Z" level=info msg="Daemon has completed initialization" Jan 30 05:02:08.349105 dockerd[1820]: time="2025-01-30T05:02:08.348767054Z" level=info msg="API listen on /run/docker.sock" Jan 30 05:02:08.350066 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 05:02:09.707547 containerd[1585]: time="2025-01-30T05:02:09.707416537Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 05:02:09.753638 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 05:02:09.765024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:02:09.961225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:02:09.978363 (kubelet)[1982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:02:10.072877 kubelet[1982]: E0130 05:02:10.072815 1982 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:02:10.080773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:02:10.081094 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:02:10.453927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3129667124.mount: Deactivated successfully. 
Jan 30 05:02:12.485857 containerd[1585]: time="2025-01-30T05:02:12.485770370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:12.489501 containerd[1585]: time="2025-01-30T05:02:12.489412093Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 05:02:12.493289 containerd[1585]: time="2025-01-30T05:02:12.493198639Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:12.500768 containerd[1585]: time="2025-01-30T05:02:12.500638445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:12.504480 containerd[1585]: time="2025-01-30T05:02:12.502663597Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.795142546s" Jan 30 05:02:12.504480 containerd[1585]: time="2025-01-30T05:02:12.502727619Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 05:02:12.544117 containerd[1585]: time="2025-01-30T05:02:12.543780843Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 05:02:14.504925 containerd[1585]: time="2025-01-30T05:02:14.504811159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:14.509979 containerd[1585]: time="2025-01-30T05:02:14.509901613Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 05:02:14.512844 containerd[1585]: time="2025-01-30T05:02:14.512786885Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:14.522793 containerd[1585]: time="2025-01-30T05:02:14.522676926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:14.525402 containerd[1585]: time="2025-01-30T05:02:14.525197974Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.981364815s" Jan 30 05:02:14.525402 containerd[1585]: time="2025-01-30T05:02:14.525267530Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 05:02:14.563611 
containerd[1585]: time="2025-01-30T05:02:14.563272275Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 05:02:15.853554 containerd[1585]: time="2025-01-30T05:02:15.853483024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:15.859681 containerd[1585]: time="2025-01-30T05:02:15.859607135Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 05:02:15.864095 containerd[1585]: time="2025-01-30T05:02:15.864013639Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:15.873535 containerd[1585]: time="2025-01-30T05:02:15.873430616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:15.875271 containerd[1585]: time="2025-01-30T05:02:15.875120580Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.311796239s" Jan 30 05:02:15.875271 containerd[1585]: time="2025-01-30T05:02:15.875165613Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 05:02:15.919645 containerd[1585]: time="2025-01-30T05:02:15.919229100Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 05:02:15.924328 systemd-resolved[1473]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 30 05:02:17.115407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256446935.mount: Deactivated successfully. 
Jan 30 05:02:17.676257 containerd[1585]: time="2025-01-30T05:02:17.676189302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:17.680623 containerd[1585]: time="2025-01-30T05:02:17.680509076Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 05:02:17.684943 containerd[1585]: time="2025-01-30T05:02:17.684859340Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:17.689854 containerd[1585]: time="2025-01-30T05:02:17.689796944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:17.691605 containerd[1585]: time="2025-01-30T05:02:17.691299359Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.771944572s" Jan 30 05:02:17.691605 containerd[1585]: time="2025-01-30T05:02:17.691375879Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 05:02:17.726181 containerd[1585]: time="2025-01-30T05:02:17.726137889Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 05:02:18.346054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480559676.mount: Deactivated successfully. Jan 30 05:02:19.016789 systemd-resolved[1473]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jan 30 05:02:19.491874 containerd[1585]: time="2025-01-30T05:02:19.491704259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:19.496039 containerd[1585]: time="2025-01-30T05:02:19.495951581Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 05:02:19.498178 containerd[1585]: time="2025-01-30T05:02:19.498134173Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:19.504532 containerd[1585]: time="2025-01-30T05:02:19.504444769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:19.506729 containerd[1585]: time="2025-01-30T05:02:19.506487917Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.780306259s" Jan 30 05:02:19.506729 containerd[1585]: time="2025-01-30T05:02:19.506549039Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 05:02:19.546692 containerd[1585]: time="2025-01-30T05:02:19.546626485Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 05:02:20.123318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 05:02:20.132877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:02:20.137512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955819797.mount: Deactivated successfully. 
Jan 30 05:02:20.157640 containerd[1585]: time="2025-01-30T05:02:20.157509585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:20.161712 containerd[1585]: time="2025-01-30T05:02:20.161648717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 05:02:20.164829 containerd[1585]: time="2025-01-30T05:02:20.164756420Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:20.174002 containerd[1585]: time="2025-01-30T05:02:20.173787951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:20.178194 containerd[1585]: time="2025-01-30T05:02:20.177845487Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 631.137479ms" Jan 30 05:02:20.178194 containerd[1585]: time="2025-01-30T05:02:20.177899731Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 05:02:20.237583 containerd[1585]: time="2025-01-30T05:02:20.237500004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 05:02:20.303818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:02:20.309621 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 05:02:20.378780 kubelet[2148]: E0130 05:02:20.377849 2148 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 05:02:20.383118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 05:02:20.383622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 05:02:20.779717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349739983.mount: Deactivated successfully. 
Jan 30 05:02:22.885801 containerd[1585]: time="2025-01-30T05:02:22.885721592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:22.888273 containerd[1585]: time="2025-01-30T05:02:22.888179219Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 05:02:22.892457 containerd[1585]: time="2025-01-30T05:02:22.892367944Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:22.900518 containerd[1585]: time="2025-01-30T05:02:22.900415368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:22.903209 containerd[1585]: time="2025-01-30T05:02:22.902949641Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.6653998s" Jan 30 05:02:22.903209 containerd[1585]: time="2025-01-30T05:02:22.903007404Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 05:02:25.840785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:02:25.851040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:02:25.880589 systemd[1]: Reloading requested from client PID 2267 ('systemctl') (unit session-7.scope)... Jan 30 05:02:25.880606 systemd[1]: Reloading... Jan 30 05:02:26.024630 zram_generator::config[2307]: No configuration found. Jan 30 05:02:26.189712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:02:26.270053 systemd[1]: Reloading finished in 389 ms. Jan 30 05:02:26.341824 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 05:02:26.341957 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 05:02:26.342377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:02:26.352148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:02:26.485774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:02:26.494092 (kubelet)[2370]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 05:02:26.559385 kubelet[2370]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:02:26.559977 kubelet[2370]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 05:02:26.559977 kubelet[2370]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:02:26.566593 kubelet[2370]: I0130 05:02:26.565855 2370 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:02:27.190942 kubelet[2370]: I0130 05:02:27.190883 2370 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 05:02:27.190942 kubelet[2370]: I0130 05:02:27.190927 2370 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:02:27.191266 kubelet[2370]: I0130 05:02:27.191240 2370 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 05:02:27.218322 kubelet[2370]: I0130 05:02:27.217899 2370 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:02:27.222169 kubelet[2370]: E0130 05:02:27.222091 2370 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://137.184.120.173:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:27.244886 kubelet[2370]: I0130 05:02:27.244818 2370 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 05:02:27.248858 kubelet[2370]: I0130 05:02:27.248721 2370 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:02:27.249267 kubelet[2370]: I0130 05:02:27.248838 2370 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-d-47de560844","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 05:02:27.250049 kubelet[2370]: I0130 05:02:27.249543 2370 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 30 05:02:27.250049 kubelet[2370]: I0130 05:02:27.249594 2370 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 05:02:27.250049 kubelet[2370]: I0130 05:02:27.249795 2370 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:02:27.251203 kubelet[2370]: I0130 05:02:27.250916 2370 kubelet.go:400] "Attempting to sync node with API server" Jan 30 05:02:27.251203 kubelet[2370]: I0130 05:02:27.250945 2370 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 05:02:27.251203 kubelet[2370]: I0130 05:02:27.250979 2370 kubelet.go:312] "Adding apiserver pod source" Jan 30 05:02:27.251203 kubelet[2370]: I0130 05:02:27.251004 2370 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:02:27.255337 kubelet[2370]: W0130 05:02:27.255252 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.120.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-47de560844&limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:27.256000 kubelet[2370]: E0130 05:02:27.255647 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.120.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-47de560844&limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:27.256000 kubelet[2370]: W0130 05:02:27.255791 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.120.173:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:27.256000 kubelet[2370]: E0130 05:02:27.255843 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.120.173:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:27.256830 kubelet[2370]: I0130 05:02:27.256450 2370 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 05:02:27.260080 kubelet[2370]: I0130 05:02:27.259268 2370 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:02:27.260080 kubelet[2370]: W0130 05:02:27.259387 2370 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 05:02:27.261225 kubelet[2370]: I0130 05:02:27.261197 2370 server.go:1264] "Started kubelet" Jan 30 05:02:27.267695 kubelet[2370]: I0130 05:02:27.266953 2370 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:02:27.268400 kubelet[2370]: I0130 05:02:27.268351 2370 server.go:455] "Adding debug handlers to kubelet server" Jan 30 05:02:27.270139 kubelet[2370]: I0130 05:02:27.269799 2370 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:02:27.270405 kubelet[2370]: I0130 05:02:27.270351 2370 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:02:27.273247 kubelet[2370]: E0130 05:02:27.272660 2370 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.120.173:6443/api/v1/namespaces/default/events\": dial tcp 137.184.120.173:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-d-47de560844.181f5fd4735815f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-d-47de560844,UID:ci-4081.3.0-d-47de560844,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-d-47de560844,},FirstTimestamp:2025-01-30 05:02:27.261158904 +0000 UTC m=+0.760627222,LastTimestamp:2025-01-30 05:02:27.261158904 +0000 UTC m=+0.760627222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-d-47de560844,}" Jan 30 05:02:27.273643 kubelet[2370]: I0130 05:02:27.273619 2370 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:02:27.282301 kubelet[2370]: E0130 05:02:27.282256 2370 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 05:02:27.282611 kubelet[2370]: E0130 05:02:27.282588 2370 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-d-47de560844\" not found" Jan 30 05:02:27.282761 kubelet[2370]: I0130 05:02:27.282748 2370 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 05:02:27.282997 kubelet[2370]: I0130 05:02:27.282977 2370 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 05:02:27.283162 kubelet[2370]: I0130 05:02:27.283150 2370 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:02:27.285271 kubelet[2370]: W0130 05:02:27.285201 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.120.173:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:27.285456 kubelet[2370]: E0130 05:02:27.285436 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.120.173:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:27.286504 kubelet[2370]: E0130 05:02:27.286463 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.120.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-47de560844?timeout=10s\": dial tcp 137.184.120.173:6443: connect: connection refused" interval="200ms" Jan 30 05:02:27.287344 kubelet[2370]: I0130 05:02:27.287316 2370 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:02:27.287547 kubelet[2370]: I0130 05:02:27.287522 2370 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 05:02:27.289653 kubelet[2370]: I0130 05:02:27.289243 2370 factory.go:221] Registration of the containerd container factory successfully Jan 30 05:02:27.305359 kubelet[2370]: I0130 05:02:27.305293 2370 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:02:27.308740 kubelet[2370]: I0130 05:02:27.308701 2370 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 05:02:27.309370 kubelet[2370]: I0130 05:02:27.308926 2370 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 05:02:27.309370 kubelet[2370]: I0130 05:02:27.308962 2370 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 05:02:27.309370 kubelet[2370]: E0130 05:02:27.309031 2370 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:02:27.334247 kubelet[2370]: W0130 05:02:27.334119 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.120.173:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:27.334247 kubelet[2370]: E0130 05:02:27.334256 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.120.173:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:27.347084 kubelet[2370]: I0130 05:02:27.347048 2370 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 05:02:27.347084 kubelet[2370]: I0130 05:02:27.347074 2370 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 05:02:27.347308 kubelet[2370]: I0130 05:02:27.347107 2370 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:02:27.351321 kubelet[2370]: I0130 05:02:27.351256 2370 policy_none.go:49] "None policy: Start" Jan 30 05:02:27.352811 kubelet[2370]: I0130 05:02:27.352782 2370 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 05:02:27.353612 kubelet[2370]: I0130 05:02:27.353099 2370 state_mem.go:35] "Initializing new in-memory state store" Jan 30 05:02:27.364634 kubelet[2370]: I0130 05:02:27.364595 2370 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:02:27.365941 kubelet[2370]: I0130 05:02:27.365083 2370 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:02:27.365941 kubelet[2370]: I0130 05:02:27.365243 2370 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:02:27.369290 kubelet[2370]: E0130 05:02:27.369263 2370 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-d-47de560844\" not found" Jan 30 05:02:27.385384 kubelet[2370]: I0130 05:02:27.385332 2370 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:27.386279 kubelet[2370]: E0130 05:02:27.386235 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.120.173:6443/api/v1/nodes\": dial tcp 137.184.120.173:6443: connect: connection refused" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:27.409794 kubelet[2370]: I0130 05:02:27.409711 2370 topology_manager.go:215] "Topology Admit Handler" podUID="9de225daf33f0a454b78d00bccbcdaeb" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.411534 kubelet[2370]: I0130 05:02:27.411491 2370 topology_manager.go:215] "Topology Admit Handler" podUID="0eabf34e653659506e63d4683dbde737" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.414994 kubelet[2370]: I0130 05:02:27.414815 
2370 topology_manager.go:215] "Topology Admit Handler" podUID="369ef9c5fd18b23a4f93830565ccd349" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.484485 kubelet[2370]: I0130 05:02:27.484341 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0eabf34e653659506e63d4683dbde737-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-d-47de560844\" (UID: \"0eabf34e653659506e63d4683dbde737\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.484978 kubelet[2370]: I0130 05:02:27.484729 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0eabf34e653659506e63d4683dbde737-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-d-47de560844\" (UID: \"0eabf34e653659506e63d4683dbde737\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.484978 kubelet[2370]: I0130 05:02:27.484777 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9de225daf33f0a454b78d00bccbcdaeb-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-d-47de560844\" (UID: \"9de225daf33f0a454b78d00bccbcdaeb\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.484978 kubelet[2370]: I0130 05:02:27.484807 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9de225daf33f0a454b78d00bccbcdaeb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-d-47de560844\" (UID: \"9de225daf33f0a454b78d00bccbcdaeb\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.484978 kubelet[2370]: I0130 05:02:27.484836 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0eabf34e653659506e63d4683dbde737-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-47de560844\" (UID: \"0eabf34e653659506e63d4683dbde737\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.484978 kubelet[2370]: I0130 05:02:27.484859 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0eabf34e653659506e63d4683dbde737-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-47de560844\" (UID: \"0eabf34e653659506e63d4683dbde737\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.485323 kubelet[2370]: I0130 05:02:27.484884 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9de225daf33f0a454b78d00bccbcdaeb-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-d-47de560844\" (UID: \"9de225daf33f0a454b78d00bccbcdaeb\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.485323 kubelet[2370]: I0130 05:02:27.484909 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0eabf34e653659506e63d4683dbde737-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-d-47de560844\" (UID: \"0eabf34e653659506e63d4683dbde737\") 
" pod="kube-system/kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.485323 kubelet[2370]: I0130 05:02:27.484934 2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/369ef9c5fd18b23a4f93830565ccd349-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-d-47de560844\" (UID: \"369ef9c5fd18b23a4f93830565ccd349\") " pod="kube-system/kube-scheduler-ci-4081.3.0-d-47de560844" Jan 30 05:02:27.487758 kubelet[2370]: E0130 05:02:27.487628 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.120.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-47de560844?timeout=10s\": dial tcp 137.184.120.173:6443: connect: connection refused" interval="400ms" Jan 30 05:02:27.587646 kubelet[2370]: I0130 05:02:27.587605 2370 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:27.588508 kubelet[2370]: E0130 05:02:27.588158 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.120.173:6443/api/v1/nodes\": dial tcp 137.184.120.173:6443: connect: connection refused" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:27.717229 kubelet[2370]: E0130 05:02:27.717133 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:27.718164 containerd[1585]: time="2025-01-30T05:02:27.718077632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-d-47de560844,Uid:9de225daf33f0a454b78d00bccbcdaeb,Namespace:kube-system,Attempt:0,}" Jan 30 05:02:27.724282 systemd-resolved[1473]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jan 30 05:02:27.726441 kubelet[2370]: E0130 05:02:27.726177 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:27.726441 kubelet[2370]: E0130 05:02:27.726346 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:27.731768 containerd[1585]: time="2025-01-30T05:02:27.731715185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-d-47de560844,Uid:0eabf34e653659506e63d4683dbde737,Namespace:kube-system,Attempt:0,}" Jan 30 05:02:27.732238 containerd[1585]: time="2025-01-30T05:02:27.731716906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-d-47de560844,Uid:369ef9c5fd18b23a4f93830565ccd349,Namespace:kube-system,Attempt:0,}" Jan 30 05:02:27.888287 kubelet[2370]: E0130 05:02:27.888140 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.120.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-47de560844?timeout=10s\": dial tcp 137.184.120.173:6443: connect: connection refused" interval="800ms" Jan 30 05:02:27.990456 kubelet[2370]: I0130 05:02:27.990346 2370 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:27.990804 kubelet[2370]: E0130 05:02:27.990773 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.120.173:6443/api/v1/nodes\": dial tcp 137.184.120.173:6443: connect: connection refused" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:28.183651 kubelet[2370]: W0130 05:02:28.183392 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://137.184.120.173:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:28.183651 kubelet[2370]: E0130 05:02:28.183540 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://137.184.120.173:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:28.310859 kubelet[2370]: W0130 05:02:28.310772 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://137.184.120.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-47de560844&limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:28.311005 kubelet[2370]: E0130 05:02:28.310881 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://137.184.120.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-d-47de560844&limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:28.320090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132705305.mount: Deactivated successfully. 
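The three sandboxes requested above are not created through the API server (which is still refusing connections) but from static pod manifests under /etc/kubernetes/manifests, the path the kubelet registered earlier. The hostPath volumes the reconciler attached a moment ago map to manifest entries of roughly this shape; this is an illustrative excerpt, not the manifest kubeadm actually generated on this node:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.1   # version assumed to match the v1.30.1 kubelet
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
        - name: ca-certs
          mountPath: /etc/ssl/certs
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
      - name: ca-certs
        hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate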
Jan 30 05:02:28.341348 containerd[1585]: time="2025-01-30T05:02:28.341255274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:02:28.347049 containerd[1585]: time="2025-01-30T05:02:28.346982469Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:02:28.351625 containerd[1585]: time="2025-01-30T05:02:28.350522846Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:02:28.354481 containerd[1585]: time="2025-01-30T05:02:28.354408769Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:02:28.359759 containerd[1585]: time="2025-01-30T05:02:28.359625190Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 05:02:28.361257 containerd[1585]: time="2025-01-30T05:02:28.361052120Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:02:28.365735 containerd[1585]: time="2025-01-30T05:02:28.365680342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 05:02:28.367812 containerd[1585]: time="2025-01-30T05:02:28.366758040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.742777ms" Jan 30 05:02:28.370025 containerd[1585]: time="2025-01-30T05:02:28.369972291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 05:02:28.381087 containerd[1585]: time="2025-01-30T05:02:28.381025440Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 648.892294ms" Jan 30 05:02:28.383038 containerd[1585]: time="2025-01-30T05:02:28.382987035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 664.791576ms" Jan 30 05:02:28.393185 kubelet[2370]: W0130 05:02:28.393128 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://137.184.120.173:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:28.393478 
kubelet[2370]: E0130 05:02:28.393450 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://137.184.120.173:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:28.560559 kubelet[2370]: W0130 05:02:28.558288 2370 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://137.184.120.173:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:28.560559 kubelet[2370]: E0130 05:02:28.558379 2370 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://137.184.120.173:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:28.622379 containerd[1585]: time="2025-01-30T05:02:28.611715283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:28.622379 containerd[1585]: time="2025-01-30T05:02:28.615233603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:28.622379 containerd[1585]: time="2025-01-30T05:02:28.615274757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:28.622379 containerd[1585]: time="2025-01-30T05:02:28.615430294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:28.650444 containerd[1585]: time="2025-01-30T05:02:28.650257991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:28.651471 containerd[1585]: time="2025-01-30T05:02:28.650768107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:28.651471 containerd[1585]: time="2025-01-30T05:02:28.650813281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:28.651471 containerd[1585]: time="2025-01-30T05:02:28.650994315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:28.678740 containerd[1585]: time="2025-01-30T05:02:28.678500049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:28.680548 containerd[1585]: time="2025-01-30T05:02:28.678605703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:28.683747 containerd[1585]: time="2025-01-30T05:02:28.680468954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:28.685621 containerd[1585]: time="2025-01-30T05:02:28.685420179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:28.691890 kubelet[2370]: E0130 05:02:28.691826 2370 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.120.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-d-47de560844?timeout=10s\": dial tcp 137.184.120.173:6443: connect: connection refused" interval="1.6s" Jan 30 05:02:28.768526 containerd[1585]: time="2025-01-30T05:02:28.767876096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-d-47de560844,Uid:0eabf34e653659506e63d4683dbde737,Namespace:kube-system,Attempt:0,} returns sandbox id \"51033b81e18febada2a22cd039ebe13e2eb5b5d0ca661d06e3cd99afb5cb4182\"" Jan 30 05:02:28.773331 kubelet[2370]: E0130 05:02:28.772625 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:28.791589 containerd[1585]: time="2025-01-30T05:02:28.787430057Z" level=info msg="CreateContainer within sandbox \"51033b81e18febada2a22cd039ebe13e2eb5b5d0ca661d06e3cd99afb5cb4182\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 05:02:28.792425 kubelet[2370]: I0130 05:02:28.792386 2370 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:28.792879 kubelet[2370]: E0130 05:02:28.792849 2370 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://137.184.120.173:6443/api/v1/nodes\": dial tcp 137.184.120.173:6443: connect: connection refused" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:28.812126 containerd[1585]: time="2025-01-30T05:02:28.811872972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-d-47de560844,Uid:9de225daf33f0a454b78d00bccbcdaeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"22d9810588a8ce1628aaa0bc88ea1e75477fcb4764359b1bf52b4323a7665ae0\"" Jan 30 05:02:28.813150 kubelet[2370]: E0130 05:02:28.813054 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:28.819250 containerd[1585]: time="2025-01-30T05:02:28.819192051Z" level=info msg="CreateContainer within sandbox \"22d9810588a8ce1628aaa0bc88ea1e75477fcb4764359b1bf52b4323a7665ae0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 05:02:28.823789 containerd[1585]: time="2025-01-30T05:02:28.823714595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-d-47de560844,Uid:369ef9c5fd18b23a4f93830565ccd349,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7b8021248ae0986314c323f2a48513a95e8ea9585cac3d9d7ca447f2eac21c0\"" Jan 30 05:02:28.825901 kubelet[2370]: E0130 05:02:28.825851 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:28.829203 containerd[1585]: time="2025-01-30T05:02:28.829150965Z" level=info msg="CreateContainer within sandbox \"a7b8021248ae0986314c323f2a48513a95e8ea9585cac3d9d7ca447f2eac21c0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 05:02:28.849493 containerd[1585]: time="2025-01-30T05:02:28.849420084Z" level=info msg="CreateContainer within sandbox 
\"51033b81e18febada2a22cd039ebe13e2eb5b5d0ca661d06e3cd99afb5cb4182\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6dcdede5938cd78182f24fcc1d768138cc92adadb024ab615c96012f2d92aced\"" Jan 30 05:02:28.851132 containerd[1585]: time="2025-01-30T05:02:28.851084907Z" level=info msg="StartContainer for \"6dcdede5938cd78182f24fcc1d768138cc92adadb024ab615c96012f2d92aced\"" Jan 30 05:02:28.876081 containerd[1585]: time="2025-01-30T05:02:28.875908697Z" level=info msg="CreateContainer within sandbox \"22d9810588a8ce1628aaa0bc88ea1e75477fcb4764359b1bf52b4323a7665ae0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"217abc6cfc2ec66dd5327192ac0171422e8aef4c9d744d14a4c5f320099c53e2\"" Jan 30 05:02:28.877589 containerd[1585]: time="2025-01-30T05:02:28.877163080Z" level=info msg="StartContainer for \"217abc6cfc2ec66dd5327192ac0171422e8aef4c9d744d14a4c5f320099c53e2\"" Jan 30 05:02:28.904605 containerd[1585]: time="2025-01-30T05:02:28.904498669Z" level=info msg="CreateContainer within sandbox \"a7b8021248ae0986314c323f2a48513a95e8ea9585cac3d9d7ca447f2eac21c0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a42e26cb355189ffbd9527016791d2a80441d7484e40bbc91f42c274f3a3687c\"" Jan 30 05:02:28.907258 containerd[1585]: time="2025-01-30T05:02:28.906973278Z" level=info msg="StartContainer for \"a42e26cb355189ffbd9527016791d2a80441d7484e40bbc91f42c274f3a3687c\"" Jan 30 05:02:29.042427 containerd[1585]: time="2025-01-30T05:02:29.042359790Z" level=info msg="StartContainer for \"6dcdede5938cd78182f24fcc1d768138cc92adadb024ab615c96012f2d92aced\" returns successfully" Jan 30 05:02:29.099121 containerd[1585]: time="2025-01-30T05:02:29.098969702Z" level=info msg="StartContainer for \"a42e26cb355189ffbd9527016791d2a80441d7484e40bbc91f42c274f3a3687c\" returns successfully" Jan 30 05:02:29.109518 containerd[1585]: time="2025-01-30T05:02:29.108778970Z" level=info msg="StartContainer for \"217abc6cfc2ec66dd5327192ac0171422e8aef4c9d744d14a4c5f320099c53e2\" returns successfully" Jan 30 05:02:29.251603 kubelet[2370]: E0130 05:02:29.250417 2370 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://137.184.120.173:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 137.184.120.173:6443: connect: connection refused Jan 30 05:02:29.372633 kubelet[2370]: E0130 05:02:29.372389 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:29.387960 kubelet[2370]: E0130 05:02:29.387907 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:29.400594 kubelet[2370]: E0130 05:02:29.399207 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:30.398240 kubelet[2370]: I0130 05:02:30.398147 2370 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:30.405372 kubelet[2370]: E0130 05:02:30.405318 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:31.342260 kubelet[2370]: E0130 05:02:31.342000 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:31.835023 kubelet[2370]: E0130 05:02:31.834877 2370 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-d-47de560844\" not found" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:32.015783 kubelet[2370]: I0130 05:02:32.015667 2370 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:32.254449 kubelet[2370]: I0130 05:02:32.254214 2370 apiserver.go:52] "Watching apiserver" Jan 30 05:02:32.284332 kubelet[2370]: I0130 05:02:32.284261 2370 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 05:02:33.955699 systemd[1]: Reloading requested from client PID 2642 ('systemctl') (unit session-7.scope)... Jan 30 05:02:33.955720 systemd[1]: Reloading... Jan 30 05:02:34.053286 kubelet[2370]: W0130 05:02:34.052874 2370 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:02:34.056309 kubelet[2370]: E0130 05:02:34.056140 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:34.094678 zram_generator::config[2684]: No configuration found. Jan 30 05:02:34.313258 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 05:02:34.412436 kubelet[2370]: E0130 05:02:34.412341 2370 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:34.441130 systemd[1]: Reloading finished in 484 ms. Jan 30 05:02:34.487575 kubelet[2370]: E0130 05:02:34.487260 2370 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.3.0-d-47de560844.181f5fd4735815f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-d-47de560844,UID:ci-4081.3.0-d-47de560844,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-d-47de560844,},FirstTimestamp:2025-01-30 05:02:27.261158904 +0000 UTC m=+0.760627222,LastTimestamp:2025-01-30 05:02:27.261158904 +0000 UTC m=+0.760627222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-d-47de560844,}" Jan 30 05:02:34.487491 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:02:34.503536 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 05:02:34.504263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 05:02:34.514198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 05:02:34.667741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
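With the node registered and the kubelet restarted on its final configuration, the usual way to confirm the control plane actually came up is to query it from the node itself; a sketch, assuming the standard kubeadm admin kubeconfig location:

    export KUBECONFIG=/etc/kubernetes/admin.conf   # assumed path; written by kubeadm init
    kubectl get nodes -o wide
    kubectl get pods -n kube-system
    journalctl -u kubelet --since "10 minutes ago" --no-pager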
Jan 30 05:02:34.680263 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 05:02:34.777629 kubelet[2742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:02:34.779061 kubelet[2742]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 05:02:34.779061 kubelet[2742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 05:02:34.779061 kubelet[2742]: I0130 05:02:34.778243 2742 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 05:02:34.785162 kubelet[2742]: I0130 05:02:34.785118 2742 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 05:02:34.785162 kubelet[2742]: I0130 05:02:34.785152 2742 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 05:02:34.785434 kubelet[2742]: I0130 05:02:34.785419 2742 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 05:02:34.787933 kubelet[2742]: I0130 05:02:34.787741 2742 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 05:02:34.790425 kubelet[2742]: I0130 05:02:34.790368 2742 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 05:02:34.798965 kubelet[2742]: I0130 05:02:34.798927 2742 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 05:02:34.801777 kubelet[2742]: I0130 05:02:34.801694 2742 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 05:02:34.801955 kubelet[2742]: I0130 05:02:34.801759 2742 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-d-47de560844","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 05:02:34.802139 kubelet[2742]: I0130 05:02:34.801973 2742 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 05:02:34.802139 kubelet[2742]: I0130 05:02:34.801985 2742 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 05:02:34.802139 kubelet[2742]: I0130 05:02:34.802053 2742 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:02:34.803221 kubelet[2742]: I0130 05:02:34.802171 2742 kubelet.go:400] "Attempting to sync node with API server" Jan 30 05:02:34.803221 kubelet[2742]: I0130 05:02:34.802184 2742 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 05:02:34.803221 kubelet[2742]: I0130 05:02:34.802208 2742 kubelet.go:312] "Adding apiserver pod source" Jan 30 05:02:34.803221 kubelet[2742]: I0130 05:02:34.802226 2742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 05:02:34.814667 kubelet[2742]: I0130 05:02:34.814627 2742 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 05:02:34.816616 kubelet[2742]: I0130 05:02:34.816334 2742 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 05:02:34.817579 kubelet[2742]: I0130 05:02:34.816902 2742 server.go:1264] "Started kubelet" Jan 30 05:02:34.821686 kubelet[2742]: I0130 05:02:34.821654 2742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 05:02:34.829836 kubelet[2742]: I0130 05:02:34.829783 2742 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 05:02:34.830993 kubelet[2742]: I0130 05:02:34.830969 2742 server.go:455] "Adding 
debug handlers to kubelet server" Jan 30 05:02:34.832950 kubelet[2742]: I0130 05:02:34.832913 2742 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 05:02:34.833200 kubelet[2742]: I0130 05:02:34.832922 2742 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 05:02:34.834621 kubelet[2742]: I0130 05:02:34.834594 2742 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 05:02:34.836929 kubelet[2742]: I0130 05:02:34.836897 2742 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 05:02:34.837144 kubelet[2742]: I0130 05:02:34.837083 2742 reconciler.go:26] "Reconciler: start to sync state" Jan 30 05:02:34.839791 kubelet[2742]: E0130 05:02:34.839720 2742 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 05:02:34.842361 kubelet[2742]: I0130 05:02:34.841252 2742 factory.go:221] Registration of the systemd container factory successfully Jan 30 05:02:34.842361 kubelet[2742]: I0130 05:02:34.842266 2742 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 05:02:34.842548 kubelet[2742]: I0130 05:02:34.842469 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 05:02:34.847182 kubelet[2742]: I0130 05:02:34.846359 2742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 05:02:34.847182 kubelet[2742]: I0130 05:02:34.846427 2742 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 05:02:34.847182 kubelet[2742]: I0130 05:02:34.846473 2742 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 05:02:34.847182 kubelet[2742]: E0130 05:02:34.846542 2742 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 05:02:34.855653 kubelet[2742]: I0130 05:02:34.855620 2742 factory.go:221] Registration of the containerd container factory successfully Jan 30 05:02:34.943274 kubelet[2742]: I0130 05:02:34.943172 2742 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:34.945618 kubelet[2742]: I0130 05:02:34.944423 2742 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 05:02:34.945618 kubelet[2742]: I0130 05:02:34.944444 2742 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 05:02:34.945618 kubelet[2742]: I0130 05:02:34.944485 2742 state_mem.go:36] "Initialized new in-memory state store" Jan 30 05:02:34.946636 kubelet[2742]: I0130 05:02:34.946486 2742 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 05:02:34.946841 kubelet[2742]: I0130 05:02:34.946522 2742 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 05:02:34.946841 kubelet[2742]: I0130 05:02:34.946840 2742 policy_none.go:49] "None policy: Start" Jan 30 05:02:34.948468 kubelet[2742]: E0130 05:02:34.947158 2742 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 05:02:34.953081 kubelet[2742]: I0130 05:02:34.951912 2742 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 05:02:34.959313 kubelet[2742]: I0130 05:02:34.953317 2742 state_mem.go:35] "Initializing new 
in-memory state store" Jan 30 05:02:34.959313 kubelet[2742]: I0130 05:02:34.953435 2742 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:34.959313 kubelet[2742]: I0130 05:02:34.953503 2742 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-d-47de560844" Jan 30 05:02:34.959313 kubelet[2742]: I0130 05:02:34.953578 2742 state_mem.go:75] "Updated machine memory state" Jan 30 05:02:34.959313 kubelet[2742]: I0130 05:02:34.955223 2742 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 05:02:34.959313 kubelet[2742]: I0130 05:02:34.955444 2742 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 05:02:34.961982 kubelet[2742]: I0130 05:02:34.961951 2742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 05:02:35.148138 kubelet[2742]: I0130 05:02:35.148069 2742 topology_manager.go:215] "Topology Admit Handler" podUID="9de225daf33f0a454b78d00bccbcdaeb" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.148315 kubelet[2742]: I0130 05:02:35.148226 2742 topology_manager.go:215] "Topology Admit Handler" podUID="0eabf34e653659506e63d4683dbde737" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.148366 kubelet[2742]: I0130 05:02:35.148332 2742 topology_manager.go:215] "Topology Admit Handler" podUID="369ef9c5fd18b23a4f93830565ccd349" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.157682 kubelet[2742]: W0130 05:02:35.156961 2742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:02:35.157682 kubelet[2742]: W0130 05:02:35.157012 2742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:02:35.157682 kubelet[2742]: E0130 05:02:35.157064 2742 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-d-47de560844\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.158073 kubelet[2742]: W0130 05:02:35.157989 2742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:02:35.235543 systemd[1]: Started sshd@7-137.184.120.173:22-218.92.0.157:57813.service - OpenSSH per-connection server daemon (218.92.0.157:57813). 
Jan 30 05:02:35.240817 kubelet[2742]: I0130 05:02:35.240773 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/369ef9c5fd18b23a4f93830565ccd349-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-d-47de560844\" (UID: \"369ef9c5fd18b23a4f93830565ccd349\") " pod="kube-system/kube-scheduler-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.241285 kubelet[2742]: I0130 05:02:35.241077 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9de225daf33f0a454b78d00bccbcdaeb-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-d-47de560844\" (UID: \"9de225daf33f0a454b78d00bccbcdaeb\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.241285 kubelet[2742]: I0130 05:02:35.241105 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0eabf34e653659506e63d4683dbde737-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-47de560844\" (UID: \"0eabf34e653659506e63d4683dbde737\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.241285 kubelet[2742]: I0130 05:02:35.241148 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0eabf34e653659506e63d4683dbde737-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-d-47de560844\" (UID: \"0eabf34e653659506e63d4683dbde737\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.241285 kubelet[2742]: I0130 05:02:35.241165 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0eabf34e653659506e63d4683dbde737-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-d-47de560844\" (UID: \"0eabf34e653659506e63d4683dbde737\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.241285 kubelet[2742]: I0130 05:02:35.241181 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0eabf34e653659506e63d4683dbde737-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-d-47de560844\" (UID: \"0eabf34e653659506e63d4683dbde737\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.241552 kubelet[2742]: I0130 05:02:35.241226 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0eabf34e653659506e63d4683dbde737-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-d-47de560844\" (UID: \"0eabf34e653659506e63d4683dbde737\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.241552 kubelet[2742]: I0130 05:02:35.241243 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9de225daf33f0a454b78d00bccbcdaeb-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-d-47de560844\" (UID: \"9de225daf33f0a454b78d00bccbcdaeb\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.241552 kubelet[2742]: I0130 05:02:35.241365 2742 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9de225daf33f0a454b78d00bccbcdaeb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-d-47de560844\" (UID: \"9de225daf33f0a454b78d00bccbcdaeb\") " pod="kube-system/kube-apiserver-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.461197 kubelet[2742]: E0130 05:02:35.459206 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:35.461197 kubelet[2742]: E0130 05:02:35.459466 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:35.461197 kubelet[2742]: E0130 05:02:35.459781 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:35.802917 kubelet[2742]: I0130 05:02:35.802673 2742 apiserver.go:52] "Watching apiserver" Jan 30 05:02:35.837731 kubelet[2742]: I0130 05:02:35.837676 2742 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 05:02:35.902132 kubelet[2742]: E0130 05:02:35.902096 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:35.903112 kubelet[2742]: E0130 05:02:35.902467 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:35.951022 kubelet[2742]: W0130 05:02:35.950230 2742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 05:02:35.951022 kubelet[2742]: E0130 05:02:35.950358 2742 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-d-47de560844\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-d-47de560844" Jan 30 05:02:35.951022 kubelet[2742]: E0130 05:02:35.951017 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:36.021593 kubelet[2742]: I0130 05:02:36.019013 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-d-47de560844" podStartSLOduration=1.018994208 podStartE2EDuration="1.018994208s" podCreationTimestamp="2025-01-30 05:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:36.016862371 +0000 UTC m=+1.325738215" watchObservedRunningTime="2025-01-30 05:02:36.018994208 +0000 UTC m=+1.327870124" Jan 30 05:02:36.087506 kubelet[2742]: I0130 05:02:36.087342 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-d-47de560844" podStartSLOduration=2.087317329 podStartE2EDuration="2.087317329s" podCreationTimestamp="2025-01-30 05:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:36.059764737 +0000 UTC m=+1.368640582" watchObservedRunningTime="2025-01-30 05:02:36.087317329 +0000 UTC m=+1.396193165" Jan 30 05:02:36.119663 kubelet[2742]: I0130 05:02:36.118524 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-d-47de560844" podStartSLOduration=1.11850448 podStartE2EDuration="1.11850448s" podCreationTimestamp="2025-01-30 05:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:36.08915157 +0000 UTC m=+1.398027411" watchObservedRunningTime="2025-01-30 05:02:36.11850448 +0000 UTC m=+1.427380324" Jan 30 05:02:36.576218 sshd[2780]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Jan 30 05:02:36.903936 kubelet[2742]: E0130 05:02:36.903871 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:38.466555 sshd[2776]: PAM: Permission denied for root from 218.92.0.157 Jan 30 05:02:38.758545 sshd[2800]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Jan 30 05:02:40.846665 kubelet[2742]: E0130 05:02:40.846594 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:40.914833 kubelet[2742]: E0130 05:02:40.914446 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:40.923841 sshd[2776]: PAM: Permission denied for root from 218.92.0.157 Jan 30 05:02:40.998029 sudo[1803]: pam_unix(sudo:session): session closed for user root Jan 30 05:02:41.003070 sshd[1796]: pam_unix(sshd:session): session closed for user core Jan 30 05:02:41.008838 systemd[1]: sshd@6-137.184.120.173:22-147.75.109.163:54380.service: Deactivated successfully. Jan 30 05:02:41.014052 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit. Jan 30 05:02:41.014166 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 05:02:41.016223 systemd-logind[1561]: Removed session 7. Jan 30 05:02:41.214839 sshd[2820]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Jan 30 05:02:41.326809 kubelet[2742]: E0130 05:02:41.324899 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:41.916167 kubelet[2742]: E0130 05:02:41.916097 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:42.300746 update_engine[1564]: I20250130 05:02:42.300386 1564 update_attempter.cc:509] Updating boot flags... 
Jan 30 05:02:42.352357 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2828) Jan 30 05:02:42.421625 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2831) Jan 30 05:02:43.124423 sshd[2776]: PAM: Permission denied for root from 218.92.0.157 Jan 30 05:02:43.269644 sshd[2776]: Received disconnect from 218.92.0.157 port 57813:11: [preauth] Jan 30 05:02:43.269644 sshd[2776]: Disconnected from authenticating user root 218.92.0.157 port 57813 [preauth] Jan 30 05:02:43.270832 systemd[1]: sshd@7-137.184.120.173:22-218.92.0.157:57813.service: Deactivated successfully. Jan 30 05:02:44.856369 kubelet[2742]: E0130 05:02:44.855908 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:44.922964 kubelet[2742]: E0130 05:02:44.922114 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:50.245388 kubelet[2742]: I0130 05:02:50.245090 2742 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 05:02:50.248450 containerd[1585]: time="2025-01-30T05:02:50.248344536Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 05:02:50.251478 kubelet[2742]: I0130 05:02:50.251443 2742 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 05:02:50.605502 kubelet[2742]: I0130 05:02:50.602066 2742 topology_manager.go:215] "Topology Admit Handler" podUID="0a17d04c-bfbc-4c34-846b-346079f2d8bc" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-2mwx4" Jan 30 05:02:50.673288 kubelet[2742]: I0130 05:02:50.673237 2742 topology_manager.go:215] "Topology Admit Handler" podUID="b95dfb1b-aa0e-476b-9730-046153a20a66" podNamespace="kube-system" podName="kube-proxy-48hb9" Jan 30 05:02:50.683966 kubelet[2742]: W0130 05:02:50.683890 2742 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.3.0-d-47de560844" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-d-47de560844' and this object Jan 30 05:02:50.684401 kubelet[2742]: E0130 05:02:50.684197 2742 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4081.3.0-d-47de560844" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-d-47de560844' and this object Jan 30 05:02:50.684401 kubelet[2742]: W0130 05:02:50.684326 2742 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.0-d-47de560844" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-d-47de560844' and this object Jan 30 05:02:50.684401 kubelet[2742]: E0130 05:02:50.684374 2742 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is 
forbidden: User "system:node:ci-4081.3.0-d-47de560844" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081.3.0-d-47de560844' and this object Jan 30 05:02:50.747315 kubelet[2742]: I0130 05:02:50.747107 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a17d04c-bfbc-4c34-846b-346079f2d8bc-var-lib-calico\") pod \"tigera-operator-7bc55997bb-2mwx4\" (UID: \"0a17d04c-bfbc-4c34-846b-346079f2d8bc\") " pod="tigera-operator/tigera-operator-7bc55997bb-2mwx4" Jan 30 05:02:50.747315 kubelet[2742]: I0130 05:02:50.747188 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84vgh\" (UniqueName: \"kubernetes.io/projected/0a17d04c-bfbc-4c34-846b-346079f2d8bc-kube-api-access-84vgh\") pod \"tigera-operator-7bc55997bb-2mwx4\" (UID: \"0a17d04c-bfbc-4c34-846b-346079f2d8bc\") " pod="tigera-operator/tigera-operator-7bc55997bb-2mwx4" Jan 30 05:02:50.847842 kubelet[2742]: I0130 05:02:50.847762 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b95dfb1b-aa0e-476b-9730-046153a20a66-kube-proxy\") pod \"kube-proxy-48hb9\" (UID: \"b95dfb1b-aa0e-476b-9730-046153a20a66\") " pod="kube-system/kube-proxy-48hb9" Jan 30 05:02:50.848003 kubelet[2742]: I0130 05:02:50.847863 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b95dfb1b-aa0e-476b-9730-046153a20a66-lib-modules\") pod \"kube-proxy-48hb9\" (UID: \"b95dfb1b-aa0e-476b-9730-046153a20a66\") " pod="kube-system/kube-proxy-48hb9" Jan 30 05:02:50.848003 kubelet[2742]: I0130 05:02:50.847893 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4bnn\" (UniqueName: \"kubernetes.io/projected/b95dfb1b-aa0e-476b-9730-046153a20a66-kube-api-access-g4bnn\") pod \"kube-proxy-48hb9\" (UID: \"b95dfb1b-aa0e-476b-9730-046153a20a66\") " pod="kube-system/kube-proxy-48hb9" Jan 30 05:02:50.848003 kubelet[2742]: I0130 05:02:50.847924 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b95dfb1b-aa0e-476b-9730-046153a20a66-xtables-lock\") pod \"kube-proxy-48hb9\" (UID: \"b95dfb1b-aa0e-476b-9730-046153a20a66\") " pod="kube-system/kube-proxy-48hb9" Jan 30 05:02:50.918545 containerd[1585]: time="2025-01-30T05:02:50.918412429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-2mwx4,Uid:0a17d04c-bfbc-4c34-846b-346079f2d8bc,Namespace:tigera-operator,Attempt:0,}" Jan 30 05:02:51.018913 containerd[1585]: time="2025-01-30T05:02:51.018692047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:51.018913 containerd[1585]: time="2025-01-30T05:02:51.018801930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:51.018913 containerd[1585]: time="2025-01-30T05:02:51.018821410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:51.019527 containerd[1585]: time="2025-01-30T05:02:51.018963062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:51.110146 containerd[1585]: time="2025-01-30T05:02:51.110052995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-2mwx4,Uid:0a17d04c-bfbc-4c34-846b-346079f2d8bc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8d6592b18260180990cccc73901cb2cd2379b20cf59e48260b00c2b53d200ab1\"" Jan 30 05:02:51.113963 containerd[1585]: time="2025-01-30T05:02:51.112902902Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 05:02:51.882488 kubelet[2742]: E0130 05:02:51.882075 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:51.884225 containerd[1585]: time="2025-01-30T05:02:51.883623501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-48hb9,Uid:b95dfb1b-aa0e-476b-9730-046153a20a66,Namespace:kube-system,Attempt:0,}" Jan 30 05:02:51.933973 containerd[1585]: time="2025-01-30T05:02:51.933846055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:51.934220 containerd[1585]: time="2025-01-30T05:02:51.933991391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:51.934220 containerd[1585]: time="2025-01-30T05:02:51.934024633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:51.934501 containerd[1585]: time="2025-01-30T05:02:51.934223344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:51.992530 containerd[1585]: time="2025-01-30T05:02:51.992477514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-48hb9,Uid:b95dfb1b-aa0e-476b-9730-046153a20a66,Namespace:kube-system,Attempt:0,} returns sandbox id \"d78bed348c1a1d306159b400f9705ba8e16a72ee0f73317a6f119c2c892ad36b\"" Jan 30 05:02:51.994154 kubelet[2742]: E0130 05:02:51.993448 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:51.998012 containerd[1585]: time="2025-01-30T05:02:51.997961690Z" level=info msg="CreateContainer within sandbox \"d78bed348c1a1d306159b400f9705ba8e16a72ee0f73317a6f119c2c892ad36b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 05:02:52.076930 containerd[1585]: time="2025-01-30T05:02:52.076837892Z" level=info msg="CreateContainer within sandbox \"d78bed348c1a1d306159b400f9705ba8e16a72ee0f73317a6f119c2c892ad36b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63b4ff07c7a17035b388ad2baf867afecd6cf1cfdcbe284efb7630a7f288a6df\"" Jan 30 05:02:52.079129 containerd[1585]: time="2025-01-30T05:02:52.078182181Z" level=info msg="StartContainer for \"63b4ff07c7a17035b388ad2baf867afecd6cf1cfdcbe284efb7630a7f288a6df\"" Jan 30 05:02:52.180107 containerd[1585]: time="2025-01-30T05:02:52.179980038Z" level=info msg="StartContainer for \"63b4ff07c7a17035b388ad2baf867afecd6cf1cfdcbe284efb7630a7f288a6df\" returns successfully" Jan 30 05:02:52.967600 kubelet[2742]: E0130 05:02:52.967457 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:53.195878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1724611785.mount: Deactivated successfully. 
Jan 30 05:02:53.865830 containerd[1585]: time="2025-01-30T05:02:53.865738909Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:53.869913 containerd[1585]: time="2025-01-30T05:02:53.869830906Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 05:02:53.873729 containerd[1585]: time="2025-01-30T05:02:53.873660065Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:53.879508 containerd[1585]: time="2025-01-30T05:02:53.879427932Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:02:53.881395 containerd[1585]: time="2025-01-30T05:02:53.881155431Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.767397827s" Jan 30 05:02:53.881395 containerd[1585]: time="2025-01-30T05:02:53.881210194Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 05:02:53.888849 containerd[1585]: time="2025-01-30T05:02:53.888783385Z" level=info msg="CreateContainer within sandbox \"8d6592b18260180990cccc73901cb2cd2379b20cf59e48260b00c2b53d200ab1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 05:02:53.924020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3247772191.mount: Deactivated successfully. Jan 30 05:02:53.941759 containerd[1585]: time="2025-01-30T05:02:53.941551157Z" level=info msg="CreateContainer within sandbox \"8d6592b18260180990cccc73901cb2cd2379b20cf59e48260b00c2b53d200ab1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"81665de087c4abc9e4292494430d2215a39f247e4b9043e3b51ba5333cbfe853\"" Jan 30 05:02:53.942387 containerd[1585]: time="2025-01-30T05:02:53.942344300Z" level=info msg="StartContainer for \"81665de087c4abc9e4292494430d2215a39f247e4b9043e3b51ba5333cbfe853\"" Jan 30 05:02:54.007680 systemd[1]: run-containerd-runc-k8s.io-81665de087c4abc9e4292494430d2215a39f247e4b9043e3b51ba5333cbfe853-runc.bZZXre.mount: Deactivated successfully. 
Jan 30 05:02:54.049289 containerd[1585]: time="2025-01-30T05:02:54.049241380Z" level=info msg="StartContainer for \"81665de087c4abc9e4292494430d2215a39f247e4b9043e3b51ba5333cbfe853\" returns successfully" Jan 30 05:02:54.869531 kubelet[2742]: I0130 05:02:54.869086 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-48hb9" podStartSLOduration=4.869065152 podStartE2EDuration="4.869065152s" podCreationTimestamp="2025-01-30 05:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:02:53.004932402 +0000 UTC m=+18.313808247" watchObservedRunningTime="2025-01-30 05:02:54.869065152 +0000 UTC m=+20.177941046" Jan 30 05:02:57.359630 kubelet[2742]: I0130 05:02:57.359129 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-2mwx4" podStartSLOduration=4.587799287 podStartE2EDuration="7.359088289s" podCreationTimestamp="2025-01-30 05:02:50 +0000 UTC" firstStartedPulling="2025-01-30 05:02:51.112090403 +0000 UTC m=+16.420966241" lastFinishedPulling="2025-01-30 05:02:53.883379418 +0000 UTC m=+19.192255243" observedRunningTime="2025-01-30 05:02:54.998982661 +0000 UTC m=+20.307858508" watchObservedRunningTime="2025-01-30 05:02:57.359088289 +0000 UTC m=+22.667964134" Jan 30 05:02:57.360496 kubelet[2742]: I0130 05:02:57.359904 2742 topology_manager.go:215] "Topology Admit Handler" podUID="85413716-d934-4846-9d00-52f8e328b411" podNamespace="calico-system" podName="calico-typha-f84549d7-c5xxs" Jan 30 05:02:57.397825 kubelet[2742]: I0130 05:02:57.397425 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85413716-d934-4846-9d00-52f8e328b411-tigera-ca-bundle\") pod \"calico-typha-f84549d7-c5xxs\" (UID: \"85413716-d934-4846-9d00-52f8e328b411\") " pod="calico-system/calico-typha-f84549d7-c5xxs" Jan 30 05:02:57.397825 kubelet[2742]: I0130 05:02:57.397479 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/85413716-d934-4846-9d00-52f8e328b411-typha-certs\") pod \"calico-typha-f84549d7-c5xxs\" (UID: \"85413716-d934-4846-9d00-52f8e328b411\") " pod="calico-system/calico-typha-f84549d7-c5xxs" Jan 30 05:02:57.397825 kubelet[2742]: I0130 05:02:57.397514 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zxs\" (UniqueName: \"kubernetes.io/projected/85413716-d934-4846-9d00-52f8e328b411-kube-api-access-57zxs\") pod \"calico-typha-f84549d7-c5xxs\" (UID: \"85413716-d934-4846-9d00-52f8e328b411\") " pod="calico-system/calico-typha-f84549d7-c5xxs" Jan 30 05:02:57.477927 kubelet[2742]: I0130 05:02:57.475402 2742 topology_manager.go:215] "Topology Admit Handler" podUID="01f1c928-1706-42a5-bb56-01a783fa8509" podNamespace="calico-system" podName="calico-node-tnnm7" Jan 30 05:02:57.499338 kubelet[2742]: I0130 05:02:57.498651 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-log-dir\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.499338 kubelet[2742]: I0130 05:02:57.498726 2742 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-var-run-calico\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.499338 kubelet[2742]: I0130 05:02:57.498769 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-xtables-lock\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.499338 kubelet[2742]: I0130 05:02:57.498794 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-net-dir\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.499338 kubelet[2742]: I0130 05:02:57.498851 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-policysync\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.501057 kubelet[2742]: I0130 05:02:57.498879 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-var-lib-calico\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.501057 kubelet[2742]: I0130 05:02:57.498918 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n62zt\" (UniqueName: \"kubernetes.io/projected/01f1c928-1706-42a5-bb56-01a783fa8509-kube-api-access-n62zt\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.501057 kubelet[2742]: I0130 05:02:57.498950 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-flexvol-driver-host\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.501057 kubelet[2742]: I0130 05:02:57.498985 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01f1c928-1706-42a5-bb56-01a783fa8509-tigera-ca-bundle\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.501057 kubelet[2742]: I0130 05:02:57.499017 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/01f1c928-1706-42a5-bb56-01a783fa8509-node-certs\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.501352 kubelet[2742]: I0130 05:02:57.499044 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-bin-dir\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.501352 kubelet[2742]: I0130 05:02:57.499071 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-lib-modules\") pod \"calico-node-tnnm7\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " pod="calico-system/calico-node-tnnm7" Jan 30 05:02:57.602320 kubelet[2742]: I0130 05:02:57.602239 2742 topology_manager.go:215] "Topology Admit Handler" podUID="a048fe9b-2075-4d81-9452-b1dc14c3972a" podNamespace="calico-system" podName="csi-node-driver-bhbgz" Jan 30 05:02:57.604588 kubelet[2742]: E0130 05:02:57.603367 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhbgz" podUID="a048fe9b-2075-4d81-9452-b1dc14c3972a" Jan 30 05:02:57.614243 kubelet[2742]: E0130 05:02:57.614088 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.614243 kubelet[2742]: W0130 05:02:57.614138 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.614243 kubelet[2742]: E0130 05:02:57.614172 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.627626 kubelet[2742]: E0130 05:02:57.627366 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.627626 kubelet[2742]: W0130 05:02:57.627416 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.627626 kubelet[2742]: E0130 05:02:57.627446 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.645179 kubelet[2742]: E0130 05:02:57.645137 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.645440 kubelet[2742]: W0130 05:02:57.645348 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.645440 kubelet[2742]: E0130 05:02:57.645383 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.690632 kubelet[2742]: E0130 05:02:57.686856 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:57.690813 containerd[1585]: time="2025-01-30T05:02:57.689085909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f84549d7-c5xxs,Uid:85413716-d934-4846-9d00-52f8e328b411,Namespace:calico-system,Attempt:0,}" Jan 30 05:02:57.695093 kubelet[2742]: E0130 05:02:57.695033 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.695093 kubelet[2742]: W0130 05:02:57.695074 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.695947 kubelet[2742]: E0130 05:02:57.695138 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.697906 kubelet[2742]: E0130 05:02:57.697870 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.698059 kubelet[2742]: W0130 05:02:57.697919 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.698059 kubelet[2742]: E0130 05:02:57.697947 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.698298 kubelet[2742]: E0130 05:02:57.698276 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.698647 kubelet[2742]: W0130 05:02:57.698297 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.698647 kubelet[2742]: E0130 05:02:57.698335 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.701397 kubelet[2742]: E0130 05:02:57.701352 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.701397 kubelet[2742]: W0130 05:02:57.701400 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.701641 kubelet[2742]: E0130 05:02:57.701428 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.702280 kubelet[2742]: E0130 05:02:57.702138 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.703055 kubelet[2742]: W0130 05:02:57.702644 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.703055 kubelet[2742]: E0130 05:02:57.702679 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.704705 kubelet[2742]: E0130 05:02:57.704668 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.704705 kubelet[2742]: W0130 05:02:57.704698 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.704895 kubelet[2742]: E0130 05:02:57.704739 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.709241 kubelet[2742]: E0130 05:02:57.709166 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.709241 kubelet[2742]: W0130 05:02:57.709208 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.709488 kubelet[2742]: E0130 05:02:57.709354 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.711334 kubelet[2742]: E0130 05:02:57.711290 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.711334 kubelet[2742]: W0130 05:02:57.711319 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.711518 kubelet[2742]: E0130 05:02:57.711343 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.713836 kubelet[2742]: E0130 05:02:57.713799 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.713836 kubelet[2742]: W0130 05:02:57.713827 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.714076 kubelet[2742]: E0130 05:02:57.713851 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.715035 kubelet[2742]: E0130 05:02:57.715008 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.716598 kubelet[2742]: W0130 05:02:57.715032 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.716598 kubelet[2742]: E0130 05:02:57.715601 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.717751 kubelet[2742]: E0130 05:02:57.717724 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.717751 kubelet[2742]: W0130 05:02:57.717746 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.717933 kubelet[2742]: E0130 05:02:57.717769 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.721945 kubelet[2742]: E0130 05:02:57.721712 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.721945 kubelet[2742]: W0130 05:02:57.721741 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.721945 kubelet[2742]: E0130 05:02:57.721770 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.722310 kubelet[2742]: E0130 05:02:57.722295 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.722537 kubelet[2742]: W0130 05:02:57.722448 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.722537 kubelet[2742]: E0130 05:02:57.722474 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.722921 kubelet[2742]: E0130 05:02:57.722907 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.723092 kubelet[2742]: W0130 05:02:57.723009 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.723092 kubelet[2742]: E0130 05:02:57.723029 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.724522 kubelet[2742]: E0130 05:02:57.723843 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.724522 kubelet[2742]: W0130 05:02:57.723860 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.724522 kubelet[2742]: E0130 05:02:57.723875 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.727983 kubelet[2742]: E0130 05:02:57.727649 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.727983 kubelet[2742]: W0130 05:02:57.727679 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.727983 kubelet[2742]: E0130 05:02:57.727705 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.729533 kubelet[2742]: E0130 05:02:57.728688 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.729533 kubelet[2742]: W0130 05:02:57.729001 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.729533 kubelet[2742]: E0130 05:02:57.729036 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.736869 kubelet[2742]: E0130 05:02:57.736691 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.736869 kubelet[2742]: W0130 05:02:57.736753 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.736869 kubelet[2742]: E0130 05:02:57.736799 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.744166 kubelet[2742]: E0130 05:02:57.743942 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.744166 kubelet[2742]: W0130 05:02:57.743975 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.744166 kubelet[2742]: E0130 05:02:57.744003 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.745927 kubelet[2742]: E0130 05:02:57.745633 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.745927 kubelet[2742]: W0130 05:02:57.745660 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.745927 kubelet[2742]: E0130 05:02:57.745685 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.756587 kubelet[2742]: E0130 05:02:57.756319 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.756587 kubelet[2742]: W0130 05:02:57.756370 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.756587 kubelet[2742]: E0130 05:02:57.756399 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.756587 kubelet[2742]: I0130 05:02:57.756437 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a048fe9b-2075-4d81-9452-b1dc14c3972a-varrun\") pod \"csi-node-driver-bhbgz\" (UID: \"a048fe9b-2075-4d81-9452-b1dc14c3972a\") " pod="calico-system/csi-node-driver-bhbgz" Jan 30 05:02:57.759912 kubelet[2742]: E0130 05:02:57.759681 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.759912 kubelet[2742]: W0130 05:02:57.759710 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.759912 kubelet[2742]: E0130 05:02:57.759766 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.759912 kubelet[2742]: I0130 05:02:57.759813 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tkq5\" (UniqueName: \"kubernetes.io/projected/a048fe9b-2075-4d81-9452-b1dc14c3972a-kube-api-access-2tkq5\") pod \"csi-node-driver-bhbgz\" (UID: \"a048fe9b-2075-4d81-9452-b1dc14c3972a\") " pod="calico-system/csi-node-driver-bhbgz" Jan 30 05:02:57.762376 kubelet[2742]: E0130 05:02:57.761457 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.762376 kubelet[2742]: W0130 05:02:57.761493 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.763222 kubelet[2742]: E0130 05:02:57.762735 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.763222 kubelet[2742]: I0130 05:02:57.763064 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a048fe9b-2075-4d81-9452-b1dc14c3972a-kubelet-dir\") pod \"csi-node-driver-bhbgz\" (UID: \"a048fe9b-2075-4d81-9452-b1dc14c3972a\") " pod="calico-system/csi-node-driver-bhbgz" Jan 30 05:02:57.765179 kubelet[2742]: E0130 05:02:57.765008 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.765179 kubelet[2742]: W0130 05:02:57.765029 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.765179 kubelet[2742]: E0130 05:02:57.765082 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.765634 kubelet[2742]: E0130 05:02:57.765619 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.765850 kubelet[2742]: W0130 05:02:57.765663 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.766163 kubelet[2742]: E0130 05:02:57.766100 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.766163 kubelet[2742]: W0130 05:02:57.766116 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.766521 kubelet[2742]: E0130 05:02:57.766509 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.766668 kubelet[2742]: W0130 05:02:57.766587 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.766668 kubelet[2742]: E0130 05:02:57.766599 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.766668 kubelet[2742]: E0130 05:02:57.766629 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.767007 kubelet[2742]: I0130 05:02:57.766667 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a048fe9b-2075-4d81-9452-b1dc14c3972a-socket-dir\") pod \"csi-node-driver-bhbgz\" (UID: \"a048fe9b-2075-4d81-9452-b1dc14c3972a\") " pod="calico-system/csi-node-driver-bhbgz" Jan 30 05:02:57.767007 kubelet[2742]: E0130 05:02:57.766701 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.767190 kubelet[2742]: E0130 05:02:57.767147 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.767190 kubelet[2742]: W0130 05:02:57.767160 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.767190 kubelet[2742]: E0130 05:02:57.767173 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.767732 kubelet[2742]: E0130 05:02:57.767608 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.767732 kubelet[2742]: W0130 05:02:57.767622 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.767732 kubelet[2742]: E0130 05:02:57.767638 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.768122 kubelet[2742]: I0130 05:02:57.767669 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a048fe9b-2075-4d81-9452-b1dc14c3972a-registration-dir\") pod \"csi-node-driver-bhbgz\" (UID: \"a048fe9b-2075-4d81-9452-b1dc14c3972a\") " pod="calico-system/csi-node-driver-bhbgz" Jan 30 05:02:57.768321 kubelet[2742]: E0130 05:02:57.768232 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.768321 kubelet[2742]: W0130 05:02:57.768243 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.768321 kubelet[2742]: E0130 05:02:57.768259 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.768611 kubelet[2742]: E0130 05:02:57.768600 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.768696 kubelet[2742]: W0130 05:02:57.768685 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.768888 kubelet[2742]: E0130 05:02:57.768795 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.769120 kubelet[2742]: E0130 05:02:57.769109 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.769268 kubelet[2742]: W0130 05:02:57.769181 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.769268 kubelet[2742]: E0130 05:02:57.769196 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.769609 kubelet[2742]: E0130 05:02:57.769446 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.769609 kubelet[2742]: W0130 05:02:57.769457 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.769609 kubelet[2742]: E0130 05:02:57.769467 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.770052 kubelet[2742]: E0130 05:02:57.770034 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.770277 kubelet[2742]: W0130 05:02:57.770154 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.770277 kubelet[2742]: E0130 05:02:57.770178 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.770656 kubelet[2742]: E0130 05:02:57.770542 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.770656 kubelet[2742]: W0130 05:02:57.770555 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.770656 kubelet[2742]: E0130 05:02:57.770587 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.789054 kubelet[2742]: E0130 05:02:57.787839 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:57.790898 containerd[1585]: time="2025-01-30T05:02:57.790793761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tnnm7,Uid:01f1c928-1706-42a5-bb56-01a783fa8509,Namespace:calico-system,Attempt:0,}" Jan 30 05:02:57.806496 containerd[1585]: time="2025-01-30T05:02:57.805910954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:57.806496 containerd[1585]: time="2025-01-30T05:02:57.806065535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:57.806496 containerd[1585]: time="2025-01-30T05:02:57.806098004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:57.806496 containerd[1585]: time="2025-01-30T05:02:57.806216367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:57.872473 kubelet[2742]: E0130 05:02:57.872104 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.872473 kubelet[2742]: W0130 05:02:57.872132 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.872473 kubelet[2742]: E0130 05:02:57.872254 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.877032 kubelet[2742]: E0130 05:02:57.876933 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.877846 kubelet[2742]: W0130 05:02:57.876962 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.877846 kubelet[2742]: E0130 05:02:57.877625 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.883443 kubelet[2742]: E0130 05:02:57.881380 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.883443 kubelet[2742]: W0130 05:02:57.881411 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.883443 kubelet[2742]: E0130 05:02:57.881476 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.888635 kubelet[2742]: E0130 05:02:57.887821 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.888635 kubelet[2742]: W0130 05:02:57.887858 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.888635 kubelet[2742]: E0130 05:02:57.888031 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.888635 kubelet[2742]: E0130 05:02:57.888302 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.888635 kubelet[2742]: W0130 05:02:57.888313 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.888635 kubelet[2742]: E0130 05:02:57.888398 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.890714 kubelet[2742]: E0130 05:02:57.890680 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.892411 kubelet[2742]: W0130 05:02:57.892374 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.893152 kubelet[2742]: E0130 05:02:57.893125 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.897331 kubelet[2742]: E0130 05:02:57.897129 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.897942 kubelet[2742]: W0130 05:02:57.897549 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.899070 kubelet[2742]: E0130 05:02:57.898806 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.901867 kubelet[2742]: E0130 05:02:57.900811 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.901867 kubelet[2742]: W0130 05:02:57.900837 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.901867 kubelet[2742]: E0130 05:02:57.901033 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.901867 kubelet[2742]: E0130 05:02:57.901662 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.901867 kubelet[2742]: W0130 05:02:57.901677 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.902602 kubelet[2742]: E0130 05:02:57.902247 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.911182 containerd[1585]: time="2025-01-30T05:02:57.908713868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:02:57.911182 containerd[1585]: time="2025-01-30T05:02:57.908816310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:02:57.911182 containerd[1585]: time="2025-01-30T05:02:57.908845085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:57.911182 containerd[1585]: time="2025-01-30T05:02:57.909001543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:02:57.911519 kubelet[2742]: E0130 05:02:57.910188 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.911519 kubelet[2742]: W0130 05:02:57.910212 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.911859 kubelet[2742]: E0130 05:02:57.911811 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.912794 kubelet[2742]: W0130 05:02:57.912613 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.914957 kubelet[2742]: E0130 05:02:57.914931 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.915840 kubelet[2742]: E0130 05:02:57.915666 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.916778 kubelet[2742]: W0130 05:02:57.916725 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.917048 kubelet[2742]: E0130 05:02:57.915687 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.917660 kubelet[2742]: E0130 05:02:57.917165 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.926008 kubelet[2742]: E0130 05:02:57.925364 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.926008 kubelet[2742]: W0130 05:02:57.925391 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.929319 kubelet[2742]: E0130 05:02:57.929216 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.929685 kubelet[2742]: E0130 05:02:57.929552 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.929685 kubelet[2742]: W0130 05:02:57.929590 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.929979 kubelet[2742]: E0130 05:02:57.929875 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.930545 kubelet[2742]: E0130 05:02:57.930330 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.930545 kubelet[2742]: W0130 05:02:57.930358 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.931073 kubelet[2742]: E0130 05:02:57.930899 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.931353 kubelet[2742]: E0130 05:02:57.931311 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.931353 kubelet[2742]: W0130 05:02:57.931326 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.931966 kubelet[2742]: E0130 05:02:57.931789 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.932499 kubelet[2742]: E0130 05:02:57.932325 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.932499 kubelet[2742]: W0130 05:02:57.932346 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.932895 kubelet[2742]: E0130 05:02:57.932782 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.933644 kubelet[2742]: E0130 05:02:57.933621 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.934111 kubelet[2742]: W0130 05:02:57.933849 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.934394 kubelet[2742]: E0130 05:02:57.934244 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.934985 kubelet[2742]: E0130 05:02:57.934885 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.934985 kubelet[2742]: W0130 05:02:57.934902 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.936503 kubelet[2742]: E0130 05:02:57.935245 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.937816 kubelet[2742]: E0130 05:02:57.937618 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.938700 kubelet[2742]: W0130 05:02:57.938376 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.939146 kubelet[2742]: E0130 05:02:57.938909 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.939476 kubelet[2742]: E0130 05:02:57.939458 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.939778 kubelet[2742]: W0130 05:02:57.939535 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.939970 kubelet[2742]: E0130 05:02:57.939860 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.941237 kubelet[2742]: E0130 05:02:57.941186 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.941502 kubelet[2742]: W0130 05:02:57.941315 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.941879 kubelet[2742]: E0130 05:02:57.941856 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.942336 kubelet[2742]: E0130 05:02:57.942276 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.942336 kubelet[2742]: W0130 05:02:57.942293 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.942818 kubelet[2742]: E0130 05:02:57.942623 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:57.943519 kubelet[2742]: E0130 05:02:57.943503 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.943719 kubelet[2742]: W0130 05:02:57.943589 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.943719 kubelet[2742]: E0130 05:02:57.943619 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.944541 kubelet[2742]: E0130 05:02:57.944413 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.944541 kubelet[2742]: W0130 05:02:57.944454 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.944541 kubelet[2742]: E0130 05:02:57.944478 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:02:57.964912 kubelet[2742]: E0130 05:02:57.964806 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:02:57.964912 kubelet[2742]: W0130 05:02:57.964835 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:02:57.964912 kubelet[2742]: E0130 05:02:57.964859 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:02:58.093657 containerd[1585]: time="2025-01-30T05:02:58.092928764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f84549d7-c5xxs,Uid:85413716-d934-4846-9d00-52f8e328b411,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\"" Jan 30 05:02:58.099690 kubelet[2742]: E0130 05:02:58.099624 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:58.103900 containerd[1585]: time="2025-01-30T05:02:58.103843108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 05:02:58.119289 containerd[1585]: time="2025-01-30T05:02:58.119086312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tnnm7,Uid:01f1c928-1706-42a5-bb56-01a783fa8509,Namespace:calico-system,Attempt:0,} returns sandbox id \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\"" Jan 30 05:02:58.122717 kubelet[2742]: E0130 05:02:58.122009 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:02:58.847964 kubelet[2742]: E0130 05:02:58.847889 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhbgz" podUID="a048fe9b-2075-4d81-9452-b1dc14c3972a" Jan 30 05:02:59.582085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231943412.mount: Deactivated successfully. 
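The FlexVolume errors above are the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ before Calico's flexvol-driver init container (the pod2daemon-flexvol image pulled just below) has installed the uds binary: the exec fails with "executable file not found in $PATH", so the driver produces no output and the JSON decode in driver-call.go fails with "unexpected end of JSON input". A minimal sketch of the init contract the kubelet expects, assuming a hypothetical stand-in driver written in Go rather than the real pod2daemon binary:

// Hypothetical FlexVolume driver stub, not the actual calico "uds" driver.
// The kubelet execs the driver with "init" and parses a JSON status object
// from stdout; the empty output seen in the log above is what makes the
// unmarshal in driver-call.go fail.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}

Once the flexvol-driver container installs the real binary into that directory (its StartContainer appears at 05:03:02 below), the periodic plugin probe should begin succeeding, which is consistent with these messages tapering off later in the log.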
Jan 30 05:03:00.337959 containerd[1585]: time="2025-01-30T05:03:00.337876852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:00.342905 containerd[1585]: time="2025-01-30T05:03:00.342793007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 05:03:00.347611 containerd[1585]: time="2025-01-30T05:03:00.346874675Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:00.352226 containerd[1585]: time="2025-01-30T05:03:00.352171068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:00.353891 containerd[1585]: time="2025-01-30T05:03:00.353803283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.249895407s" Jan 30 05:03:00.353891 containerd[1585]: time="2025-01-30T05:03:00.353871259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 05:03:00.356920 containerd[1585]: time="2025-01-30T05:03:00.356854461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 05:03:00.391459 containerd[1585]: time="2025-01-30T05:03:00.391407864Z" level=info msg="CreateContainer within sandbox \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 05:03:00.430037 containerd[1585]: time="2025-01-30T05:03:00.429964221Z" level=info msg="CreateContainer within sandbox \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\"" Jan 30 05:03:00.430802 containerd[1585]: time="2025-01-30T05:03:00.430659880Z" level=info msg="StartContainer for \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\"" Jan 30 05:03:00.637319 containerd[1585]: time="2025-01-30T05:03:00.637223681Z" level=info msg="StartContainer for \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\" returns successfully" Jan 30 05:03:00.848000 kubelet[2742]: E0130 05:03:00.847937 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhbgz" podUID="a048fe9b-2075-4d81-9452-b1dc14c3972a" Jan 30 05:03:01.011471 kubelet[2742]: E0130 05:03:01.011012 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:01.026952 kubelet[2742]: I0130 05:03:01.025473 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-f84549d7-c5xxs" podStartSLOduration=1.7707564919999998 podStartE2EDuration="4.025449957s" podCreationTimestamp="2025-01-30 05:02:57 +0000 UTC" firstStartedPulling="2025-01-30 05:02:58.101243565 +0000 UTC m=+23.410119402" lastFinishedPulling="2025-01-30 05:03:00.355937043 +0000 UTC m=+25.664812867" observedRunningTime="2025-01-30 05:03:01.025023797 +0000 UTC m=+26.333899640" watchObservedRunningTime="2025-01-30 05:03:01.025449957 +0000 UTC m=+26.334325802" Jan 30 05:03:01.079249 kubelet[2742]: E0130 05:03:01.079204 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.079249 kubelet[2742]: W0130 05:03:01.079239 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.079496 kubelet[2742]: E0130 05:03:01.079265 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.079556 kubelet[2742]: E0130 05:03:01.079546 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.079632 kubelet[2742]: W0130 05:03:01.079558 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.079632 kubelet[2742]: E0130 05:03:01.079600 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.079851 kubelet[2742]: E0130 05:03:01.079802 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.079851 kubelet[2742]: W0130 05:03:01.079813 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.079851 kubelet[2742]: E0130 05:03:01.079835 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.080063 kubelet[2742]: E0130 05:03:01.080023 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.080063 kubelet[2742]: W0130 05:03:01.080034 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.080063 kubelet[2742]: E0130 05:03:01.080044 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:03:01.080266 kubelet[2742]: E0130 05:03:01.080250 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.080266 kubelet[2742]: W0130 05:03:01.080261 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.080382 kubelet[2742]: E0130 05:03:01.080271 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.080578 kubelet[2742]: E0130 05:03:01.080549 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.080646 kubelet[2742]: W0130 05:03:01.080583 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.080646 kubelet[2742]: E0130 05:03:01.080601 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.080845 kubelet[2742]: E0130 05:03:01.080831 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.080845 kubelet[2742]: W0130 05:03:01.080846 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.080962 kubelet[2742]: E0130 05:03:01.080857 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.081172 kubelet[2742]: E0130 05:03:01.081155 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.081172 kubelet[2742]: W0130 05:03:01.081168 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.081295 kubelet[2742]: E0130 05:03:01.081181 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.081433 kubelet[2742]: E0130 05:03:01.081420 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.081489 kubelet[2742]: W0130 05:03:01.081438 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.081489 kubelet[2742]: E0130 05:03:01.081449 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:03:01.081656 kubelet[2742]: E0130 05:03:01.081644 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.081656 kubelet[2742]: W0130 05:03:01.081654 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.081777 kubelet[2742]: E0130 05:03:01.081677 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.081859 kubelet[2742]: E0130 05:03:01.081846 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.081859 kubelet[2742]: W0130 05:03:01.081858 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.081955 kubelet[2742]: E0130 05:03:01.081868 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.082056 kubelet[2742]: E0130 05:03:01.082044 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.082056 kubelet[2742]: W0130 05:03:01.082054 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.082215 kubelet[2742]: E0130 05:03:01.082063 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.082288 kubelet[2742]: E0130 05:03:01.082239 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.082288 kubelet[2742]: W0130 05:03:01.082248 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.082288 kubelet[2742]: E0130 05:03:01.082258 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.082462 kubelet[2742]: E0130 05:03:01.082446 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.082512 kubelet[2742]: W0130 05:03:01.082464 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.082512 kubelet[2742]: E0130 05:03:01.082477 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:03:01.082706 kubelet[2742]: E0130 05:03:01.082693 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.082706 kubelet[2742]: W0130 05:03:01.082705 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.082800 kubelet[2742]: E0130 05:03:01.082717 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.148322 kubelet[2742]: E0130 05:03:01.148089 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.148322 kubelet[2742]: W0130 05:03:01.148119 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.148322 kubelet[2742]: E0130 05:03:01.148148 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.148967 kubelet[2742]: E0130 05:03:01.148949 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.149524 kubelet[2742]: W0130 05:03:01.149199 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.149524 kubelet[2742]: E0130 05:03:01.149240 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.150048 kubelet[2742]: E0130 05:03:01.149886 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.150048 kubelet[2742]: W0130 05:03:01.149920 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.150048 kubelet[2742]: E0130 05:03:01.149945 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.151359 kubelet[2742]: E0130 05:03:01.150515 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.151359 kubelet[2742]: W0130 05:03:01.151161 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.151359 kubelet[2742]: E0130 05:03:01.151205 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:03:01.152294 kubelet[2742]: E0130 05:03:01.151929 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.152294 kubelet[2742]: W0130 05:03:01.151948 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.152294 kubelet[2742]: E0130 05:03:01.152007 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.152924 kubelet[2742]: E0130 05:03:01.152477 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.152924 kubelet[2742]: W0130 05:03:01.152491 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.152924 kubelet[2742]: E0130 05:03:01.152845 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.153701 kubelet[2742]: E0130 05:03:01.153461 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.153701 kubelet[2742]: W0130 05:03:01.153494 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.153701 kubelet[2742]: E0130 05:03:01.153655 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.155197 kubelet[2742]: E0130 05:03:01.154182 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.155197 kubelet[2742]: W0130 05:03:01.154197 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.155197 kubelet[2742]: E0130 05:03:01.154224 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.155897 kubelet[2742]: E0130 05:03:01.155741 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.155897 kubelet[2742]: W0130 05:03:01.155759 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.155897 kubelet[2742]: E0130 05:03:01.155786 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:03:01.156396 kubelet[2742]: E0130 05:03:01.156362 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.156396 kubelet[2742]: W0130 05:03:01.156378 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.156779 kubelet[2742]: E0130 05:03:01.156604 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.156953 kubelet[2742]: E0130 05:03:01.156913 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.156953 kubelet[2742]: W0130 05:03:01.156927 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.157348 kubelet[2742]: E0130 05:03:01.157304 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.157791 kubelet[2742]: E0130 05:03:01.157650 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.157791 kubelet[2742]: W0130 05:03:01.157667 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.158111 kubelet[2742]: E0130 05:03:01.157955 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.158270 kubelet[2742]: E0130 05:03:01.158236 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.158270 kubelet[2742]: W0130 05:03:01.158250 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.158626 kubelet[2742]: E0130 05:03:01.158449 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.159145 kubelet[2742]: E0130 05:03:01.158901 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.159145 kubelet[2742]: W0130 05:03:01.158917 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.159145 kubelet[2742]: E0130 05:03:01.158938 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:03:01.159708 kubelet[2742]: E0130 05:03:01.159550 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.159708 kubelet[2742]: W0130 05:03:01.159588 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.159708 kubelet[2742]: E0130 05:03:01.159654 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.160883 kubelet[2742]: E0130 05:03:01.160467 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.160883 kubelet[2742]: W0130 05:03:01.160484 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.160883 kubelet[2742]: E0130 05:03:01.160504 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.161471 kubelet[2742]: E0130 05:03:01.161306 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.161471 kubelet[2742]: W0130 05:03:01.161322 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.161471 kubelet[2742]: E0130 05:03:01.161353 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:01.162508 kubelet[2742]: E0130 05:03:01.162435 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:01.162508 kubelet[2742]: W0130 05:03:01.162452 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:01.162508 kubelet[2742]: E0130 05:03:01.162470 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:03:01.890938 containerd[1585]: time="2025-01-30T05:03:01.890851945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:01.898940 containerd[1585]: time="2025-01-30T05:03:01.898839824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 05:03:01.903916 containerd[1585]: time="2025-01-30T05:03:01.903806415Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:01.910413 containerd[1585]: time="2025-01-30T05:03:01.910271541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:01.912108 containerd[1585]: time="2025-01-30T05:03:01.911817085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.554894615s" Jan 30 05:03:01.912108 containerd[1585]: time="2025-01-30T05:03:01.911880753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 05:03:01.916230 containerd[1585]: time="2025-01-30T05:03:01.916049141Z" level=info msg="CreateContainer within sandbox \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 05:03:01.992723 containerd[1585]: time="2025-01-30T05:03:01.992628646Z" level=info msg="CreateContainer within sandbox \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b\"" Jan 30 05:03:01.996033 containerd[1585]: time="2025-01-30T05:03:01.993985727Z" level=info msg="StartContainer for \"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b\"" Jan 30 05:03:02.024996 kubelet[2742]: I0130 05:03:02.024945 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:03:02.028733 kubelet[2742]: E0130 05:03:02.028532 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:02.060813 systemd[1]: run-containerd-runc-k8s.io-2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b-runc.erXlfP.mount: Deactivated successfully. 
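The recurring dns.go:153 warning ("Nameserver limits exceeded") means the node's resolv.conf carries more nameserver entries than the kubelet will pass through to a pod; only the first three are applied, which is why the DigitalOcean resolver 67.207.67.2 appears twice in the applied line. A rough sketch of that truncation, assuming a simplified version of the kubelet's behaviour (the real logic is in the dns.go file named in the log) and a hypothetical fourth entry standing in for whatever extra resolver the node actually lists:

// Simplified illustration of the kubelet capping pod nameservers at three.
// 203.0.113.53 is a placeholder documentation address, not taken from the log.
package main

import "fmt"

func capNameservers(ns []string, limit int) []string {
	if len(ns) > limit {
		return ns[:limit]
	}
	return ns
}

func main() {
	nodeNS := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "203.0.113.53"}
	fmt.Println(capNameservers(nodeNS, 3)) // prints [67.207.67.2 67.207.67.3 67.207.67.2]
}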
Jan 30 05:03:02.094140 kubelet[2742]: E0130 05:03:02.094092 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.094140 kubelet[2742]: W0130 05:03:02.094119 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.094140 kubelet[2742]: E0130 05:03:02.094145 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.094433 kubelet[2742]: E0130 05:03:02.094336 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.094433 kubelet[2742]: W0130 05:03:02.094343 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.094433 kubelet[2742]: E0130 05:03:02.094352 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.095476 kubelet[2742]: E0130 05:03:02.094541 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.095476 kubelet[2742]: W0130 05:03:02.094548 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.095476 kubelet[2742]: E0130 05:03:02.094557 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.095476 kubelet[2742]: E0130 05:03:02.094768 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.095476 kubelet[2742]: W0130 05:03:02.094775 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.095476 kubelet[2742]: E0130 05:03:02.094784 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.095476 kubelet[2742]: E0130 05:03:02.095000 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.095476 kubelet[2742]: W0130 05:03:02.095009 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.095476 kubelet[2742]: E0130 05:03:02.095021 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:03:02.095476 kubelet[2742]: E0130 05:03:02.095204 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.096023 kubelet[2742]: W0130 05:03:02.095212 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.096023 kubelet[2742]: E0130 05:03:02.095219 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.096023 kubelet[2742]: E0130 05:03:02.095485 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.096023 kubelet[2742]: W0130 05:03:02.095507 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.096023 kubelet[2742]: E0130 05:03:02.095530 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.096023 kubelet[2742]: E0130 05:03:02.095802 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.096023 kubelet[2742]: W0130 05:03:02.095828 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.096023 kubelet[2742]: E0130 05:03:02.095841 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.096399 kubelet[2742]: E0130 05:03:02.096111 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.096399 kubelet[2742]: W0130 05:03:02.096121 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.096399 kubelet[2742]: E0130 05:03:02.096132 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.096399 kubelet[2742]: E0130 05:03:02.096351 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.096399 kubelet[2742]: W0130 05:03:02.096359 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.096399 kubelet[2742]: E0130 05:03:02.096388 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:03:02.097064 kubelet[2742]: E0130 05:03:02.096606 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.097064 kubelet[2742]: W0130 05:03:02.096621 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.097064 kubelet[2742]: E0130 05:03:02.096632 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.097064 kubelet[2742]: E0130 05:03:02.097021 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.097064 kubelet[2742]: W0130 05:03:02.097033 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.097308 kubelet[2742]: E0130 05:03:02.097144 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.097534 kubelet[2742]: E0130 05:03:02.097518 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.097534 kubelet[2742]: W0130 05:03:02.097532 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.097673 kubelet[2742]: E0130 05:03:02.097545 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.098018 kubelet[2742]: E0130 05:03:02.097843 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.098018 kubelet[2742]: W0130 05:03:02.097905 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.098018 kubelet[2742]: E0130 05:03:02.097922 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 05:03:02.098256 kubelet[2742]: E0130 05:03:02.098230 2742 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 05:03:02.098256 kubelet[2742]: W0130 05:03:02.098248 2742 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 05:03:02.098371 kubelet[2742]: E0130 05:03:02.098264 2742 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 05:03:02.113717 containerd[1585]: time="2025-01-30T05:03:02.113650745Z" level=info msg="StartContainer for \"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b\" returns successfully" Jan 30 05:03:02.180379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b-rootfs.mount: Deactivated successfully. Jan 30 05:03:02.231683 containerd[1585]: time="2025-01-30T05:03:02.186344391Z" level=info msg="shim disconnected" id=2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b namespace=k8s.io Jan 30 05:03:02.231683 containerd[1585]: time="2025-01-30T05:03:02.231393863Z" level=warning msg="cleaning up after shim disconnected" id=2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b namespace=k8s.io Jan 30 05:03:02.231683 containerd[1585]: time="2025-01-30T05:03:02.231420794Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:03:02.847870 kubelet[2742]: E0130 05:03:02.847323 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhbgz" podUID="a048fe9b-2075-4d81-9452-b1dc14c3972a" Jan 30 05:03:03.042376 kubelet[2742]: E0130 05:03:03.042330 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:03.056443 containerd[1585]: time="2025-01-30T05:03:03.056386612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 05:03:04.850598 kubelet[2742]: E0130 05:03:04.848355 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhbgz" podUID="a048fe9b-2075-4d81-9452-b1dc14c3972a" Jan 30 05:03:06.847761 kubelet[2742]: E0130 05:03:06.847382 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bhbgz" podUID="a048fe9b-2075-4d81-9452-b1dc14c3972a" Jan 30 05:03:07.553099 containerd[1585]: time="2025-01-30T05:03:07.553027862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:07.555414 containerd[1585]: time="2025-01-30T05:03:07.555319444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 05:03:07.558300 containerd[1585]: time="2025-01-30T05:03:07.558224876Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:07.563418 containerd[1585]: time="2025-01-30T05:03:07.563307565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:07.564880 containerd[1585]: time="2025-01-30T05:03:07.564692254Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.50825407s" Jan 30 05:03:07.564880 containerd[1585]: time="2025-01-30T05:03:07.564748734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 05:03:07.569556 containerd[1585]: time="2025-01-30T05:03:07.569503044Z" level=info msg="CreateContainer within sandbox \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 05:03:07.644442 containerd[1585]: time="2025-01-30T05:03:07.644292647Z" level=info msg="CreateContainer within sandbox \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388\"" Jan 30 05:03:07.648198 containerd[1585]: time="2025-01-30T05:03:07.645077804Z" level=info msg="StartContainer for \"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388\"" Jan 30 05:03:07.784130 containerd[1585]: time="2025-01-30T05:03:07.784062039Z" level=info msg="StartContainer for \"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388\" returns successfully" Jan 30 05:03:08.060243 kubelet[2742]: E0130 05:03:08.058724 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:08.470030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388-rootfs.mount: Deactivated successfully. 
Jan 30 05:03:08.476487 containerd[1585]: time="2025-01-30T05:03:08.476382759Z" level=info msg="shim disconnected" id=12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388 namespace=k8s.io Jan 30 05:03:08.476487 containerd[1585]: time="2025-01-30T05:03:08.476464549Z" level=warning msg="cleaning up after shim disconnected" id=12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388 namespace=k8s.io Jan 30 05:03:08.476487 containerd[1585]: time="2025-01-30T05:03:08.476478148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:03:08.499004 kubelet[2742]: I0130 05:03:08.493790 2742 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 05:03:08.533078 kubelet[2742]: I0130 05:03:08.533015 2742 topology_manager.go:215] "Topology Admit Handler" podUID="38e6d30e-c18e-4b00-bba1-7a7a43ab759a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lc884" Jan 30 05:03:08.548110 kubelet[2742]: I0130 05:03:08.545786 2742 topology_manager.go:215] "Topology Admit Handler" podUID="eaf40084-ae75-4229-851b-dca331cd774c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vz995" Jan 30 05:03:08.549316 kubelet[2742]: I0130 05:03:08.548501 2742 topology_manager.go:215] "Topology Admit Handler" podUID="160f5477-6b56-47a9-a6b1-0ce2a996310c" podNamespace="calico-system" podName="calico-kube-controllers-764f56cffb-268h9" Jan 30 05:03:08.549316 kubelet[2742]: I0130 05:03:08.548791 2742 topology_manager.go:215] "Topology Admit Handler" podUID="c9db393b-7bc7-4843-b161-dce7fa134a05" podNamespace="calico-apiserver" podName="calico-apiserver-7bbbc46978-rxbqx" Jan 30 05:03:08.549316 kubelet[2742]: I0130 05:03:08.548948 2742 topology_manager.go:215] "Topology Admit Handler" podUID="080595a4-3a66-4a46-973a-099bfefc2f67" podNamespace="calico-apiserver" podName="calico-apiserver-7bbbc46978-k5pj5" Jan 30 05:03:08.624672 kubelet[2742]: I0130 05:03:08.624614 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaf40084-ae75-4229-851b-dca331cd774c-config-volume\") pod \"coredns-7db6d8ff4d-vz995\" (UID: \"eaf40084-ae75-4229-851b-dca331cd774c\") " pod="kube-system/coredns-7db6d8ff4d-vz995" Jan 30 05:03:08.624672 kubelet[2742]: I0130 05:03:08.624675 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/160f5477-6b56-47a9-a6b1-0ce2a996310c-tigera-ca-bundle\") pod \"calico-kube-controllers-764f56cffb-268h9\" (UID: \"160f5477-6b56-47a9-a6b1-0ce2a996310c\") " pod="calico-system/calico-kube-controllers-764f56cffb-268h9" Jan 30 05:03:08.625026 kubelet[2742]: I0130 05:03:08.624707 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/080595a4-3a66-4a46-973a-099bfefc2f67-calico-apiserver-certs\") pod \"calico-apiserver-7bbbc46978-k5pj5\" (UID: \"080595a4-3a66-4a46-973a-099bfefc2f67\") " pod="calico-apiserver/calico-apiserver-7bbbc46978-k5pj5" Jan 30 05:03:08.625026 kubelet[2742]: I0130 05:03:08.624731 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p8t2\" (UniqueName: \"kubernetes.io/projected/080595a4-3a66-4a46-973a-099bfefc2f67-kube-api-access-5p8t2\") pod \"calico-apiserver-7bbbc46978-k5pj5\" (UID: \"080595a4-3a66-4a46-973a-099bfefc2f67\") " 
pod="calico-apiserver/calico-apiserver-7bbbc46978-k5pj5" Jan 30 05:03:08.625026 kubelet[2742]: I0130 05:03:08.624757 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38e6d30e-c18e-4b00-bba1-7a7a43ab759a-config-volume\") pod \"coredns-7db6d8ff4d-lc884\" (UID: \"38e6d30e-c18e-4b00-bba1-7a7a43ab759a\") " pod="kube-system/coredns-7db6d8ff4d-lc884" Jan 30 05:03:08.625026 kubelet[2742]: I0130 05:03:08.624786 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xbmn\" (UniqueName: \"kubernetes.io/projected/38e6d30e-c18e-4b00-bba1-7a7a43ab759a-kube-api-access-6xbmn\") pod \"coredns-7db6d8ff4d-lc884\" (UID: \"38e6d30e-c18e-4b00-bba1-7a7a43ab759a\") " pod="kube-system/coredns-7db6d8ff4d-lc884" Jan 30 05:03:08.625026 kubelet[2742]: I0130 05:03:08.624817 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w76rc\" (UniqueName: \"kubernetes.io/projected/160f5477-6b56-47a9-a6b1-0ce2a996310c-kube-api-access-w76rc\") pod \"calico-kube-controllers-764f56cffb-268h9\" (UID: \"160f5477-6b56-47a9-a6b1-0ce2a996310c\") " pod="calico-system/calico-kube-controllers-764f56cffb-268h9" Jan 30 05:03:08.627519 kubelet[2742]: I0130 05:03:08.624853 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48mh\" (UniqueName: \"kubernetes.io/projected/c9db393b-7bc7-4843-b161-dce7fa134a05-kube-api-access-l48mh\") pod \"calico-apiserver-7bbbc46978-rxbqx\" (UID: \"c9db393b-7bc7-4843-b161-dce7fa134a05\") " pod="calico-apiserver/calico-apiserver-7bbbc46978-rxbqx" Jan 30 05:03:08.627519 kubelet[2742]: I0130 05:03:08.624883 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4xb8\" (UniqueName: \"kubernetes.io/projected/eaf40084-ae75-4229-851b-dca331cd774c-kube-api-access-j4xb8\") pod \"coredns-7db6d8ff4d-vz995\" (UID: \"eaf40084-ae75-4229-851b-dca331cd774c\") " pod="kube-system/coredns-7db6d8ff4d-vz995" Jan 30 05:03:08.627519 kubelet[2742]: I0130 05:03:08.624917 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c9db393b-7bc7-4843-b161-dce7fa134a05-calico-apiserver-certs\") pod \"calico-apiserver-7bbbc46978-rxbqx\" (UID: \"c9db393b-7bc7-4843-b161-dce7fa134a05\") " pod="calico-apiserver/calico-apiserver-7bbbc46978-rxbqx" Jan 30 05:03:08.843732 kubelet[2742]: E0130 05:03:08.843369 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:08.845607 containerd[1585]: time="2025-01-30T05:03:08.845262408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lc884,Uid:38e6d30e-c18e-4b00-bba1-7a7a43ab759a,Namespace:kube-system,Attempt:0,}" Jan 30 05:03:08.855393 kubelet[2742]: E0130 05:03:08.855350 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:08.859080 containerd[1585]: time="2025-01-30T05:03:08.858031921Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-bhbgz,Uid:a048fe9b-2075-4d81-9452-b1dc14c3972a,Namespace:calico-system,Attempt:0,}" Jan 30 05:03:08.859080 containerd[1585]: time="2025-01-30T05:03:08.858979167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vz995,Uid:eaf40084-ae75-4229-851b-dca331cd774c,Namespace:kube-system,Attempt:0,}" Jan 30 05:03:08.859328 containerd[1585]: time="2025-01-30T05:03:08.859301762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbbc46978-rxbqx,Uid:c9db393b-7bc7-4843-b161-dce7fa134a05,Namespace:calico-apiserver,Attempt:0,}" Jan 30 05:03:08.862856 containerd[1585]: time="2025-01-30T05:03:08.862814034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764f56cffb-268h9,Uid:160f5477-6b56-47a9-a6b1-0ce2a996310c,Namespace:calico-system,Attempt:0,}" Jan 30 05:03:08.866101 containerd[1585]: time="2025-01-30T05:03:08.866006032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbbc46978-k5pj5,Uid:080595a4-3a66-4a46-973a-099bfefc2f67,Namespace:calico-apiserver,Attempt:0,}" Jan 30 05:03:09.127808 kubelet[2742]: E0130 05:03:09.122898 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:09.149603 containerd[1585]: time="2025-01-30T05:03:09.140658052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 05:03:09.438299 containerd[1585]: time="2025-01-30T05:03:09.438109132Z" level=error msg="Failed to destroy network for sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.439461 containerd[1585]: time="2025-01-30T05:03:09.439411642Z" level=error msg="encountered an error cleaning up failed sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.446807 containerd[1585]: time="2025-01-30T05:03:09.446708272Z" level=error msg="Failed to destroy network for sandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.448741 containerd[1585]: time="2025-01-30T05:03:09.448683505Z" level=error msg="encountered an error cleaning up failed sandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.458301 containerd[1585]: time="2025-01-30T05:03:09.457131883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbbc46978-rxbqx,Uid:c9db393b-7bc7-4843-b161-dce7fa134a05,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.459683 containerd[1585]: time="2025-01-30T05:03:09.459639110Z" level=error msg="Failed to destroy network for sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.461075 containerd[1585]: time="2025-01-30T05:03:09.459979990Z" level=error msg="encountered an error cleaning up failed sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.461075 containerd[1585]: time="2025-01-30T05:03:09.460031747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lc884,Uid:38e6d30e-c18e-4b00-bba1-7a7a43ab759a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.467523 containerd[1585]: time="2025-01-30T05:03:09.467449541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vz995,Uid:eaf40084-ae75-4229-851b-dca331cd774c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.467899 containerd[1585]: time="2025-01-30T05:03:09.467844700Z" level=error msg="Failed to destroy network for sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.468282 kubelet[2742]: E0130 05:03:09.468210 2742 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.468407 kubelet[2742]: E0130 05:03:09.468326 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vz995" Jan 30 05:03:09.468407 kubelet[2742]: E0130 
05:03:09.468357 2742 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vz995" Jan 30 05:03:09.468534 kubelet[2742]: E0130 05:03:09.468412 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vz995_kube-system(eaf40084-ae75-4229-851b-dca331cd774c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vz995_kube-system(eaf40084-ae75-4229-851b-dca331cd774c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vz995" podUID="eaf40084-ae75-4229-851b-dca331cd774c" Jan 30 05:03:09.469723 kubelet[2742]: E0130 05:03:09.468623 2742 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.469723 kubelet[2742]: E0130 05:03:09.468695 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lc884" Jan 30 05:03:09.469723 kubelet[2742]: E0130 05:03:09.468725 2742 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lc884" Jan 30 05:03:09.469723 kubelet[2742]: E0130 05:03:09.468210 2742 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.470044 kubelet[2742]: E0130 05:03:09.468788 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lc884_kube-system(38e6d30e-c18e-4b00-bba1-7a7a43ab759a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lc884_kube-system(38e6d30e-c18e-4b00-bba1-7a7a43ab759a)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lc884" podUID="38e6d30e-c18e-4b00-bba1-7a7a43ab759a" Jan 30 05:03:09.470044 kubelet[2742]: E0130 05:03:09.468829 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bbbc46978-rxbqx" Jan 30 05:03:09.470044 kubelet[2742]: E0130 05:03:09.468859 2742 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bbbc46978-rxbqx" Jan 30 05:03:09.470304 containerd[1585]: time="2025-01-30T05:03:09.469782013Z" level=error msg="encountered an error cleaning up failed sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.470304 containerd[1585]: time="2025-01-30T05:03:09.469888312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764f56cffb-268h9,Uid:160f5477-6b56-47a9-a6b1-0ce2a996310c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.472241 kubelet[2742]: E0130 05:03:09.468908 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bbbc46978-rxbqx_calico-apiserver(c9db393b-7bc7-4843-b161-dce7fa134a05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bbbc46978-rxbqx_calico-apiserver(c9db393b-7bc7-4843-b161-dce7fa134a05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bbbc46978-rxbqx" podUID="c9db393b-7bc7-4843-b161-dce7fa134a05" Jan 30 05:03:09.472241 kubelet[2742]: E0130 05:03:09.471341 2742 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.472241 kubelet[2742]: E0130 05:03:09.471389 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764f56cffb-268h9" Jan 30 05:03:09.472497 containerd[1585]: time="2025-01-30T05:03:09.470677124Z" level=error msg="Failed to destroy network for sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.472497 containerd[1585]: time="2025-01-30T05:03:09.471107872Z" level=error msg="encountered an error cleaning up failed sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.472497 containerd[1585]: time="2025-01-30T05:03:09.471209420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhbgz,Uid:a048fe9b-2075-4d81-9452-b1dc14c3972a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.472497 containerd[1585]: time="2025-01-30T05:03:09.471378131Z" level=error msg="Failed to destroy network for sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.472497 containerd[1585]: time="2025-01-30T05:03:09.471786912Z" level=error msg="encountered an error cleaning up failed sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.472497 containerd[1585]: time="2025-01-30T05:03:09.472035186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbbc46978-k5pj5,Uid:080595a4-3a66-4a46-973a-099bfefc2f67,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.473361 kubelet[2742]: E0130 05:03:09.471412 2742 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764f56cffb-268h9" Jan 30 05:03:09.473361 kubelet[2742]: E0130 05:03:09.471447 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-764f56cffb-268h9_calico-system(160f5477-6b56-47a9-a6b1-0ce2a996310c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-764f56cffb-268h9_calico-system(160f5477-6b56-47a9-a6b1-0ce2a996310c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764f56cffb-268h9" podUID="160f5477-6b56-47a9-a6b1-0ce2a996310c" Jan 30 05:03:09.473361 kubelet[2742]: E0130 05:03:09.472718 2742 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.474658 kubelet[2742]: E0130 05:03:09.472769 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bhbgz" Jan 30 05:03:09.474658 kubelet[2742]: E0130 05:03:09.472794 2742 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bhbgz" Jan 30 05:03:09.474658 kubelet[2742]: E0130 05:03:09.472859 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bhbgz_calico-system(a048fe9b-2075-4d81-9452-b1dc14c3972a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bhbgz_calico-system(a048fe9b-2075-4d81-9452-b1dc14c3972a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bhbgz" podUID="a048fe9b-2075-4d81-9452-b1dc14c3972a" Jan 30 05:03:09.474895 kubelet[2742]: E0130 05:03:09.472931 2742 remote_runtime.go:193] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:09.474895 kubelet[2742]: E0130 05:03:09.472959 2742 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bbbc46978-k5pj5" Jan 30 05:03:09.474895 kubelet[2742]: E0130 05:03:09.472979 2742 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bbbc46978-k5pj5" Jan 30 05:03:09.475807 kubelet[2742]: E0130 05:03:09.473020 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bbbc46978-k5pj5_calico-apiserver(080595a4-3a66-4a46-973a-099bfefc2f67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bbbc46978-k5pj5_calico-apiserver(080595a4-3a66-4a46-973a-099bfefc2f67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bbbc46978-k5pj5" podUID="080595a4-3a66-4a46-973a-099bfefc2f67" Jan 30 05:03:09.751299 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d-shm.mount: Deactivated successfully. 
Jan 30 05:03:10.126642 kubelet[2742]: I0130 05:03:10.126546 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:10.128302 kubelet[2742]: I0130 05:03:10.128175 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:10.131886 containerd[1585]: time="2025-01-30T05:03:10.131829994Z" level=info msg="StopPodSandbox for \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\"" Jan 30 05:03:10.140593 containerd[1585]: time="2025-01-30T05:03:10.139819435Z" level=info msg="StopPodSandbox for \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\"" Jan 30 05:03:10.140593 containerd[1585]: time="2025-01-30T05:03:10.140328014Z" level=info msg="Ensure that sandbox 633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185 in task-service has been cleanup successfully" Jan 30 05:03:10.140817 containerd[1585]: time="2025-01-30T05:03:10.140766684Z" level=info msg="Ensure that sandbox 26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a in task-service has been cleanup successfully" Jan 30 05:03:10.147504 kubelet[2742]: I0130 05:03:10.146788 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:10.151660 containerd[1585]: time="2025-01-30T05:03:10.151604082Z" level=info msg="StopPodSandbox for \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\"" Jan 30 05:03:10.152380 containerd[1585]: time="2025-01-30T05:03:10.152316946Z" level=info msg="Ensure that sandbox c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6 in task-service has been cleanup successfully" Jan 30 05:03:10.154778 kubelet[2742]: I0130 05:03:10.154651 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:10.158848 containerd[1585]: time="2025-01-30T05:03:10.157046162Z" level=info msg="StopPodSandbox for \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\"" Jan 30 05:03:10.164724 containerd[1585]: time="2025-01-30T05:03:10.163557834Z" level=info msg="Ensure that sandbox 5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf in task-service has been cleanup successfully" Jan 30 05:03:10.169225 kubelet[2742]: I0130 05:03:10.168607 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:10.171655 containerd[1585]: time="2025-01-30T05:03:10.171541081Z" level=info msg="StopPodSandbox for \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\"" Jan 30 05:03:10.174367 containerd[1585]: time="2025-01-30T05:03:10.174303711Z" level=info msg="Ensure that sandbox 3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af in task-service has been cleanup successfully" Jan 30 05:03:10.177805 kubelet[2742]: I0130 05:03:10.177065 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:10.180214 containerd[1585]: time="2025-01-30T05:03:10.180175469Z" level=info msg="StopPodSandbox for \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\"" Jan 30 05:03:10.180665 
containerd[1585]: time="2025-01-30T05:03:10.180620721Z" level=info msg="Ensure that sandbox cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d in task-service has been cleanup successfully" Jan 30 05:03:10.305142 containerd[1585]: time="2025-01-30T05:03:10.303062634Z" level=error msg="StopPodSandbox for \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\" failed" error="failed to destroy network for sandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:10.307381 containerd[1585]: time="2025-01-30T05:03:10.303678209Z" level=error msg="StopPodSandbox for \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\" failed" error="failed to destroy network for sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:10.309276 kubelet[2742]: E0130 05:03:10.308899 2742 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:10.309276 kubelet[2742]: E0130 05:03:10.308987 2742 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185"} Jan 30 05:03:10.309276 kubelet[2742]: E0130 05:03:10.309064 2742 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9db393b-7bc7-4843-b161-dce7fa134a05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:03:10.309276 kubelet[2742]: E0130 05:03:10.309097 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9db393b-7bc7-4843-b161-dce7fa134a05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bbbc46978-rxbqx" podUID="c9db393b-7bc7-4843-b161-dce7fa134a05" Jan 30 05:03:10.310588 kubelet[2742]: E0130 05:03:10.308899 2742 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:10.310588 kubelet[2742]: E0130 05:03:10.309135 2742 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf"} Jan 30 05:03:10.310588 kubelet[2742]: E0130 05:03:10.309163 2742 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a048fe9b-2075-4d81-9452-b1dc14c3972a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:03:10.310588 kubelet[2742]: E0130 05:03:10.309190 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a048fe9b-2075-4d81-9452-b1dc14c3972a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bhbgz" podUID="a048fe9b-2075-4d81-9452-b1dc14c3972a" Jan 30 05:03:10.320594 containerd[1585]: time="2025-01-30T05:03:10.320451969Z" level=error msg="StopPodSandbox for \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\" failed" error="failed to destroy network for sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:10.320796 kubelet[2742]: E0130 05:03:10.320725 2742 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:10.320912 kubelet[2742]: E0130 05:03:10.320789 2742 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a"} Jan 30 05:03:10.320912 kubelet[2742]: E0130 05:03:10.320834 2742 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"080595a4-3a66-4a46-973a-099bfefc2f67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:03:10.320912 kubelet[2742]: E0130 05:03:10.320868 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"080595a4-3a66-4a46-973a-099bfefc2f67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bbbc46978-k5pj5" podUID="080595a4-3a66-4a46-973a-099bfefc2f67" Jan 30 05:03:10.322594 containerd[1585]: time="2025-01-30T05:03:10.321232810Z" level=error msg="StopPodSandbox for \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\" failed" error="failed to destroy network for sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:10.322711 kubelet[2742]: E0130 05:03:10.321706 2742 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:10.322711 kubelet[2742]: E0130 05:03:10.321766 2742 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af"} Jan 30 05:03:10.322711 kubelet[2742]: E0130 05:03:10.321816 2742 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eaf40084-ae75-4229-851b-dca331cd774c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:03:10.322711 kubelet[2742]: E0130 05:03:10.321854 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eaf40084-ae75-4229-851b-dca331cd774c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vz995" podUID="eaf40084-ae75-4229-851b-dca331cd774c" Jan 30 05:03:10.326653 containerd[1585]: time="2025-01-30T05:03:10.326588111Z" level=error msg="StopPodSandbox for \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\" failed" error="failed to destroy network for sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:10.327228 kubelet[2742]: E0130 05:03:10.326860 2742 remote_runtime.go:222] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:10.327228 kubelet[2742]: E0130 05:03:10.326917 2742 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6"} Jan 30 05:03:10.327228 kubelet[2742]: E0130 05:03:10.326966 2742 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"160f5477-6b56-47a9-a6b1-0ce2a996310c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:03:10.327228 kubelet[2742]: E0130 05:03:10.326998 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"160f5477-6b56-47a9-a6b1-0ce2a996310c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764f56cffb-268h9" podUID="160f5477-6b56-47a9-a6b1-0ce2a996310c" Jan 30 05:03:10.333027 containerd[1585]: time="2025-01-30T05:03:10.332950471Z" level=error msg="StopPodSandbox for \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\" failed" error="failed to destroy network for sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 05:03:10.333488 kubelet[2742]: E0130 05:03:10.333232 2742 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:10.333488 kubelet[2742]: E0130 05:03:10.333293 2742 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d"} Jan 30 05:03:10.333488 kubelet[2742]: E0130 05:03:10.333335 2742 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"38e6d30e-c18e-4b00-bba1-7a7a43ab759a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 05:03:10.333488 kubelet[2742]: E0130 05:03:10.333368 2742 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"38e6d30e-c18e-4b00-bba1-7a7a43ab759a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lc884" podUID="38e6d30e-c18e-4b00-bba1-7a7a43ab759a" Jan 30 05:03:15.404527 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:03:15.401788 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:03:15.401932 systemd-resolved[1473]: Flushed all caches. Jan 30 05:03:15.581026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1119315817.mount: Deactivated successfully. Jan 30 05:03:15.694066 containerd[1585]: time="2025-01-30T05:03:15.678059963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 05:03:15.713533 containerd[1585]: time="2025-01-30T05:03:15.713369757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:15.739610 containerd[1585]: time="2025-01-30T05:03:15.739360556Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:15.741210 containerd[1585]: time="2025-01-30T05:03:15.740947416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.600213801s" Jan 30 05:03:15.741210 containerd[1585]: time="2025-01-30T05:03:15.741020169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 05:03:15.742387 containerd[1585]: time="2025-01-30T05:03:15.742253552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:15.819331 containerd[1585]: time="2025-01-30T05:03:15.819144088Z" level=info msg="CreateContainer within sandbox \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 05:03:15.938123 containerd[1585]: time="2025-01-30T05:03:15.938020261Z" level=info msg="CreateContainer within sandbox \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\"" Jan 30 05:03:15.939014 containerd[1585]: time="2025-01-30T05:03:15.938925162Z" level=info msg="StartContainer for 
\"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\"" Jan 30 05:03:16.109288 containerd[1585]: time="2025-01-30T05:03:16.109180374Z" level=info msg="StartContainer for \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\" returns successfully" Jan 30 05:03:16.213606 kubelet[2742]: I0130 05:03:16.213042 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:03:16.217700 kubelet[2742]: E0130 05:03:16.216998 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:16.233605 kubelet[2742]: E0130 05:03:16.233392 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:16.243745 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 05:03:16.244699 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 05:03:16.302540 kubelet[2742]: E0130 05:03:16.302496 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:16.422046 kubelet[2742]: I0130 05:03:16.408248 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tnnm7" podStartSLOduration=1.787065815 podStartE2EDuration="19.403721457s" podCreationTimestamp="2025-01-30 05:02:57 +0000 UTC" firstStartedPulling="2025-01-30 05:02:58.126141684 +0000 UTC m=+23.435017509" lastFinishedPulling="2025-01-30 05:03:15.742797324 +0000 UTC m=+41.051673151" observedRunningTime="2025-01-30 05:03:16.380210171 +0000 UTC m=+41.689086016" watchObservedRunningTime="2025-01-30 05:03:16.403721457 +0000 UTC m=+41.712597307" Jan 30 05:03:17.027310 systemd[1]: Started sshd@8-137.184.120.173:22-147.75.109.163:41150.service - OpenSSH per-connection server daemon (147.75.109.163:41150). Jan 30 05:03:17.150220 sshd[3900]: Accepted publickey for core from 147.75.109.163 port 41150 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:17.154501 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:17.165609 systemd-logind[1561]: New session 8 of user core. Jan 30 05:03:17.177631 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 05:03:17.239583 kubelet[2742]: E0130 05:03:17.239141 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:17.451397 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:03:17.449138 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:03:17.449150 systemd-resolved[1473]: Flushed all caches. Jan 30 05:03:17.471923 sshd[3900]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:17.477451 systemd[1]: sshd@8-137.184.120.173:22-147.75.109.163:41150.service: Deactivated successfully. Jan 30 05:03:17.484860 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 05:03:17.486742 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Jan 30 05:03:17.487880 systemd-logind[1561]: Removed session 8. 
Jan 30 05:03:18.248974 kubelet[2742]: E0130 05:03:18.248930 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:18.715734 kernel: bpftool[4090]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 05:03:19.070423 systemd-networkd[1224]: vxlan.calico: Link UP Jan 30 05:03:19.070434 systemd-networkd[1224]: vxlan.calico: Gained carrier Jan 30 05:03:20.457026 systemd-networkd[1224]: vxlan.calico: Gained IPv6LL Jan 30 05:03:20.850699 containerd[1585]: time="2025-01-30T05:03:20.849710377Z" level=info msg="StopPodSandbox for \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\"" Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:20.986 [INFO][4176] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:20.986 [INFO][4176] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" iface="eth0" netns="/var/run/netns/cni-e72f928d-b234-5355-be65-6c463650745a" Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:20.987 [INFO][4176] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" iface="eth0" netns="/var/run/netns/cni-e72f928d-b234-5355-be65-6c463650745a" Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:20.988 [INFO][4176] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" iface="eth0" netns="/var/run/netns/cni-e72f928d-b234-5355-be65-6c463650745a" Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:20.988 [INFO][4176] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:20.988 [INFO][4176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:21.409 [INFO][4183] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" HandleID="k8s-pod-network.633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:21.413 [INFO][4183] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:21.414 [INFO][4183] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:21.430 [WARNING][4183] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" HandleID="k8s-pod-network.633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:21.430 [INFO][4183] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" HandleID="k8s-pod-network.633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:21.433 [INFO][4183] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:21.439058 containerd[1585]: 2025-01-30 05:03:21.436 [INFO][4176] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:21.441146 containerd[1585]: time="2025-01-30T05:03:21.440047862Z" level=info msg="TearDown network for sandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\" successfully" Jan 30 05:03:21.441146 containerd[1585]: time="2025-01-30T05:03:21.440095972Z" level=info msg="StopPodSandbox for \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\" returns successfully" Jan 30 05:03:21.443188 containerd[1585]: time="2025-01-30T05:03:21.442329962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbbc46978-rxbqx,Uid:c9db393b-7bc7-4843-b161-dce7fa134a05,Namespace:calico-apiserver,Attempt:1,}" Jan 30 05:03:21.445799 systemd[1]: run-netns-cni\x2de72f928d\x2db234\x2d5355\x2dbe65\x2d6c463650745a.mount: Deactivated successfully. 
Jan 30 05:03:21.695805 systemd-networkd[1224]: calid027fdc4c6e: Link UP Jan 30 05:03:21.698840 systemd-networkd[1224]: calid027fdc4c6e: Gained carrier Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.539 [INFO][4189] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0 calico-apiserver-7bbbc46978- calico-apiserver c9db393b-7bc7-4843-b161-dce7fa134a05 869 0 2025-01-30 05:02:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bbbc46978 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-d-47de560844 calico-apiserver-7bbbc46978-rxbqx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid027fdc4c6e [] []}} ContainerID="45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-rxbqx" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.539 [INFO][4189] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-rxbqx" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.597 [INFO][4200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" HandleID="k8s-pod-network.45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.619 [INFO][4200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" HandleID="k8s-pod-network.45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265da0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-d-47de560844", "pod":"calico-apiserver-7bbbc46978-rxbqx", "timestamp":"2025-01-30 05:03:21.597841553 +0000 UTC"}, Hostname:"ci-4081.3.0-d-47de560844", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.619 [INFO][4200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.619 [INFO][4200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.619 [INFO][4200] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-d-47de560844' Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.622 [INFO][4200] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.652 [INFO][4200] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-d-47de560844" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.659 [INFO][4200] ipam/ipam.go 489: Trying affinity for 192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.662 [INFO][4200] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.665 [INFO][4200] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.665 [INFO][4200] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.668 [INFO][4200] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.676 [INFO][4200] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.685 [INFO][4200] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.1/26] block=192.168.59.0/26 handle="k8s-pod-network.45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.685 [INFO][4200] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.1/26] handle="k8s-pod-network.45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.685 [INFO][4200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 05:03:21.733990 containerd[1585]: 2025-01-30 05:03:21.685 [INFO][4200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.1/26] IPv6=[] ContainerID="45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" HandleID="k8s-pod-network.45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:21.740228 containerd[1585]: 2025-01-30 05:03:21.690 [INFO][4189] cni-plugin/k8s.go 386: Populated endpoint ContainerID="45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-rxbqx" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0", GenerateName:"calico-apiserver-7bbbc46978-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9db393b-7bc7-4843-b161-dce7fa134a05", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbbc46978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"", Pod:"calico-apiserver-7bbbc46978-rxbqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid027fdc4c6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:21.740228 containerd[1585]: 2025-01-30 05:03:21.690 [INFO][4189] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.1/32] ContainerID="45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-rxbqx" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:21.740228 containerd[1585]: 2025-01-30 05:03:21.690 [INFO][4189] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid027fdc4c6e ContainerID="45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-rxbqx" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:21.740228 containerd[1585]: 2025-01-30 05:03:21.699 [INFO][4189] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-rxbqx" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:21.740228 containerd[1585]: 2025-01-30 05:03:21.699 [INFO][4189] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-rxbqx" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0", GenerateName:"calico-apiserver-7bbbc46978-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9db393b-7bc7-4843-b161-dce7fa134a05", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbbc46978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed", Pod:"calico-apiserver-7bbbc46978-rxbqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid027fdc4c6e", MAC:"de:28:67:fe:a6:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:21.740228 containerd[1585]: 2025-01-30 05:03:21.718 [INFO][4189] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-rxbqx" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:21.793288 containerd[1585]: time="2025-01-30T05:03:21.792115397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:03:21.793288 containerd[1585]: time="2025-01-30T05:03:21.793007592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:03:21.793288 containerd[1585]: time="2025-01-30T05:03:21.793024097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:21.793288 containerd[1585]: time="2025-01-30T05:03:21.793143276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:21.838355 systemd[1]: run-containerd-runc-k8s.io-45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed-runc.IDm211.mount: Deactivated successfully. 
Jan 30 05:03:21.896348 containerd[1585]: time="2025-01-30T05:03:21.896294678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbbc46978-rxbqx,Uid:c9db393b-7bc7-4843-b161-dce7fa134a05,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed\"" Jan 30 05:03:21.899965 containerd[1585]: time="2025-01-30T05:03:21.899490308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 05:03:22.482107 systemd[1]: Started sshd@9-137.184.120.173:22-147.75.109.163:46600.service - OpenSSH per-connection server daemon (147.75.109.163:46600). Jan 30 05:03:22.589062 sshd[4259]: Accepted publickey for core from 147.75.109.163 port 46600 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:22.593054 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:22.601586 systemd-logind[1561]: New session 9 of user core. Jan 30 05:03:22.612165 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 05:03:22.852579 containerd[1585]: time="2025-01-30T05:03:22.852232435Z" level=info msg="StopPodSandbox for \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\"" Jan 30 05:03:23.024050 sshd[4259]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:23.036268 systemd[1]: sshd@9-137.184.120.173:22-147.75.109.163:46600.service: Deactivated successfully. Jan 30 05:03:23.053997 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 05:03:23.057452 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Jan 30 05:03:23.063432 systemd-logind[1561]: Removed session 9. Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.053 [INFO][4286] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.053 [INFO][4286] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" iface="eth0" netns="/var/run/netns/cni-14e7ebf0-8166-8dd4-b6d1-2fa44697f5c8" Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.056 [INFO][4286] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" iface="eth0" netns="/var/run/netns/cni-14e7ebf0-8166-8dd4-b6d1-2fa44697f5c8" Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.062 [INFO][4286] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" iface="eth0" netns="/var/run/netns/cni-14e7ebf0-8166-8dd4-b6d1-2fa44697f5c8" Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.063 [INFO][4286] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.063 [INFO][4286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.138 [INFO][4295] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" HandleID="k8s-pod-network.c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.139 [INFO][4295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.139 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.148 [WARNING][4295] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" HandleID="k8s-pod-network.c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.148 [INFO][4295] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" HandleID="k8s-pod-network.c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.150 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:23.156259 containerd[1585]: 2025-01-30 05:03:23.153 [INFO][4286] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:23.159786 containerd[1585]: time="2025-01-30T05:03:23.157261390Z" level=info msg="TearDown network for sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\" successfully" Jan 30 05:03:23.159786 containerd[1585]: time="2025-01-30T05:03:23.157299491Z" level=info msg="StopPodSandbox for \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\" returns successfully" Jan 30 05:03:23.163188 systemd[1]: run-netns-cni\x2d14e7ebf0\x2d8166\x2d8dd4\x2db6d1\x2d2fa44697f5c8.mount: Deactivated successfully. 
Jan 30 05:03:23.165700 containerd[1585]: time="2025-01-30T05:03:23.165638284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764f56cffb-268h9,Uid:160f5477-6b56-47a9-a6b1-0ce2a996310c,Namespace:calico-system,Attempt:1,}" Jan 30 05:03:23.407420 systemd-networkd[1224]: cali9968ff99b0b: Link UP Jan 30 05:03:23.413880 systemd-networkd[1224]: cali9968ff99b0b: Gained carrier Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.260 [INFO][4303] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0 calico-kube-controllers-764f56cffb- calico-system 160f5477-6b56-47a9-a6b1-0ce2a996310c 890 0 2025-01-30 05:02:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:764f56cffb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-d-47de560844 calico-kube-controllers-764f56cffb-268h9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9968ff99b0b [] []}} ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Namespace="calico-system" Pod="calico-kube-controllers-764f56cffb-268h9" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.260 [INFO][4303] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Namespace="calico-system" Pod="calico-kube-controllers-764f56cffb-268h9" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.320 [INFO][4313] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.338 [INFO][4313] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-d-47de560844", "pod":"calico-kube-controllers-764f56cffb-268h9", "timestamp":"2025-01-30 05:03:23.320180841 +0000 UTC"}, Hostname:"ci-4081.3.0-d-47de560844", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.338 [INFO][4313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.339 [INFO][4313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.339 [INFO][4313] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-d-47de560844' Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.343 [INFO][4313] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.352 [INFO][4313] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-d-47de560844" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.362 [INFO][4313] ipam/ipam.go 489: Trying affinity for 192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.365 [INFO][4313] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.372 [INFO][4313] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.372 [INFO][4313] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.375 [INFO][4313] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.382 [INFO][4313] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.396 [INFO][4313] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.2/26] block=192.168.59.0/26 handle="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.396 [INFO][4313] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.2/26] handle="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.396 [INFO][4313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 05:03:23.449076 containerd[1585]: 2025-01-30 05:03:23.396 [INFO][4313] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.2/26] IPv6=[] ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:23.450869 containerd[1585]: 2025-01-30 05:03:23.400 [INFO][4303] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Namespace="calico-system" Pod="calico-kube-controllers-764f56cffb-268h9" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0", GenerateName:"calico-kube-controllers-764f56cffb-", Namespace:"calico-system", SelfLink:"", UID:"160f5477-6b56-47a9-a6b1-0ce2a996310c", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764f56cffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"", Pod:"calico-kube-controllers-764f56cffb-268h9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9968ff99b0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:23.450869 containerd[1585]: 2025-01-30 05:03:23.400 [INFO][4303] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.2/32] ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Namespace="calico-system" Pod="calico-kube-controllers-764f56cffb-268h9" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:23.450869 containerd[1585]: 2025-01-30 05:03:23.400 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9968ff99b0b ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Namespace="calico-system" Pod="calico-kube-controllers-764f56cffb-268h9" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:23.450869 containerd[1585]: 2025-01-30 05:03:23.414 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Namespace="calico-system" Pod="calico-kube-controllers-764f56cffb-268h9" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:23.450869 
containerd[1585]: 2025-01-30 05:03:23.416 [INFO][4303] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Namespace="calico-system" Pod="calico-kube-controllers-764f56cffb-268h9" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0", GenerateName:"calico-kube-controllers-764f56cffb-", Namespace:"calico-system", SelfLink:"", UID:"160f5477-6b56-47a9-a6b1-0ce2a996310c", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764f56cffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e", Pod:"calico-kube-controllers-764f56cffb-268h9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9968ff99b0b", MAC:"be:b1:4d:0a:18:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:23.450869 containerd[1585]: 2025-01-30 05:03:23.440 [INFO][4303] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Namespace="calico-system" Pod="calico-kube-controllers-764f56cffb-268h9" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:23.583968 containerd[1585]: time="2025-01-30T05:03:23.565295225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:03:23.583968 containerd[1585]: time="2025-01-30T05:03:23.565365041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:03:23.583968 containerd[1585]: time="2025-01-30T05:03:23.565388112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:23.583968 containerd[1585]: time="2025-01-30T05:03:23.565531425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:23.594427 systemd-networkd[1224]: calid027fdc4c6e: Gained IPv6LL Jan 30 05:03:23.694458 containerd[1585]: time="2025-01-30T05:03:23.693782297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764f56cffb-268h9,Uid:160f5477-6b56-47a9-a6b1-0ce2a996310c,Namespace:calico-system,Attempt:1,} returns sandbox id \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\"" Jan 30 05:03:23.851309 containerd[1585]: time="2025-01-30T05:03:23.851228533Z" level=info msg="StopPodSandbox for \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\"" Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:23.955 [INFO][4389] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:23.956 [INFO][4389] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" iface="eth0" netns="/var/run/netns/cni-2c85756d-95df-a0dc-cd28-fd4d201cee0a" Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:23.957 [INFO][4389] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" iface="eth0" netns="/var/run/netns/cni-2c85756d-95df-a0dc-cd28-fd4d201cee0a" Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:23.958 [INFO][4389] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" iface="eth0" netns="/var/run/netns/cni-2c85756d-95df-a0dc-cd28-fd4d201cee0a" Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:23.958 [INFO][4389] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:23.958 [INFO][4389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:24.034 [INFO][4396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" HandleID="k8s-pod-network.cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:24.034 [INFO][4396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:24.034 [INFO][4396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:24.044 [WARNING][4396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" HandleID="k8s-pod-network.cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:24.044 [INFO][4396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" HandleID="k8s-pod-network.cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:24.047 [INFO][4396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:24.057411 containerd[1585]: 2025-01-30 05:03:24.053 [INFO][4389] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:24.058205 containerd[1585]: time="2025-01-30T05:03:24.057674866Z" level=info msg="TearDown network for sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\" successfully" Jan 30 05:03:24.059629 containerd[1585]: time="2025-01-30T05:03:24.058947115Z" level=info msg="StopPodSandbox for \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\" returns successfully" Jan 30 05:03:24.060218 kubelet[2742]: E0130 05:03:24.059752 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:24.064579 containerd[1585]: time="2025-01-30T05:03:24.064140445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lc884,Uid:38e6d30e-c18e-4b00-bba1-7a7a43ab759a,Namespace:kube-system,Attempt:1,}" Jan 30 05:03:24.177332 systemd[1]: run-netns-cni\x2d2c85756d\x2d95df\x2da0dc\x2dcd28\x2dfd4d201cee0a.mount: Deactivated successfully. 
Jan 30 05:03:24.422781 systemd-networkd[1224]: cali950f8c7e5d9: Link UP Jan 30 05:03:24.423777 systemd-networkd[1224]: cali950f8c7e5d9: Gained carrier Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.256 [INFO][4402] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0 coredns-7db6d8ff4d- kube-system 38e6d30e-c18e-4b00-bba1-7a7a43ab759a 898 0 2025-01-30 05:02:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-d-47de560844 coredns-7db6d8ff4d-lc884 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali950f8c7e5d9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lc884" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.257 [INFO][4402] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lc884" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.330 [INFO][4413] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" HandleID="k8s-pod-network.05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.345 [INFO][4413] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" HandleID="k8s-pod-network.05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003fc590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-d-47de560844", "pod":"coredns-7db6d8ff4d-lc884", "timestamp":"2025-01-30 05:03:24.330272767 +0000 UTC"}, Hostname:"ci-4081.3.0-d-47de560844", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.346 [INFO][4413] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.346 [INFO][4413] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.346 [INFO][4413] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-d-47de560844' Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.351 [INFO][4413] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.363 [INFO][4413] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-d-47de560844" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.371 [INFO][4413] ipam/ipam.go 489: Trying affinity for 192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.377 [INFO][4413] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.381 [INFO][4413] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.381 [INFO][4413] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.387 [INFO][4413] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55 Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.397 [INFO][4413] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.412 [INFO][4413] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.3/26] block=192.168.59.0/26 handle="k8s-pod-network.05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.412 [INFO][4413] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.3/26] handle="k8s-pod-network.05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.412 [INFO][4413] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 05:03:24.451289 containerd[1585]: 2025-01-30 05:03:24.412 [INFO][4413] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.3/26] IPv6=[] ContainerID="05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" HandleID="k8s-pod-network.05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:24.452848 containerd[1585]: 2025-01-30 05:03:24.415 [INFO][4402] cni-plugin/k8s.go 386: Populated endpoint ContainerID="05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lc884" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"38e6d30e-c18e-4b00-bba1-7a7a43ab759a", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"", Pod:"coredns-7db6d8ff4d-lc884", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali950f8c7e5d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:24.452848 containerd[1585]: 2025-01-30 05:03:24.415 [INFO][4402] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.3/32] ContainerID="05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lc884" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:24.452848 containerd[1585]: 2025-01-30 05:03:24.415 [INFO][4402] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali950f8c7e5d9 ContainerID="05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lc884" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:24.452848 containerd[1585]: 2025-01-30 05:03:24.424 [INFO][4402] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lc884" 
WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:24.452848 containerd[1585]: 2025-01-30 05:03:24.426 [INFO][4402] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lc884" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"38e6d30e-c18e-4b00-bba1-7a7a43ab759a", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55", Pod:"coredns-7db6d8ff4d-lc884", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali950f8c7e5d9", MAC:"86:6c:a5:db:65:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:24.452848 containerd[1585]: 2025-01-30 05:03:24.447 [INFO][4402] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lc884" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:24.550597 containerd[1585]: time="2025-01-30T05:03:24.547727686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:03:24.550597 containerd[1585]: time="2025-01-30T05:03:24.547821349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:03:24.550597 containerd[1585]: time="2025-01-30T05:03:24.547866205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:24.550597 containerd[1585]: time="2025-01-30T05:03:24.548076414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:24.683197 containerd[1585]: time="2025-01-30T05:03:24.682157151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lc884,Uid:38e6d30e-c18e-4b00-bba1-7a7a43ab759a,Namespace:kube-system,Attempt:1,} returns sandbox id \"05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55\"" Jan 30 05:03:24.685768 kubelet[2742]: E0130 05:03:24.685417 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:24.695006 containerd[1585]: time="2025-01-30T05:03:24.694945855Z" level=info msg="CreateContainer within sandbox \"05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 05:03:24.741373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392405619.mount: Deactivated successfully. Jan 30 05:03:24.758138 containerd[1585]: time="2025-01-30T05:03:24.758074334Z" level=info msg="CreateContainer within sandbox \"05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae46b63688f3e1e7bde9111a11cd501bc0da4be71487f0926d4fbf3dfefa16d5\"" Jan 30 05:03:24.761389 containerd[1585]: time="2025-01-30T05:03:24.761278922Z" level=info msg="StartContainer for \"ae46b63688f3e1e7bde9111a11cd501bc0da4be71487f0926d4fbf3dfefa16d5\"" Jan 30 05:03:24.881197 containerd[1585]: time="2025-01-30T05:03:24.881029920Z" level=info msg="StartContainer for \"ae46b63688f3e1e7bde9111a11cd501bc0da4be71487f0926d4fbf3dfefa16d5\" returns successfully" Jan 30 05:03:25.128898 systemd-networkd[1224]: cali9968ff99b0b: Gained IPv6LL Jan 30 05:03:25.166168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571647844.mount: Deactivated successfully. 
Jan 30 05:03:25.292497 kubelet[2742]: E0130 05:03:25.292419 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:25.296635 containerd[1585]: time="2025-01-30T05:03:25.295327713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:25.303128 containerd[1585]: time="2025-01-30T05:03:25.302454243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 05:03:25.308310 containerd[1585]: time="2025-01-30T05:03:25.308185058Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:25.315548 containerd[1585]: time="2025-01-30T05:03:25.315487524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:25.317822 containerd[1585]: time="2025-01-30T05:03:25.317303478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.417742656s" Jan 30 05:03:25.317822 containerd[1585]: time="2025-01-30T05:03:25.317363131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 05:03:25.322350 containerd[1585]: time="2025-01-30T05:03:25.320736437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 05:03:25.324933 containerd[1585]: time="2025-01-30T05:03:25.323783449Z" level=info msg="CreateContainer within sandbox \"45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 05:03:25.377573 containerd[1585]: time="2025-01-30T05:03:25.376169544Z" level=info msg="CreateContainer within sandbox \"45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b9254d033bb09578fb11d6d95bea7de44a961209585ab63b7800b0a7f98a3e85\"" Jan 30 05:03:25.376512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2710365172.mount: Deactivated successfully. 
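The CreateContainer/StartContainer pairs above are issued by the kubelet over CRI against containerd's sandboxes. As a point of reference, the same create-then-start sequence can be driven directly with containerd's Go client; this is a minimal sketch, not the CRI path, and the image reference and container ID are placeholders (the log above pulled ghcr.io/flatcar/calico/apiserver:v3.29.1).

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Placeholder image reference for the sketch.
	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}

	container, err := client.NewContainer(ctx, "example-container",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("example-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		panic(err)
	}

	// "StartContainer ... returns successfully" corresponds to creating and
	// starting a task inside the container. Cleanup is omitted for brevity.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		panic(err)
	}
	if err := task.Start(ctx); err != nil {
		panic(err)
	}
	fmt.Println("started task in container", container.ID())
}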
Jan 30 05:03:25.380636 containerd[1585]: time="2025-01-30T05:03:25.378503986Z" level=info msg="StartContainer for \"b9254d033bb09578fb11d6d95bea7de44a961209585ab63b7800b0a7f98a3e85\"" Jan 30 05:03:25.492531 containerd[1585]: time="2025-01-30T05:03:25.492476253Z" level=info msg="StartContainer for \"b9254d033bb09578fb11d6d95bea7de44a961209585ab63b7800b0a7f98a3e85\" returns successfully" Jan 30 05:03:25.849554 containerd[1585]: time="2025-01-30T05:03:25.849127632Z" level=info msg="StopPodSandbox for \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\"" Jan 30 05:03:25.856010 containerd[1585]: time="2025-01-30T05:03:25.853805133Z" level=info msg="StopPodSandbox for \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\"" Jan 30 05:03:25.861636 containerd[1585]: time="2025-01-30T05:03:25.861551661Z" level=info msg="StopPodSandbox for \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\"" Jan 30 05:03:26.030600 kubelet[2742]: I0130 05:03:26.029038 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lc884" podStartSLOduration=36.02899729 podStartE2EDuration="36.02899729s" podCreationTimestamp="2025-01-30 05:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:03:25.338503453 +0000 UTC m=+50.647379297" watchObservedRunningTime="2025-01-30 05:03:26.02899729 +0000 UTC m=+51.337873135" Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.031 [INFO][4603] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.031 [INFO][4603] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" iface="eth0" netns="/var/run/netns/cni-3eb0d4b6-be74-6433-39a7-c13632bc203f" Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.032 [INFO][4603] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" iface="eth0" netns="/var/run/netns/cni-3eb0d4b6-be74-6433-39a7-c13632bc203f" Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.036 [INFO][4603] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" iface="eth0" netns="/var/run/netns/cni-3eb0d4b6-be74-6433-39a7-c13632bc203f" Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.036 [INFO][4603] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.036 [INFO][4603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.128 [INFO][4620] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" HandleID="k8s-pod-network.3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.128 [INFO][4620] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.135 [INFO][4620] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.151 [WARNING][4620] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" HandleID="k8s-pod-network.3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.151 [INFO][4620] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" HandleID="k8s-pod-network.3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.156 [INFO][4620] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:26.169349 containerd[1585]: 2025-01-30 05:03:26.163 [INFO][4603] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:26.178446 containerd[1585]: time="2025-01-30T05:03:26.178232858Z" level=info msg="TearDown network for sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\" successfully" Jan 30 05:03:26.178446 containerd[1585]: time="2025-01-30T05:03:26.178328408Z" level=info msg="StopPodSandbox for \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\" returns successfully" Jan 30 05:03:26.181975 kubelet[2742]: E0130 05:03:26.181938 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:26.185321 systemd[1]: run-netns-cni\x2d3eb0d4b6\x2dbe74\x2d6433\x2d39a7\x2dc13632bc203f.mount: Deactivated successfully. 
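The teardown above (clean up netns, release IP addresses, ignore an address that was already gone, then remove the run-netns mount) is Calico handling a CNI DEL for the stopped sandbox. A minimal sketch of how a runtime issues that DEL through the reference libcni library follows; the plugin and config paths are the usual defaults and may differ on this host, while the container ID and netns path are copied from the log entries above.

package main

import (
	"context"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Default plugin/config locations; adjust per distro.
	cniConfig := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

	netconf, err := libcni.ConfListFromFile("/etc/cni/net.d/10-calico.conflist")
	if err != nil {
		panic(err)
	}

	rt := &libcni.RuntimeConf{
		ContainerID: "3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af",
		NetNS:       "/var/run/netns/cni-3eb0d4b6-be74-6433-39a7-c13632bc203f",
		IfName:      "eth0",
	}

	// DEL must be idempotent: as the WARNING above shows, a plugin asked to
	// release an address that no longer exists simply ignores it.
	if err := cniConfig.DelNetworkList(context.Background(), netconf, rt); err != nil {
		panic(err)
	}
}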
Jan 30 05:03:26.188809 containerd[1585]: time="2025-01-30T05:03:26.188397820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vz995,Uid:eaf40084-ae75-4229-851b-dca331cd774c,Namespace:kube-system,Attempt:1,}" Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.075 [INFO][4594] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.076 [INFO][4594] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" iface="eth0" netns="/var/run/netns/cni-443c4bac-b780-1f53-c576-bd5d4ce9574b" Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.076 [INFO][4594] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" iface="eth0" netns="/var/run/netns/cni-443c4bac-b780-1f53-c576-bd5d4ce9574b" Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.076 [INFO][4594] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" iface="eth0" netns="/var/run/netns/cni-443c4bac-b780-1f53-c576-bd5d4ce9574b" Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.076 [INFO][4594] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.076 [INFO][4594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.200 [INFO][4626] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" HandleID="k8s-pod-network.26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.201 [INFO][4626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.201 [INFO][4626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.215 [WARNING][4626] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" HandleID="k8s-pod-network.26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.215 [INFO][4626] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" HandleID="k8s-pod-network.26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.219 [INFO][4626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:26.226607 containerd[1585]: 2025-01-30 05:03:26.223 [INFO][4594] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:26.233102 containerd[1585]: time="2025-01-30T05:03:26.226753488Z" level=info msg="TearDown network for sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\" successfully" Jan 30 05:03:26.233102 containerd[1585]: time="2025-01-30T05:03:26.226794114Z" level=info msg="StopPodSandbox for \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\" returns successfully" Jan 30 05:03:26.233102 containerd[1585]: time="2025-01-30T05:03:26.231319130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbbc46978-k5pj5,Uid:080595a4-3a66-4a46-973a-099bfefc2f67,Namespace:calico-apiserver,Attempt:1,}" Jan 30 05:03:26.235285 systemd[1]: run-netns-cni\x2d443c4bac\x2db780\x2d1f53\x2dc576\x2dbd5d4ce9574b.mount: Deactivated successfully. Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.071 [INFO][4602] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.074 [INFO][4602] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" iface="eth0" netns="/var/run/netns/cni-7181e77a-6399-b448-eeee-757e09323fff" Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.076 [INFO][4602] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" iface="eth0" netns="/var/run/netns/cni-7181e77a-6399-b448-eeee-757e09323fff" Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.076 [INFO][4602] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" iface="eth0" netns="/var/run/netns/cni-7181e77a-6399-b448-eeee-757e09323fff" Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.076 [INFO][4602] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.076 [INFO][4602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.205 [INFO][4627] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" HandleID="k8s-pod-network.5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.205 [INFO][4627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.219 [INFO][4627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.244 [WARNING][4627] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" HandleID="k8s-pod-network.5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.244 [INFO][4627] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" HandleID="k8s-pod-network.5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.250 [INFO][4627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:26.257377 containerd[1585]: 2025-01-30 05:03:26.254 [INFO][4602] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:26.262411 containerd[1585]: time="2025-01-30T05:03:26.257986295Z" level=info msg="TearDown network for sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\" successfully" Jan 30 05:03:26.262411 containerd[1585]: time="2025-01-30T05:03:26.261867157Z" level=info msg="StopPodSandbox for \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\" returns successfully" Jan 30 05:03:26.266317 containerd[1585]: time="2025-01-30T05:03:26.266165129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhbgz,Uid:a048fe9b-2075-4d81-9452-b1dc14c3972a,Namespace:calico-system,Attempt:1,}" Jan 30 05:03:26.273702 systemd[1]: run-netns-cni\x2d7181e77a\x2d6399\x2db448\x2deeee\x2d757e09323fff.mount: Deactivated successfully. Jan 30 05:03:26.281923 systemd-networkd[1224]: cali950f8c7e5d9: Gained IPv6LL Jan 30 05:03:26.327007 kubelet[2742]: E0130 05:03:26.326737 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:26.396740 kubelet[2742]: I0130 05:03:26.395615 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bbbc46978-rxbqx" podStartSLOduration=24.975210122 podStartE2EDuration="28.395119889s" podCreationTimestamp="2025-01-30 05:02:58 +0000 UTC" firstStartedPulling="2025-01-30 05:03:21.899020558 +0000 UTC m=+47.207896396" lastFinishedPulling="2025-01-30 05:03:25.318930339 +0000 UTC m=+50.627806163" observedRunningTime="2025-01-30 05:03:26.350151857 +0000 UTC m=+51.659027702" watchObservedRunningTime="2025-01-30 05:03:26.395119889 +0000 UTC m=+51.703995734" Jan 30 05:03:26.812983 systemd-networkd[1224]: cali09750c1dbed: Link UP Jan 30 05:03:26.815503 systemd-networkd[1224]: cali09750c1dbed: Gained carrier Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.497 [INFO][4639] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0 coredns-7db6d8ff4d- kube-system eaf40084-ae75-4229-851b-dca331cd774c 922 0 2025-01-30 05:02:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-d-47de560844 coredns-7db6d8ff4d-vz995 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali09750c1dbed [{dns UDP 53 0 } 
{dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vz995" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.498 [INFO][4639] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vz995" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.635 [INFO][4675] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" HandleID="k8s-pod-network.d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.678 [INFO][4675] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" HandleID="k8s-pod-network.d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040bb00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-d-47de560844", "pod":"coredns-7db6d8ff4d-vz995", "timestamp":"2025-01-30 05:03:26.63517986 +0000 UTC"}, Hostname:"ci-4081.3.0-d-47de560844", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.679 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.679 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.679 [INFO][4675] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-d-47de560844' Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.685 [INFO][4675] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.695 [INFO][4675] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-d-47de560844" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.717 [INFO][4675] ipam/ipam.go 489: Trying affinity for 192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.734 [INFO][4675] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.753 [INFO][4675] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.754 [INFO][4675] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.757 [INFO][4675] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4 Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.775 [INFO][4675] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.795 [INFO][4675] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.4/26] block=192.168.59.0/26 handle="k8s-pod-network.d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.795 [INFO][4675] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.4/26] handle="k8s-pod-network.d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.795 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
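The IPAM trace above follows Calico's block-affinity pattern: take the host-wide lock, confirm the node's affinity to the 192.168.59.0/26 block, load the block, claim the first free ordinal (here 192.168.59.4), write the block back, and release the lock. Below is a deliberately toy, in-memory model of that walk; the ipamStore type, its lock, and the map of blocks are illustrative stand-ins, not Calico's ipam package, which additionally handles compare-and-swap conflicts, multiple pools, and new block allocation.

package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	cidr      netip.Prefix // e.g. 192.168.59.0/26
	allocated [64]bool     // one slot per address in the /26
}

type ipamStore struct {
	mu     sync.Mutex // stands in for the "host-wide IPAM lock"
	blocks map[string]*block
}

// nthAddr returns the ordinal-th address inside a small IPv4 block.
func nthAddr(p netip.Prefix, ordinal int) netip.Addr {
	a := p.Addr().As4()
	a[3] += byte(ordinal)
	return netip.AddrFrom4(a)
}

func (s *ipamStore) assignOne(host string) (netip.Addr, error) {
	s.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	blk, ok := s.blocks[host] // affinity lookup + "Attempting to load block"
	if !ok {
		return netip.Addr{}, errors.New("no affine block for host")
	}
	for ordinal, used := range blk.allocated {
		if !used {
			blk.allocated[ordinal] = true // "Writing block in order to claim IPs"
			return nthAddr(blk.cidr, ordinal), nil
		}
	}
	return netip.Addr{}, errors.New("block full")
}

func main() {
	store := &ipamStore{blocks: map[string]*block{
		"ci-4081.3.0-d-47de560844": {cidr: netip.MustParsePrefix("192.168.59.0/26")},
	}}
	blk := store.blocks["ci-4081.3.0-d-47de560844"]
	for i := 0; i < 4; i++ { // .0-.3 were handed out earlier in the log
		blk.allocated[i] = true
	}
	addr, err := store.assignOne("ci-4081.3.0-d-47de560844")
	fmt.Println(addr, err) // 192.168.59.4 <nil>, matching the claim above
}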
Jan 30 05:03:26.872337 containerd[1585]: 2025-01-30 05:03:26.795 [INFO][4675] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.4/26] IPv6=[] ContainerID="d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" HandleID="k8s-pod-network.d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:26.880513 containerd[1585]: 2025-01-30 05:03:26.803 [INFO][4639] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vz995" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eaf40084-ae75-4229-851b-dca331cd774c", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"", Pod:"coredns-7db6d8ff4d-vz995", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09750c1dbed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:26.880513 containerd[1585]: 2025-01-30 05:03:26.805 [INFO][4639] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.4/32] ContainerID="d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vz995" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:26.880513 containerd[1585]: 2025-01-30 05:03:26.805 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09750c1dbed ContainerID="d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vz995" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:26.880513 containerd[1585]: 2025-01-30 05:03:26.817 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vz995" 
WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:26.880513 containerd[1585]: 2025-01-30 05:03:26.820 [INFO][4639] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vz995" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eaf40084-ae75-4229-851b-dca331cd774c", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4", Pod:"coredns-7db6d8ff4d-vz995", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09750c1dbed", MAC:"ae:cd:19:5a:ed:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:26.880513 containerd[1585]: 2025-01-30 05:03:26.849 [INFO][4639] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vz995" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:27.084841 containerd[1585]: time="2025-01-30T05:03:27.076351562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:03:27.084841 containerd[1585]: time="2025-01-30T05:03:27.076444963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:03:27.084841 containerd[1585]: time="2025-01-30T05:03:27.076467136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:27.084841 containerd[1585]: time="2025-01-30T05:03:27.081485540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:27.102020 systemd-networkd[1224]: cali4b7f633d105: Link UP Jan 30 05:03:27.103547 systemd-networkd[1224]: cali4b7f633d105: Gained carrier Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.674 [INFO][4661] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0 csi-node-driver- calico-system a048fe9b-2075-4d81-9452-b1dc14c3972a 923 0 2025-01-30 05:02:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-d-47de560844 csi-node-driver-bhbgz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4b7f633d105 [] []}} ContainerID="1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" Namespace="calico-system" Pod="csi-node-driver-bhbgz" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.674 [INFO][4661] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" Namespace="calico-system" Pod="csi-node-driver-bhbgz" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.892 [INFO][4690] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" HandleID="k8s-pod-network.1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.924 [INFO][4690] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" HandleID="k8s-pod-network.1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291f50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-d-47de560844", "pod":"csi-node-driver-bhbgz", "timestamp":"2025-01-30 05:03:26.892484366 +0000 UTC"}, Hostname:"ci-4081.3.0-d-47de560844", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.925 [INFO][4690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.925 [INFO][4690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.925 [INFO][4690] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-d-47de560844' Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.935 [INFO][4690] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.951 [INFO][4690] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.969 [INFO][4690] ipam/ipam.go 489: Trying affinity for 192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.975 [INFO][4690] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.981 [INFO][4690] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.982 [INFO][4690] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.985 [INFO][4690] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700 Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:26.998 [INFO][4690] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:27.035 [INFO][4690] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.5/26] block=192.168.59.0/26 handle="k8s-pod-network.1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:27.035 [INFO][4690] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.5/26] handle="k8s-pod-network.1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:27.040 [INFO][4690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
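All pods in this section draw sequentially from the same affine /26 (.3, .4, .5, .6). A two-line check of block membership and ordinal with the standard library, using the address just claimed for csi-node-driver-bhbgz:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.59.0/26") // 64 addresses per Calico block
	addr := netip.MustParseAddr("192.168.59.5")       // claimed above for csi-node-driver-bhbgz

	ordinal := int(addr.As4()[3]) - int(block.Addr().As4()[3])
	fmt.Println(block.Contains(addr), ordinal) // true 5
}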
Jan 30 05:03:27.173783 containerd[1585]: 2025-01-30 05:03:27.040 [INFO][4690] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.5/26] IPv6=[] ContainerID="1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" HandleID="k8s-pod-network.1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:27.177759 containerd[1585]: 2025-01-30 05:03:27.091 [INFO][4661] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" Namespace="calico-system" Pod="csi-node-driver-bhbgz" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a048fe9b-2075-4d81-9452-b1dc14c3972a", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"", Pod:"csi-node-driver-bhbgz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b7f633d105", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:27.177759 containerd[1585]: 2025-01-30 05:03:27.092 [INFO][4661] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.5/32] ContainerID="1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" Namespace="calico-system" Pod="csi-node-driver-bhbgz" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:27.177759 containerd[1585]: 2025-01-30 05:03:27.092 [INFO][4661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b7f633d105 ContainerID="1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" Namespace="calico-system" Pod="csi-node-driver-bhbgz" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:27.177759 containerd[1585]: 2025-01-30 05:03:27.104 [INFO][4661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" Namespace="calico-system" Pod="csi-node-driver-bhbgz" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:27.177759 containerd[1585]: 2025-01-30 05:03:27.119 [INFO][4661] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" 
Namespace="calico-system" Pod="csi-node-driver-bhbgz" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a048fe9b-2075-4d81-9452-b1dc14c3972a", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700", Pod:"csi-node-driver-bhbgz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b7f633d105", MAC:"42:16:25:2c:4e:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:27.177759 containerd[1585]: 2025-01-30 05:03:27.154 [INFO][4661] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700" Namespace="calico-system" Pod="csi-node-driver-bhbgz" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:27.217821 systemd[1]: run-containerd-runc-k8s.io-d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4-runc.TEeXvB.mount: Deactivated successfully. 
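Transient mount unit names such as run-netns-cni\x2d3eb0d4b6\x2d….mount and var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount in the journal above follow systemd's path escaping: "/" separators become "-", and bytes outside the safe set (including literal "-") are encoded as \xXX. The sketch below approximates that rule for the common cases; systemd-escape(1) is the authoritative implementation and handles additional corner cases (leading dots, empty paths).

package main

import (
	"fmt"
	"strings"
)

// escapePath roughly mimics `systemd-escape --path`: strip outer slashes,
// hex-escape bytes not in [A-Za-z0-9:_.], then turn "/" into "-".
// Simplified sketch, not a drop-in replacement.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escapePath("/run/netns/cni-3eb0d4b6-be74-6433-39a7-c13632bc203f") + ".mount")
	// run-netns-cni\x2d3eb0d4b6\x2dbe74\x2d6433\x2d39a7\x2dc13632bc203f.mount
}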
Jan 30 05:03:27.238505 systemd-networkd[1224]: cali717d0fd741e: Link UP Jan 30 05:03:27.245409 systemd-networkd[1224]: cali717d0fd741e: Gained carrier Jan 30 05:03:27.348428 kubelet[2742]: I0130 05:03:27.345731 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:26.627 [INFO][4650] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0 calico-apiserver-7bbbc46978- calico-apiserver 080595a4-3a66-4a46-973a-099bfefc2f67 924 0 2025-01-30 05:02:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bbbc46978 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-d-47de560844 calico-apiserver-7bbbc46978-k5pj5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali717d0fd741e [] []}} ContainerID="fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-k5pj5" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:26.627 [INFO][4650] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-k5pj5" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:26.902 [INFO][4685] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" HandleID="k8s-pod-network.fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:26.958 [INFO][4685] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" HandleID="k8s-pod-network.fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-d-47de560844", "pod":"calico-apiserver-7bbbc46978-k5pj5", "timestamp":"2025-01-30 05:03:26.902230282 +0000 UTC"}, Hostname:"ci-4081.3.0-d-47de560844", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:26.958 [INFO][4685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.037 [INFO][4685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
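systemd-networkd reports each cali* host-side veth first gaining carrier ("Link UP" / "Gained carrier") and, shortly after, an IPv6 link-local address ("Gained IPv6LL"). The same state can be inspected from userspace with the standard library; the interface name below is taken from the log and would differ on another node.

package main

import (
	"fmt"
	"net"
)

func main() {
	ifi, err := net.InterfaceByName("cali717d0fd741e") // host-side veth from the log above
	if err != nil {
		panic(err)
	}
	fmt.Println("up:", ifi.Flags&net.FlagUp != 0)

	addrs, err := ifi.Addrs()
	if err != nil {
		panic(err)
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() == nil && ipnet.IP.IsLinkLocalUnicast() {
			fmt.Println("IPv6 link-local:", ipnet.IP) // what "Gained IPv6LL" refers to
		}
	}
}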
Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.037 [INFO][4685] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-d-47de560844' Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.047 [INFO][4685] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.070 [INFO][4685] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.092 [INFO][4685] ipam/ipam.go 489: Trying affinity for 192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.105 [INFO][4685] ipam/ipam.go 155: Attempting to load block cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.132 [INFO][4685] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.132 [INFO][4685] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.141 [INFO][4685] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.152 [INFO][4685] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.178 [INFO][4685] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.59.6/26] block=192.168.59.0/26 handle="k8s-pod-network.fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.178 [INFO][4685] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.59.6/26] handle="k8s-pod-network.fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" host="ci-4081.3.0-d-47de560844" Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.178 [INFO][4685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 05:03:27.349141 containerd[1585]: 2025-01-30 05:03:27.178 [INFO][4685] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.6/26] IPv6=[] ContainerID="fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" HandleID="k8s-pod-network.fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:27.350045 containerd[1585]: 2025-01-30 05:03:27.225 [INFO][4650] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-k5pj5" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0", GenerateName:"calico-apiserver-7bbbc46978-", Namespace:"calico-apiserver", SelfLink:"", UID:"080595a4-3a66-4a46-973a-099bfefc2f67", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbbc46978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"", Pod:"calico-apiserver-7bbbc46978-k5pj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali717d0fd741e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:27.350045 containerd[1585]: 2025-01-30 05:03:27.226 [INFO][4650] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.59.6/32] ContainerID="fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-k5pj5" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:27.350045 containerd[1585]: 2025-01-30 05:03:27.226 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali717d0fd741e ContainerID="fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-k5pj5" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:27.350045 containerd[1585]: 2025-01-30 05:03:27.255 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-k5pj5" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:27.350045 containerd[1585]: 2025-01-30 05:03:27.263 [INFO][4650] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-k5pj5" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0", GenerateName:"calico-apiserver-7bbbc46978-", Namespace:"calico-apiserver", SelfLink:"", UID:"080595a4-3a66-4a46-973a-099bfefc2f67", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbbc46978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a", Pod:"calico-apiserver-7bbbc46978-k5pj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali717d0fd741e", MAC:"0e:ba:ce:b7:0b:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:27.350045 containerd[1585]: 2025-01-30 05:03:27.310 [INFO][4650] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbc46978-k5pj5" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:27.350771 kubelet[2742]: E0130 05:03:27.350733 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:27.437714 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:03:27.432889 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:03:27.438462 containerd[1585]: time="2025-01-30T05:03:27.410406119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:03:27.438462 containerd[1585]: time="2025-01-30T05:03:27.410484082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:03:27.438462 containerd[1585]: time="2025-01-30T05:03:27.410506184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:27.438462 containerd[1585]: time="2025-01-30T05:03:27.410745284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:27.432942 systemd-resolved[1473]: Flushed all caches. Jan 30 05:03:27.466920 containerd[1585]: time="2025-01-30T05:03:27.466867110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vz995,Uid:eaf40084-ae75-4229-851b-dca331cd774c,Namespace:kube-system,Attempt:1,} returns sandbox id \"d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4\"" Jan 30 05:03:27.469226 kubelet[2742]: E0130 05:03:27.468644 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:27.476999 containerd[1585]: time="2025-01-30T05:03:27.476484330Z" level=info msg="CreateContainer within sandbox \"d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 05:03:27.492911 systemd[1]: run-containerd-runc-k8s.io-1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700-runc.ju0bVE.mount: Deactivated successfully. Jan 30 05:03:27.502446 containerd[1585]: time="2025-01-30T05:03:27.502198809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:03:27.502446 containerd[1585]: time="2025-01-30T05:03:27.502326002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:03:27.502446 containerd[1585]: time="2025-01-30T05:03:27.502351082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:27.505985 containerd[1585]: time="2025-01-30T05:03:27.505617252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:03:27.531731 containerd[1585]: time="2025-01-30T05:03:27.531598246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bhbgz,Uid:a048fe9b-2075-4d81-9452-b1dc14c3972a,Namespace:calico-system,Attempt:1,} returns sandbox id \"1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700\"" Jan 30 05:03:27.536228 containerd[1585]: time="2025-01-30T05:03:27.536034229Z" level=info msg="CreateContainer within sandbox \"d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"58b9cd1505123b2ce9485fb501dae4c427e49bb753fb47a4a003b5f171bcf8de\"" Jan 30 05:03:27.536927 containerd[1585]: time="2025-01-30T05:03:27.536725432Z" level=info msg="StartContainer for \"58b9cd1505123b2ce9485fb501dae4c427e49bb753fb47a4a003b5f171bcf8de\"" Jan 30 05:03:27.681631 containerd[1585]: time="2025-01-30T05:03:27.678770313Z" level=info msg="StartContainer for \"58b9cd1505123b2ce9485fb501dae4c427e49bb753fb47a4a003b5f171bcf8de\" returns successfully" Jan 30 05:03:27.740336 containerd[1585]: time="2025-01-30T05:03:27.740288677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbbc46978-k5pj5,Uid:080595a4-3a66-4a46-973a-099bfefc2f67,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a\"" Jan 30 05:03:27.749598 containerd[1585]: time="2025-01-30T05:03:27.748872735Z" level=info msg="CreateContainer within sandbox \"fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 05:03:27.802258 containerd[1585]: time="2025-01-30T05:03:27.802166643Z" level=info msg="CreateContainer within sandbox \"fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2eb37e85472469c0fe4037ae888ef3cf4fae38c271b807c016f82da3e8dc36f9\"" Jan 30 05:03:27.805290 containerd[1585]: time="2025-01-30T05:03:27.805248961Z" level=info msg="StartContainer for \"2eb37e85472469c0fe4037ae888ef3cf4fae38c271b807c016f82da3e8dc36f9\"" Jan 30 05:03:28.045961 systemd[1]: Started sshd@10-137.184.120.173:22-147.75.109.163:54564.service - OpenSSH per-connection server daemon (147.75.109.163:54564). Jan 30 05:03:28.049061 containerd[1585]: time="2025-01-30T05:03:28.049012738Z" level=info msg="StartContainer for \"2eb37e85472469c0fe4037ae888ef3cf4fae38c271b807c016f82da3e8dc36f9\" returns successfully" Jan 30 05:03:28.217901 sshd[4936]: Accepted publickey for core from 147.75.109.163 port 54564 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:28.224209 sshd[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:28.236204 systemd-logind[1561]: New session 10 of user core. Jan 30 05:03:28.241203 systemd[1]: Started session-10.scope - Session 10 of User core. 
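The sshd entries above identify the accepted public key only by its SHA256 fingerprint (RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c). For reference, the same fingerprint format can be computed from an authorized_keys entry with the x/crypto/ssh package; the key file path below is a placeholder.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder path; any authorized_keys-formatted public key works.
	raw, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		panic(err)
	}
	pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
	if err != nil {
		panic(err)
	}
	// Prints e.g. "SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c",
	// the same form sshd logs on "Accepted publickey".
	fmt.Println(ssh.FingerprintSHA256(pub))
}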
Jan 30 05:03:28.389940 kubelet[2742]: E0130 05:03:28.387607 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:28.392870 systemd-networkd[1224]: cali4b7f633d105: Gained IPv6LL Jan 30 05:03:28.417707 kubelet[2742]: I0130 05:03:28.415634 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bbbc46978-k5pj5" podStartSLOduration=30.415605238 podStartE2EDuration="30.415605238s" podCreationTimestamp="2025-01-30 05:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:03:28.414945137 +0000 UTC m=+53.723820981" watchObservedRunningTime="2025-01-30 05:03:28.415605238 +0000 UTC m=+53.724481083" Jan 30 05:03:28.652755 systemd-networkd[1224]: cali09750c1dbed: Gained IPv6LL Jan 30 05:03:28.907676 systemd-networkd[1224]: cali717d0fd741e: Gained IPv6LL Jan 30 05:03:29.211337 sshd[4936]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:29.229131 systemd[1]: Started sshd@11-137.184.120.173:22-147.75.109.163:54572.service - OpenSSH per-connection server daemon (147.75.109.163:54572). Jan 30 05:03:29.232456 systemd[1]: sshd@10-137.184.120.173:22-147.75.109.163:54564.service: Deactivated successfully. Jan 30 05:03:29.249394 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 05:03:29.251225 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Jan 30 05:03:29.257101 systemd-logind[1561]: Removed session 10. Jan 30 05:03:29.364155 sshd[4956]: Accepted publickey for core from 147.75.109.163 port 54572 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:29.368777 sshd[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:29.393327 kubelet[2742]: I0130 05:03:29.392830 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:03:29.400348 systemd-logind[1561]: New session 11 of user core. Jan 30 05:03:29.407384 kubelet[2742]: E0130 05:03:29.402411 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:29.410141 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 05:03:29.496309 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:03:29.485776 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:03:29.485786 systemd-resolved[1473]: Flushed all caches. 
Jan 30 05:03:29.613130 containerd[1585]: time="2025-01-30T05:03:29.612759700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:29.620411 containerd[1585]: time="2025-01-30T05:03:29.617613367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 05:03:29.626822 containerd[1585]: time="2025-01-30T05:03:29.626676949Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:29.635093 containerd[1585]: time="2025-01-30T05:03:29.631414962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:29.635093 containerd[1585]: time="2025-01-30T05:03:29.634867495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 4.314084096s" Jan 30 05:03:29.635093 containerd[1585]: time="2025-01-30T05:03:29.634922389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 05:03:29.639441 containerd[1585]: time="2025-01-30T05:03:29.638374361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 05:03:29.686344 containerd[1585]: time="2025-01-30T05:03:29.684903897Z" level=info msg="CreateContainer within sandbox \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 05:03:29.745461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618334760.mount: Deactivated successfully. Jan 30 05:03:29.749419 containerd[1585]: time="2025-01-30T05:03:29.747173701Z" level=info msg="CreateContainer within sandbox \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a\"" Jan 30 05:03:29.751142 containerd[1585]: time="2025-01-30T05:03:29.750595846Z" level=info msg="StartContainer for \"12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a\"" Jan 30 05:03:30.099150 sshd[4956]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:30.124858 systemd[1]: Started sshd@12-137.184.120.173:22-147.75.109.163:54584.service - OpenSSH per-connection server daemon (147.75.109.163:54584). Jan 30 05:03:30.134240 systemd[1]: sshd@11-137.184.120.173:22-147.75.109.163:54572.service: Deactivated successfully. Jan 30 05:03:30.161311 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 05:03:30.170699 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Jan 30 05:03:30.184018 systemd-logind[1561]: Removed session 11. 
Jan 30 05:03:30.316652 containerd[1585]: time="2025-01-30T05:03:30.316547690Z" level=info msg="StartContainer for \"12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a\" returns successfully" Jan 30 05:03:30.343215 sshd[5000]: Accepted publickey for core from 147.75.109.163 port 54584 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:30.351805 sshd[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:30.381154 systemd-logind[1561]: New session 12 of user core. Jan 30 05:03:30.393163 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 05:03:30.420145 kubelet[2742]: E0130 05:03:30.417413 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:30.422271 kubelet[2742]: I0130 05:03:30.421263 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:03:30.449406 kubelet[2742]: I0130 05:03:30.447761 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vz995" podStartSLOduration=40.447730451 podStartE2EDuration="40.447730451s" podCreationTimestamp="2025-01-30 05:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:03:28.452492869 +0000 UTC m=+53.761368719" watchObservedRunningTime="2025-01-30 05:03:30.447730451 +0000 UTC m=+55.756606299" Jan 30 05:03:30.742079 sshd[5000]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:30.751057 systemd[1]: sshd@12-137.184.120.173:22-147.75.109.163:54584.service: Deactivated successfully. Jan 30 05:03:30.760299 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Jan 30 05:03:30.762459 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 05:03:30.784462 systemd-logind[1561]: Removed session 12. 
Jan 30 05:03:30.907608 kubelet[2742]: I0130 05:03:30.904850 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-764f56cffb-268h9" podStartSLOduration=27.963931895 podStartE2EDuration="33.904821549s" podCreationTimestamp="2025-01-30 05:02:57 +0000 UTC" firstStartedPulling="2025-01-30 05:03:23.696216007 +0000 UTC m=+49.005091833" lastFinishedPulling="2025-01-30 05:03:29.63710565 +0000 UTC m=+54.945981487" observedRunningTime="2025-01-30 05:03:30.450470187 +0000 UTC m=+55.759346032" watchObservedRunningTime="2025-01-30 05:03:30.904821549 +0000 UTC m=+56.213697395" Jan 30 05:03:31.335343 containerd[1585]: time="2025-01-30T05:03:31.335273475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:31.340686 containerd[1585]: time="2025-01-30T05:03:31.339756081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 05:03:31.345760 containerd[1585]: time="2025-01-30T05:03:31.345696698Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:31.360351 containerd[1585]: time="2025-01-30T05:03:31.359880635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:31.362238 containerd[1585]: time="2025-01-30T05:03:31.361531586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.723089121s" Jan 30 05:03:31.362238 containerd[1585]: time="2025-01-30T05:03:31.361605123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 05:03:31.372105 containerd[1585]: time="2025-01-30T05:03:31.370902461Z" level=info msg="CreateContainer within sandbox \"1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 05:03:31.426115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679081312.mount: Deactivated successfully. 
Jan 30 05:03:31.451789 containerd[1585]: time="2025-01-30T05:03:31.451627942Z" level=info msg="CreateContainer within sandbox \"1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"226e37c3892a3e7deee90bf40511185fa5b66410e7639f0947f275d3d7c50ec9\"" Jan 30 05:03:31.468548 containerd[1585]: time="2025-01-30T05:03:31.463806255Z" level=info msg="StartContainer for \"226e37c3892a3e7deee90bf40511185fa5b66410e7639f0947f275d3d7c50ec9\"" Jan 30 05:03:31.646671 containerd[1585]: time="2025-01-30T05:03:31.646605068Z" level=info msg="StartContainer for \"226e37c3892a3e7deee90bf40511185fa5b66410e7639f0947f275d3d7c50ec9\" returns successfully" Jan 30 05:03:31.651657 containerd[1585]: time="2025-01-30T05:03:31.651616127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 05:03:33.119344 containerd[1585]: time="2025-01-30T05:03:33.119245247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:33.121599 containerd[1585]: time="2025-01-30T05:03:33.121477396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 05:03:33.124441 containerd[1585]: time="2025-01-30T05:03:33.124322060Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:33.131168 containerd[1585]: time="2025-01-30T05:03:33.130979210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.479320999s" Jan 30 05:03:33.131168 containerd[1585]: time="2025-01-30T05:03:33.131042608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 05:03:33.135860 containerd[1585]: time="2025-01-30T05:03:33.135517116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 05:03:33.137316 containerd[1585]: time="2025-01-30T05:03:33.136502817Z" level=info msg="CreateContainer within sandbox \"1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 05:03:33.177014 containerd[1585]: time="2025-01-30T05:03:33.176943071Z" level=info msg="CreateContainer within sandbox \"1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a68dbd803f1dc4cd50150d84d342ac5542066106124e2b8e13e862679c74c362\"" Jan 30 05:03:33.178158 containerd[1585]: time="2025-01-30T05:03:33.178066503Z" level=info msg="StartContainer for \"a68dbd803f1dc4cd50150d84d342ac5542066106124e2b8e13e862679c74c362\"" Jan 30 05:03:33.277723 containerd[1585]: time="2025-01-30T05:03:33.277680305Z" level=info 
msg="StartContainer for \"a68dbd803f1dc4cd50150d84d342ac5542066106124e2b8e13e862679c74c362\" returns successfully" Jan 30 05:03:33.458418 kubelet[2742]: I0130 05:03:33.458140 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bhbgz" podStartSLOduration=30.859391338000002 podStartE2EDuration="36.458116145s" podCreationTimestamp="2025-01-30 05:02:57 +0000 UTC" firstStartedPulling="2025-01-30 05:03:27.534049365 +0000 UTC m=+52.842925190" lastFinishedPulling="2025-01-30 05:03:33.132774164 +0000 UTC m=+58.441649997" observedRunningTime="2025-01-30 05:03:33.455898456 +0000 UTC m=+58.764774297" watchObservedRunningTime="2025-01-30 05:03:33.458116145 +0000 UTC m=+58.766991990" Jan 30 05:03:34.178688 kubelet[2742]: I0130 05:03:34.178613 2742 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 05:03:34.184856 kubelet[2742]: I0130 05:03:34.184790 2742 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 05:03:34.917593 containerd[1585]: time="2025-01-30T05:03:34.917144596Z" level=info msg="StopPodSandbox for \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\"" Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.060 [WARNING][5168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0", GenerateName:"calico-apiserver-7bbbc46978-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9db393b-7bc7-4843-b161-dce7fa134a05", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbbc46978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed", Pod:"calico-apiserver-7bbbc46978-rxbqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid027fdc4c6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.063 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.063 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" iface="eth0" netns="" Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.063 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.063 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.097 [INFO][5174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" HandleID="k8s-pod-network.633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.097 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.097 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.105 [WARNING][5174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" HandleID="k8s-pod-network.633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.105 [INFO][5174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" HandleID="k8s-pod-network.633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.107 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:35.112473 containerd[1585]: 2025-01-30 05:03:35.110 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:35.113622 containerd[1585]: time="2025-01-30T05:03:35.112520748Z" level=info msg="TearDown network for sandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\" successfully" Jan 30 05:03:35.113622 containerd[1585]: time="2025-01-30T05:03:35.112546194Z" level=info msg="StopPodSandbox for \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\" returns successfully" Jan 30 05:03:35.113622 containerd[1585]: time="2025-01-30T05:03:35.113291932Z" level=info msg="RemovePodSandbox for \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\"" Jan 30 05:03:35.113622 containerd[1585]: time="2025-01-30T05:03:35.113326137Z" level=info msg="Forcibly stopping sandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\"" Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.164 [WARNING][5192] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0", GenerateName:"calico-apiserver-7bbbc46978-", Namespace:"calico-apiserver", SelfLink:"", UID:"c9db393b-7bc7-4843-b161-dce7fa134a05", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbbc46978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"45debaa79440a61c0453ef2bb836306c13d93fd2a5404bf1bbc7eb7d758918ed", Pod:"calico-apiserver-7bbbc46978-rxbqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid027fdc4c6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.165 [INFO][5192] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.165 [INFO][5192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" iface="eth0" netns="" Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.165 [INFO][5192] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.165 [INFO][5192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.200 [INFO][5198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" HandleID="k8s-pod-network.633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.200 [INFO][5198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.200 [INFO][5198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.208 [WARNING][5198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" HandleID="k8s-pod-network.633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.208 [INFO][5198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" HandleID="k8s-pod-network.633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--rxbqx-eth0" Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.210 [INFO][5198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:35.215972 containerd[1585]: 2025-01-30 05:03:35.213 [INFO][5192] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185" Jan 30 05:03:35.217773 containerd[1585]: time="2025-01-30T05:03:35.216644675Z" level=info msg="TearDown network for sandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\" successfully" Jan 30 05:03:35.233415 containerd[1585]: time="2025-01-30T05:03:35.233296141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:03:35.233415 containerd[1585]: time="2025-01-30T05:03:35.233387021Z" level=info msg="RemovePodSandbox \"633cb6f86cc6c1411be3b737747758fb039f6fe647509d30521b923f395ae185\" returns successfully" Jan 30 05:03:35.234203 containerd[1585]: time="2025-01-30T05:03:35.234163419Z" level=info msg="StopPodSandbox for \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\"" Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.292 [WARNING][5216] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eaf40084-ae75-4229-851b-dca331cd774c", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4", Pod:"coredns-7db6d8ff4d-vz995", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09750c1dbed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.292 [INFO][5216] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.292 [INFO][5216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" iface="eth0" netns="" Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.292 [INFO][5216] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.292 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.331 [INFO][5223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" HandleID="k8s-pod-network.3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.331 [INFO][5223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.331 [INFO][5223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.339 [WARNING][5223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" HandleID="k8s-pod-network.3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.339 [INFO][5223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" HandleID="k8s-pod-network.3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.341 [INFO][5223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:35.345744 containerd[1585]: 2025-01-30 05:03:35.343 [INFO][5216] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:35.345744 containerd[1585]: time="2025-01-30T05:03:35.345732671Z" level=info msg="TearDown network for sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\" successfully" Jan 30 05:03:35.347289 containerd[1585]: time="2025-01-30T05:03:35.345771806Z" level=info msg="StopPodSandbox for \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\" returns successfully" Jan 30 05:03:35.347289 containerd[1585]: time="2025-01-30T05:03:35.346672407Z" level=info msg="RemovePodSandbox for \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\"" Jan 30 05:03:35.347289 containerd[1585]: time="2025-01-30T05:03:35.346717159Z" level=info msg="Forcibly stopping sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\"" Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.406 [WARNING][5242] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"eaf40084-ae75-4229-851b-dca331cd774c", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"d6077e2645564acbe776f66ab3904da14b82fa906bf7610e4bc688696949a4d4", Pod:"coredns-7db6d8ff4d-vz995", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09750c1dbed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.407 [INFO][5242] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.407 [INFO][5242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" iface="eth0" netns="" Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.407 [INFO][5242] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.407 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.438 [INFO][5248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" HandleID="k8s-pod-network.3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.439 [INFO][5248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.439 [INFO][5248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.446 [WARNING][5248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" HandleID="k8s-pod-network.3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.446 [INFO][5248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" HandleID="k8s-pod-network.3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--vz995-eth0" Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.449 [INFO][5248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:35.453641 containerd[1585]: 2025-01-30 05:03:35.451 [INFO][5242] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af" Jan 30 05:03:35.453641 containerd[1585]: time="2025-01-30T05:03:35.453106412Z" level=info msg="TearDown network for sandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\" successfully" Jan 30 05:03:35.459730 containerd[1585]: time="2025-01-30T05:03:35.459662960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:03:35.459730 containerd[1585]: time="2025-01-30T05:03:35.459758536Z" level=info msg="RemovePodSandbox \"3383bee010b4c6ae3f7e9561819ecdc8a00a6487bde0f111d3cd4098620967af\" returns successfully" Jan 30 05:03:35.460547 containerd[1585]: time="2025-01-30T05:03:35.460510042Z" level=info msg="StopPodSandbox for \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\"" Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.525 [WARNING][5266] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0", GenerateName:"calico-apiserver-7bbbc46978-", Namespace:"calico-apiserver", SelfLink:"", UID:"080595a4-3a66-4a46-973a-099bfefc2f67", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbbc46978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a", Pod:"calico-apiserver-7bbbc46978-k5pj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali717d0fd741e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.525 [INFO][5266] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.525 [INFO][5266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" iface="eth0" netns="" Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.525 [INFO][5266] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.525 [INFO][5266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.556 [INFO][5272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" HandleID="k8s-pod-network.26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.556 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.556 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.564 [WARNING][5272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" HandleID="k8s-pod-network.26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.564 [INFO][5272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" HandleID="k8s-pod-network.26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.566 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:35.571789 containerd[1585]: 2025-01-30 05:03:35.569 [INFO][5266] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:35.571789 containerd[1585]: time="2025-01-30T05:03:35.571535504Z" level=info msg="TearDown network for sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\" successfully" Jan 30 05:03:35.571789 containerd[1585]: time="2025-01-30T05:03:35.571595057Z" level=info msg="StopPodSandbox for \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\" returns successfully" Jan 30 05:03:35.572916 containerd[1585]: time="2025-01-30T05:03:35.572241947Z" level=info msg="RemovePodSandbox for \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\"" Jan 30 05:03:35.572916 containerd[1585]: time="2025-01-30T05:03:35.572278207Z" level=info msg="Forcibly stopping sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\"" Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.649 [WARNING][5291] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0", GenerateName:"calico-apiserver-7bbbc46978-", Namespace:"calico-apiserver", SelfLink:"", UID:"080595a4-3a66-4a46-973a-099bfefc2f67", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbbc46978", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"fab664ff5e51da0efad38d652a43169262e7bc2a9bc78a005be45de2ac4c3b4a", Pod:"calico-apiserver-7bbbc46978-k5pj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali717d0fd741e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.651 [INFO][5291] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.651 [INFO][5291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" iface="eth0" netns="" Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.651 [INFO][5291] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.651 [INFO][5291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.710 [INFO][5298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" HandleID="k8s-pod-network.26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.710 [INFO][5298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.710 [INFO][5298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.721 [WARNING][5298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" HandleID="k8s-pod-network.26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.721 [INFO][5298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" HandleID="k8s-pod-network.26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Workload="ci--4081.3.0--d--47de560844-k8s-calico--apiserver--7bbbc46978--k5pj5-eth0" Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.724 [INFO][5298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:35.728810 containerd[1585]: 2025-01-30 05:03:35.726 [INFO][5291] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a" Jan 30 05:03:35.729707 containerd[1585]: time="2025-01-30T05:03:35.728870489Z" level=info msg="TearDown network for sandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\" successfully" Jan 30 05:03:35.736377 containerd[1585]: time="2025-01-30T05:03:35.736314045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:03:35.736641 containerd[1585]: time="2025-01-30T05:03:35.736431002Z" level=info msg="RemovePodSandbox \"26d18e19359902089814bece3c80686c502da9a1953325d2043511cfb392f65a\" returns successfully" Jan 30 05:03:35.737701 containerd[1585]: time="2025-01-30T05:03:35.737228452Z" level=info msg="StopPodSandbox for \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\"" Jan 30 05:03:35.755029 systemd[1]: Started sshd@13-137.184.120.173:22-147.75.109.163:54590.service - OpenSSH per-connection server daemon (147.75.109.163:54590). Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.809 [WARNING][5316] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a048fe9b-2075-4d81-9452-b1dc14c3972a", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700", Pod:"csi-node-driver-bhbgz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b7f633d105", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.809 [INFO][5316] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.809 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" iface="eth0" netns="" Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.809 [INFO][5316] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.809 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.853 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" HandleID="k8s-pod-network.5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.853 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.853 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.860 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" HandleID="k8s-pod-network.5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.860 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" HandleID="k8s-pod-network.5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.863 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:35.868143 containerd[1585]: 2025-01-30 05:03:35.865 [INFO][5316] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:35.869584 containerd[1585]: time="2025-01-30T05:03:35.868812554Z" level=info msg="TearDown network for sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\" successfully" Jan 30 05:03:35.869584 containerd[1585]: time="2025-01-30T05:03:35.868858983Z" level=info msg="StopPodSandbox for \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\" returns successfully" Jan 30 05:03:35.869584 containerd[1585]: time="2025-01-30T05:03:35.869477384Z" level=info msg="RemovePodSandbox for \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\"" Jan 30 05:03:35.869584 containerd[1585]: time="2025-01-30T05:03:35.869516061Z" level=info msg="Forcibly stopping sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\"" Jan 30 05:03:35.881831 sshd[5317]: Accepted publickey for core from 147.75.109.163 port 54590 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:35.886073 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:35.896576 systemd-logind[1561]: New session 13 of user core. Jan 30 05:03:35.902907 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:35.942 [WARNING][5342] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a048fe9b-2075-4d81-9452-b1dc14c3972a", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"1630b7cf771003866b67b2712605afede1c1408410a6882498613f6f788c7700", Pod:"csi-node-driver-bhbgz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4b7f633d105", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:35.942 [INFO][5342] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:35.942 [INFO][5342] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" iface="eth0" netns="" Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:35.942 [INFO][5342] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:35.942 [INFO][5342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:35.989 [INFO][5350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" HandleID="k8s-pod-network.5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:35.989 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:35.989 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:35.998 [WARNING][5350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" HandleID="k8s-pod-network.5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:35.998 [INFO][5350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" HandleID="k8s-pod-network.5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Workload="ci--4081.3.0--d--47de560844-k8s-csi--node--driver--bhbgz-eth0" Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:36.000 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:36.007910 containerd[1585]: 2025-01-30 05:03:36.002 [INFO][5342] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf" Jan 30 05:03:36.007910 containerd[1585]: time="2025-01-30T05:03:36.005789259Z" level=info msg="TearDown network for sandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\" successfully" Jan 30 05:03:36.016130 containerd[1585]: time="2025-01-30T05:03:36.016055646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:03:36.016450 containerd[1585]: time="2025-01-30T05:03:36.016416666Z" level=info msg="RemovePodSandbox \"5f7f5cf9f8a44f61939edb53dc1c12ad504325c483d34f3777e9c23b64f104cf\" returns successfully" Jan 30 05:03:36.017275 containerd[1585]: time="2025-01-30T05:03:36.017204797Z" level=info msg="StopPodSandbox for \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\"" Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.087 [WARNING][5371] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"38e6d30e-c18e-4b00-bba1-7a7a43ab759a", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55", Pod:"coredns-7db6d8ff4d-lc884", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali950f8c7e5d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.087 [INFO][5371] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.087 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" iface="eth0" netns="" Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.087 [INFO][5371] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.087 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.142 [INFO][5380] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" HandleID="k8s-pod-network.cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.142 [INFO][5380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.142 [INFO][5380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.152 [WARNING][5380] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" HandleID="k8s-pod-network.cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.152 [INFO][5380] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" HandleID="k8s-pod-network.cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.155 [INFO][5380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:36.161024 containerd[1585]: 2025-01-30 05:03:36.157 [INFO][5371] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:36.162623 containerd[1585]: time="2025-01-30T05:03:36.160842024Z" level=info msg="TearDown network for sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\" successfully" Jan 30 05:03:36.163039 containerd[1585]: time="2025-01-30T05:03:36.162879363Z" level=info msg="StopPodSandbox for \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\" returns successfully" Jan 30 05:03:36.166591 containerd[1585]: time="2025-01-30T05:03:36.164487941Z" level=info msg="RemovePodSandbox for \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\"" Jan 30 05:03:36.166591 containerd[1585]: time="2025-01-30T05:03:36.164538302Z" level=info msg="Forcibly stopping sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\"" Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.283 [WARNING][5399] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"38e6d30e-c18e-4b00-bba1-7a7a43ab759a", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"05cdcebd7229933dec0274ba291564b82f970dd2c34125edb36f99c1f95e8a55", Pod:"coredns-7db6d8ff4d-lc884", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali950f8c7e5d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.284 [INFO][5399] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.284 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" iface="eth0" netns="" Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.284 [INFO][5399] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.284 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.331 [INFO][5405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" HandleID="k8s-pod-network.cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.331 [INFO][5405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.331 [INFO][5405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.345 [WARNING][5405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" HandleID="k8s-pod-network.cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.345 [INFO][5405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" HandleID="k8s-pod-network.cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Workload="ci--4081.3.0--d--47de560844-k8s-coredns--7db6d8ff4d--lc884-eth0" Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.348 [INFO][5405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:36.357202 containerd[1585]: 2025-01-30 05:03:36.352 [INFO][5399] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d" Jan 30 05:03:36.357202 containerd[1585]: time="2025-01-30T05:03:36.356740468Z" level=info msg="TearDown network for sandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\" successfully" Jan 30 05:03:36.374003 containerd[1585]: time="2025-01-30T05:03:36.373708582Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:03:36.374003 containerd[1585]: time="2025-01-30T05:03:36.373809498Z" level=info msg="RemovePodSandbox \"cbabf6e6ff57d56667f3685605ef30888ec8b0883a067a2d5c8a698733b4338d\" returns successfully" Jan 30 05:03:36.375689 containerd[1585]: time="2025-01-30T05:03:36.375215352Z" level=info msg="StopPodSandbox for \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\"" Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.444 [WARNING][5423] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0", GenerateName:"calico-kube-controllers-764f56cffb-", Namespace:"calico-system", SelfLink:"", UID:"160f5477-6b56-47a9-a6b1-0ce2a996310c", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764f56cffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e", Pod:"calico-kube-controllers-764f56cffb-268h9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9968ff99b0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.444 [INFO][5423] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.444 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" iface="eth0" netns="" Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.444 [INFO][5423] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.444 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.519 [INFO][5429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" HandleID="k8s-pod-network.c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.519 [INFO][5429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.519 [INFO][5429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.527 [WARNING][5429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" HandleID="k8s-pod-network.c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.528 [INFO][5429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" HandleID="k8s-pod-network.c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.530 [INFO][5429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:36.537211 containerd[1585]: 2025-01-30 05:03:36.534 [INFO][5423] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:36.541106 containerd[1585]: time="2025-01-30T05:03:36.537508828Z" level=info msg="TearDown network for sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\" successfully" Jan 30 05:03:36.541106 containerd[1585]: time="2025-01-30T05:03:36.538130656Z" level=info msg="StopPodSandbox for \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\" returns successfully" Jan 30 05:03:36.541909 containerd[1585]: time="2025-01-30T05:03:36.541548349Z" level=info msg="RemovePodSandbox for \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\"" Jan 30 05:03:36.541909 containerd[1585]: time="2025-01-30T05:03:36.541688031Z" level=info msg="Forcibly stopping sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\"" Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.620 [WARNING][5449] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0", GenerateName:"calico-kube-controllers-764f56cffb-", Namespace:"calico-system", SelfLink:"", UID:"160f5477-6b56-47a9-a6b1-0ce2a996310c", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 5, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764f56cffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-d-47de560844", ContainerID:"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e", Pod:"calico-kube-controllers-764f56cffb-268h9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9968ff99b0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.620 [INFO][5449] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.620 [INFO][5449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" iface="eth0" netns="" Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.620 [INFO][5449] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.622 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.668 [INFO][5455] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" HandleID="k8s-pod-network.c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.669 [INFO][5455] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.669 [INFO][5455] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.682 [WARNING][5455] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" HandleID="k8s-pod-network.c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.682 [INFO][5455] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" HandleID="k8s-pod-network.c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.685 [INFO][5455] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:03:36.692992 containerd[1585]: 2025-01-30 05:03:36.689 [INFO][5449] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6" Jan 30 05:03:36.692992 containerd[1585]: time="2025-01-30T05:03:36.692785850Z" level=info msg="TearDown network for sandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\" successfully" Jan 30 05:03:36.703945 containerd[1585]: time="2025-01-30T05:03:36.703583876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:03:36.703945 containerd[1585]: time="2025-01-30T05:03:36.703700379Z" level=info msg="RemovePodSandbox \"c6e4810b597fbe3e2a77788d82ce5f40e9651b9c05ac0920ccff2bf5935ec6d6\" returns successfully" Jan 30 05:03:36.756231 sshd[5317]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:36.766207 systemd[1]: sshd@13-137.184.120.173:22-147.75.109.163:54590.service: Deactivated successfully. Jan 30 05:03:36.773820 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 05:03:36.775414 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit. Jan 30 05:03:36.776973 systemd-logind[1561]: Removed session 13. Jan 30 05:03:37.420901 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:03:37.420313 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:03:37.420383 systemd-resolved[1473]: Flushed all caches. Jan 30 05:03:39.467548 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:03:39.464953 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:03:39.464962 systemd-resolved[1473]: Flushed all caches. Jan 30 05:03:41.768088 systemd[1]: Started sshd@14-137.184.120.173:22-147.75.109.163:34436.service - OpenSSH per-connection server daemon (147.75.109.163:34436). Jan 30 05:03:41.832955 sshd[5512]: Accepted publickey for core from 147.75.109.163 port 34436 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:41.835659 sshd[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:41.843665 systemd-logind[1561]: New session 14 of user core. Jan 30 05:03:41.849102 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 05:03:42.026858 sshd[5512]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:42.032310 systemd[1]: sshd@14-137.184.120.173:22-147.75.109.163:34436.service: Deactivated successfully. 
Jan 30 05:03:42.039753 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 05:03:42.041640 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit. Jan 30 05:03:42.043731 systemd-logind[1561]: Removed session 14. Jan 30 05:03:44.863171 kubelet[2742]: E0130 05:03:44.862985 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:46.736189 kubelet[2742]: I0130 05:03:46.735845 2742 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 05:03:47.037270 systemd[1]: Started sshd@15-137.184.120.173:22-147.75.109.163:34442.service - OpenSSH per-connection server daemon (147.75.109.163:34442). Jan 30 05:03:47.099027 sshd[5549]: Accepted publickey for core from 147.75.109.163 port 34442 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:47.101708 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:47.109380 systemd-logind[1561]: New session 15 of user core. Jan 30 05:03:47.116680 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 05:03:47.280189 sshd[5549]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:47.284733 systemd[1]: sshd@15-137.184.120.173:22-147.75.109.163:34442.service: Deactivated successfully. Jan 30 05:03:47.291974 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 05:03:47.295734 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit. Jan 30 05:03:47.297586 systemd-logind[1561]: Removed session 15. Jan 30 05:03:50.849804 kubelet[2742]: E0130 05:03:50.849354 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:52.294048 systemd[1]: Started sshd@16-137.184.120.173:22-147.75.109.163:58496.service - OpenSSH per-connection server daemon (147.75.109.163:58496). Jan 30 05:03:52.383422 sshd[5565]: Accepted publickey for core from 147.75.109.163 port 58496 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:52.387264 sshd[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:52.398396 systemd-logind[1561]: New session 16 of user core. Jan 30 05:03:52.411198 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 05:03:52.659554 sshd[5565]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:52.668344 systemd[1]: sshd@16-137.184.120.173:22-147.75.109.163:58496.service: Deactivated successfully. Jan 30 05:03:52.674462 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 05:03:52.676502 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit. Jan 30 05:03:52.678259 systemd-logind[1561]: Removed session 16. Jan 30 05:03:54.849291 kubelet[2742]: E0130 05:03:54.848398 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:03:57.681248 systemd[1]: Started sshd@17-137.184.120.173:22-147.75.109.163:37662.service - OpenSSH per-connection server daemon (147.75.109.163:37662). 
Jan 30 05:03:57.736211 sshd[5581]: Accepted publickey for core from 147.75.109.163 port 37662 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:57.738946 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:57.752021 systemd-logind[1561]: New session 17 of user core. Jan 30 05:03:57.757603 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 05:03:58.399733 sshd[5581]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:58.437671 systemd[1]: Started sshd@18-137.184.120.173:22-147.75.109.163:37668.service - OpenSSH per-connection server daemon (147.75.109.163:37668). Jan 30 05:03:58.446680 containerd[1585]: time="2025-01-30T05:03:58.443878398Z" level=info msg="StopContainer for \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\" with timeout 300 (s)" Jan 30 05:03:58.444628 systemd[1]: sshd@17-137.184.120.173:22-147.75.109.163:37662.service: Deactivated successfully. Jan 30 05:03:58.460923 containerd[1585]: time="2025-01-30T05:03:58.460796285Z" level=info msg="Stop container \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\" with signal terminated" Jan 30 05:03:58.461552 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 05:03:58.468456 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit. Jan 30 05:03:58.473344 systemd-logind[1561]: Removed session 17. Jan 30 05:03:58.588870 containerd[1585]: time="2025-01-30T05:03:58.587476138Z" level=info msg="StopContainer for \"12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a\" with timeout 30 (s)" Jan 30 05:03:58.599552 containerd[1585]: time="2025-01-30T05:03:58.596721708Z" level=info msg="Stop container \"12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a\" with signal terminated" Jan 30 05:03:58.641804 sshd[5593]: Accepted publickey for core from 147.75.109.163 port 37668 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:03:58.677605 sshd[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:03:58.697425 systemd-logind[1561]: New session 18 of user core. Jan 30 05:03:58.703109 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 05:03:58.802072 containerd[1585]: time="2025-01-30T05:03:58.800950232Z" level=info msg="shim disconnected" id=12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a namespace=k8s.io Jan 30 05:03:58.817551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a-rootfs.mount: Deactivated successfully. 
Jan 30 05:03:58.834034 containerd[1585]: time="2025-01-30T05:03:58.833960994Z" level=warning msg="cleaning up after shim disconnected" id=12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a namespace=k8s.io Jan 30 05:03:58.837499 containerd[1585]: time="2025-01-30T05:03:58.837305492Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:03:58.955397 containerd[1585]: time="2025-01-30T05:03:58.954841485Z" level=info msg="StopContainer for \"12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a\" returns successfully" Jan 30 05:03:58.957226 containerd[1585]: time="2025-01-30T05:03:58.957057542Z" level=info msg="StopPodSandbox for \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\"" Jan 30 05:03:58.964423 containerd[1585]: time="2025-01-30T05:03:58.964345779Z" level=info msg="Container to stop \"12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 05:03:58.978466 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e-shm.mount: Deactivated successfully. Jan 30 05:03:58.990108 containerd[1585]: time="2025-01-30T05:03:58.989938376Z" level=info msg="StopContainer for \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\" with timeout 5 (s)" Jan 30 05:03:58.990867 containerd[1585]: time="2025-01-30T05:03:58.990724310Z" level=info msg="Stop container \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\" with signal terminated" Jan 30 05:03:59.102822 containerd[1585]: time="2025-01-30T05:03:59.102620617Z" level=info msg="shim disconnected" id=3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e namespace=k8s.io Jan 30 05:03:59.102822 containerd[1585]: time="2025-01-30T05:03:59.102688335Z" level=warning msg="cleaning up after shim disconnected" id=3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e namespace=k8s.io Jan 30 05:03:59.102822 containerd[1585]: time="2025-01-30T05:03:59.102702810Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:03:59.176312 containerd[1585]: time="2025-01-30T05:03:59.175902610Z" level=info msg="shim disconnected" id=a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6 namespace=k8s.io Jan 30 05:03:59.176312 containerd[1585]: time="2025-01-30T05:03:59.176274485Z" level=warning msg="cleaning up after shim disconnected" id=a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6 namespace=k8s.io Jan 30 05:03:59.176576 containerd[1585]: time="2025-01-30T05:03:59.176302061Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:03:59.209067 containerd[1585]: time="2025-01-30T05:03:59.208053638Z" level=warning msg="cleanup warnings time=\"2025-01-30T05:03:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 05:03:59.243237 containerd[1585]: time="2025-01-30T05:03:59.243173420Z" level=info msg="StopContainer for \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\" returns successfully" Jan 30 05:03:59.246599 containerd[1585]: time="2025-01-30T05:03:59.246256018Z" level=info msg="StopPodSandbox for \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\"" Jan 30 05:03:59.247274 containerd[1585]: time="2025-01-30T05:03:59.246946213Z" level=info msg="Container to stop 
\"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 05:03:59.247274 containerd[1585]: time="2025-01-30T05:03:59.246985386Z" level=info msg="Container to stop \"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 05:03:59.247274 containerd[1585]: time="2025-01-30T05:03:59.247004914Z" level=info msg="Container to stop \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 05:03:59.357365 containerd[1585]: time="2025-01-30T05:03:59.356104954Z" level=info msg="shim disconnected" id=668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81 namespace=k8s.io Jan 30 05:03:59.357365 containerd[1585]: time="2025-01-30T05:03:59.356191981Z" level=warning msg="cleaning up after shim disconnected" id=668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81 namespace=k8s.io Jan 30 05:03:59.357365 containerd[1585]: time="2025-01-30T05:03:59.356205424Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:03:59.444446 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:03:59.436317 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:03:59.436330 systemd-resolved[1473]: Flushed all caches. Jan 30 05:03:59.466849 containerd[1585]: time="2025-01-30T05:03:59.463508104Z" level=warning msg="cleanup warnings time=\"2025-01-30T05:03:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 05:03:59.485092 containerd[1585]: time="2025-01-30T05:03:59.484647997Z" level=info msg="TearDown network for sandbox \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" successfully" Jan 30 05:03:59.490173 containerd[1585]: time="2025-01-30T05:03:59.488222617Z" level=info msg="StopPodSandbox for \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" returns successfully" Jan 30 05:03:59.607980 kubelet[2742]: I0130 05:03:59.607708 2742 scope.go:117] "RemoveContainer" containerID="a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6" Jan 30 05:03:59.614550 kubelet[2742]: I0130 05:03:59.610742 2742 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:03:59.624960 containerd[1585]: time="2025-01-30T05:03:59.624592644Z" level=info msg="RemoveContainer for \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\"" Jan 30 05:03:59.636355 systemd-networkd[1224]: cali9968ff99b0b: Link DOWN Jan 30 05:03:59.636366 systemd-networkd[1224]: cali9968ff99b0b: Lost carrier Jan 30 05:03:59.652207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e-rootfs.mount: Deactivated successfully. Jan 30 05:03:59.652551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6-rootfs.mount: Deactivated successfully. Jan 30 05:03:59.652930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81-rootfs.mount: Deactivated successfully. 
Jan 30 05:03:59.653174 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81-shm.mount: Deactivated successfully. Jan 30 05:03:59.681595 kubelet[2742]: I0130 05:03:59.673651 2742 topology_manager.go:215] "Topology Admit Handler" podUID="ec40f2b9-0fda-43a0-b8ac-068843d953d5" podNamespace="calico-system" podName="calico-node-88tpl" Jan 30 05:03:59.697515 containerd[1585]: time="2025-01-30T05:03:59.690142335Z" level=info msg="RemoveContainer for \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\" returns successfully" Jan 30 05:03:59.718260 kubelet[2742]: E0130 05:03:59.715878 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01f1c928-1706-42a5-bb56-01a783fa8509" containerName="flexvol-driver" Jan 30 05:03:59.718260 kubelet[2742]: E0130 05:03:59.715955 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01f1c928-1706-42a5-bb56-01a783fa8509" containerName="calico-node" Jan 30 05:03:59.718260 kubelet[2742]: E0130 05:03:59.715976 2742 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01f1c928-1706-42a5-bb56-01a783fa8509" containerName="install-cni" Jan 30 05:03:59.759113 kubelet[2742]: I0130 05:03:59.759057 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-var-lib-calico\") pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.759585 kubelet[2742]: I0130 05:03:59.759343 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/01f1c928-1706-42a5-bb56-01a783fa8509-node-certs\") pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.759585 kubelet[2742]: I0130 05:03:59.759384 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-net-dir\") pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.759585 kubelet[2742]: I0130 05:03:59.759419 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-lib-modules\") pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.759585 kubelet[2742]: I0130 05:03:59.759444 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-var-run-calico\") pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.760353 kubelet[2742]: I0130 05:03:59.760028 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-flexvol-driver-host\") pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.760353 kubelet[2742]: I0130 05:03:59.760084 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-log-dir\") 
pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.760353 kubelet[2742]: I0130 05:03:59.760122 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01f1c928-1706-42a5-bb56-01a783fa8509-tigera-ca-bundle\") pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.764138 kubelet[2742]: I0130 05:03:59.761698 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-policysync\") pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.764138 kubelet[2742]: I0130 05:03:59.761777 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n62zt\" (UniqueName: \"kubernetes.io/projected/01f1c928-1706-42a5-bb56-01a783fa8509-kube-api-access-n62zt\") pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.764138 kubelet[2742]: I0130 05:03:59.761811 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-xtables-lock\") pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.764138 kubelet[2742]: I0130 05:03:59.761838 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-bin-dir\") pod \"01f1c928-1706-42a5-bb56-01a783fa8509\" (UID: \"01f1c928-1706-42a5-bb56-01a783fa8509\") " Jan 30 05:03:59.809500 kubelet[2742]: I0130 05:03:59.809448 2742 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f1c928-1706-42a5-bb56-01a783fa8509" containerName="calico-node" Jan 30 05:03:59.812687 sshd[5593]: pam_unix(sshd:session): session closed for user core Jan 30 05:03:59.822616 kubelet[2742]: I0130 05:03:59.820217 2742 scope.go:117] "RemoveContainer" containerID="12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388" Jan 30 05:03:59.822798 containerd[1585]: time="2025-01-30T05:03:59.822623359Z" level=info msg="RemoveContainer for \"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388\"" Jan 30 05:03:59.824057 kubelet[2742]: I0130 05:03:59.823003 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:03:59.824057 kubelet[2742]: I0130 05:03:59.823123 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:03:59.838961 kubelet[2742]: I0130 05:03:59.833288 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f1c928-1706-42a5-bb56-01a783fa8509-node-certs" (OuterVolumeSpecName: "node-certs") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:03:59.838961 kubelet[2742]: I0130 05:03:59.833416 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:03:59.838961 kubelet[2742]: I0130 05:03:59.833461 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:03:59.838961 kubelet[2742]: I0130 05:03:59.833511 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:03:59.842589 containerd[1585]: time="2025-01-30T05:03:59.836333921Z" level=info msg="RemoveContainer for \"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388\" returns successfully" Jan 30 05:03:59.841180 systemd[1]: Started sshd@19-137.184.120.173:22-147.75.109.163:37684.service - OpenSSH per-connection server daemon (147.75.109.163:37684). Jan 30 05:03:59.866716 systemd[1]: var-lib-kubelet-pods-01f1c928\x2d1706\x2d42a5\x2dbb56\x2d01a783fa8509-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 30 05:03:59.870310 systemd[1]: sshd@18-137.184.120.173:22-147.75.109.163:37668.service: Deactivated successfully. Jan 30 05:03:59.880156 kubelet[2742]: I0130 05:03:59.878777 2742 scope.go:117] "RemoveContainer" containerID="2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b" Jan 30 05:03:59.880117 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 05:03:59.888295 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit. Jan 30 05:03:59.892462 systemd-logind[1561]: Removed session 18. 
Jan 30 05:03:59.925938 kubelet[2742]: I0130 05:03:59.923499 2742 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-var-lib-calico\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:03:59.925938 kubelet[2742]: I0130 05:03:59.924027 2742 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/01f1c928-1706-42a5-bb56-01a783fa8509-node-certs\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:03:59.925938 kubelet[2742]: I0130 05:03:59.924098 2742 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-net-dir\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:03:59.925938 kubelet[2742]: I0130 05:03:59.924121 2742 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-lib-modules\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:03:59.925938 kubelet[2742]: I0130 05:03:59.924141 2742 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-var-run-calico\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:03:59.925938 kubelet[2742]: I0130 05:03:59.924163 2742 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-flexvol-driver-host\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:03:59.925938 kubelet[2742]: I0130 05:03:59.924350 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:03:59.946103 kubelet[2742]: I0130 05:03:59.939924 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-policysync" (OuterVolumeSpecName: "policysync") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:03:59.952287 systemd[1]: var-lib-kubelet-pods-01f1c928\x2d1706\x2d42a5\x2dbb56\x2d01a783fa8509-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 30 05:03:59.963314 kubelet[2742]: I0130 05:03:59.958466 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:03:59.984089 kubelet[2742]: I0130 05:03:59.970962 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). 
InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 05:03:59.982017 systemd[1]: var-lib-kubelet-pods-01f1c928\x2d1706\x2d42a5\x2dbb56\x2d01a783fa8509-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn62zt.mount: Deactivated successfully. Jan 30 05:04:00.017076 kubelet[2742]: I0130 05:04:00.014681 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01f1c928-1706-42a5-bb56-01a783fa8509-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:04:00.017076 kubelet[2742]: I0130 05:04:00.015697 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f1c928-1706-42a5-bb56-01a783fa8509-kube-api-access-n62zt" (OuterVolumeSpecName: "kube-api-access-n62zt") pod "01f1c928-1706-42a5-bb56-01a783fa8509" (UID: "01f1c928-1706-42a5-bb56-01a783fa8509"). InnerVolumeSpecName "kube-api-access-n62zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:04:00.033313 kubelet[2742]: I0130 05:04:00.031835 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ec40f2b9-0fda-43a0-b8ac-068843d953d5-flexvol-driver-host\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.033313 kubelet[2742]: I0130 05:04:00.031925 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ec40f2b9-0fda-43a0-b8ac-068843d953d5-node-certs\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.033313 kubelet[2742]: I0130 05:04:00.031960 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ec40f2b9-0fda-43a0-b8ac-068843d953d5-var-run-calico\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.033313 kubelet[2742]: I0130 05:04:00.031991 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ec40f2b9-0fda-43a0-b8ac-068843d953d5-policysync\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.033313 kubelet[2742]: I0130 05:04:00.032015 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ec40f2b9-0fda-43a0-b8ac-068843d953d5-cni-log-dir\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.033860 kubelet[2742]: I0130 05:04:00.032048 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69nzn\" (UniqueName: \"kubernetes.io/projected/ec40f2b9-0fda-43a0-b8ac-068843d953d5-kube-api-access-69nzn\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.033860 
kubelet[2742]: I0130 05:04:00.032072 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec40f2b9-0fda-43a0-b8ac-068843d953d5-var-lib-calico\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.033860 kubelet[2742]: I0130 05:04:00.032110 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec40f2b9-0fda-43a0-b8ac-068843d953d5-lib-modules\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.037131 kubelet[2742]: I0130 05:04:00.034683 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec40f2b9-0fda-43a0-b8ac-068843d953d5-xtables-lock\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.037131 kubelet[2742]: I0130 05:04:00.034748 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec40f2b9-0fda-43a0-b8ac-068843d953d5-tigera-ca-bundle\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.037131 kubelet[2742]: I0130 05:04:00.034776 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ec40f2b9-0fda-43a0-b8ac-068843d953d5-cni-net-dir\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.037131 kubelet[2742]: I0130 05:04:00.034818 2742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ec40f2b9-0fda-43a0-b8ac-068843d953d5-cni-bin-dir\") pod \"calico-node-88tpl\" (UID: \"ec40f2b9-0fda-43a0-b8ac-068843d953d5\") " pod="calico-system/calico-node-88tpl" Jan 30 05:04:00.037131 kubelet[2742]: I0130 05:04:00.034856 2742 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-n62zt\" (UniqueName: \"kubernetes.io/projected/01f1c928-1706-42a5-bb56-01a783fa8509-kube-api-access-n62zt\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:04:00.037131 kubelet[2742]: I0130 05:04:00.034872 2742 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-policysync\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:04:00.042703 kubelet[2742]: I0130 05:04:00.034889 2742 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-xtables-lock\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:04:00.042703 kubelet[2742]: I0130 05:04:00.034904 2742 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-bin-dir\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:04:00.042703 kubelet[2742]: I0130 05:04:00.034965 2742 reconciler_common.go:289] "Volume detached for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/01f1c928-1706-42a5-bb56-01a783fa8509-cni-log-dir\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:04:00.042703 kubelet[2742]: I0130 05:04:00.034981 2742 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01f1c928-1706-42a5-bb56-01a783fa8509-tigera-ca-bundle\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:04:00.042703 kubelet[2742]: E0130 05:04:00.035350 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:00.054699 containerd[1585]: time="2025-01-30T05:04:00.053381735Z" level=info msg="RemoveContainer for \"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b\"" Jan 30 05:04:00.069416 sshd[5814]: Accepted publickey for core from 147.75.109.163 port 37684 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:00.074914 containerd[1585]: time="2025-01-30T05:04:00.074667023Z" level=info msg="RemoveContainer for \"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b\" returns successfully" Jan 30 05:04:00.075101 kubelet[2742]: I0130 05:04:00.075040 2742 scope.go:117] "RemoveContainer" containerID="a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6" Jan 30 05:04:00.092137 sshd[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:00.134777 systemd-logind[1561]: New session 19 of user core. Jan 30 05:04:00.136231 containerd[1585]: time="2025-01-30T05:04:00.114482640Z" level=error msg="ContainerStatus for \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\": not found" Jan 30 05:04:00.136945 kubelet[2742]: E0130 05:04:00.136881 2742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\": not found" containerID="a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6" Jan 30 05:04:00.137611 kubelet[2742]: I0130 05:04:00.137136 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6"} err="failed to get container status \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"a31db655e5b794c5065fc7f328f55ae5fd6ad38ee5c92e049de67c8202ab77b6\": not found" Jan 30 05:04:00.137611 kubelet[2742]: I0130 05:04:00.137207 2742 scope.go:117] "RemoveContainer" containerID="12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388" Jan 30 05:04:00.139785 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 30 05:04:00.160997 containerd[1585]: time="2025-01-30T05:04:00.160291743Z" level=error msg="ContainerStatus for \"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388\": not found" Jan 30 05:04:00.175631 kubelet[2742]: E0130 05:04:00.173903 2742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388\": not found" containerID="12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388" Jan 30 05:04:00.175631 kubelet[2742]: I0130 05:04:00.173964 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388"} err="failed to get container status \"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388\": rpc error: code = NotFound desc = an error occurred when try to find container \"12d19d15846f8e7bc5e7bcd2fe37427882ed9c61bc502c1a0fc635abb7706388\": not found" Jan 30 05:04:00.175631 kubelet[2742]: I0130 05:04:00.173999 2742 scope.go:117] "RemoveContainer" containerID="2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b" Jan 30 05:04:00.188698 containerd[1585]: time="2025-01-30T05:04:00.187719839Z" level=error msg="ContainerStatus for \"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b\": not found" Jan 30 05:04:00.194245 kubelet[2742]: E0130 05:04:00.193511 2742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b\": not found" containerID="2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b" Jan 30 05:04:00.194245 kubelet[2742]: I0130 05:04:00.193609 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b"} err="failed to get container status \"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b25f80874b76b6eaad10b9ca4f555755cded8b89ff8c543328221a3bd4c559b\": not found" Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:03:59.622 [INFO][5779] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:03:59.631 [INFO][5779] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" iface="eth0" netns="/var/run/netns/cni-46731d26-9220-06f5-d794-cb18e76650f7" Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:03:59.634 [INFO][5779] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" iface="eth0" netns="/var/run/netns/cni-46731d26-9220-06f5-d794-cb18e76650f7" Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:03:59.666 [INFO][5779] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" after=34.123397ms iface="eth0" netns="/var/run/netns/cni-46731d26-9220-06f5-d794-cb18e76650f7" Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:03:59.668 [INFO][5779] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:03:59.669 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:04:00.001 [INFO][5807] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:04:00.006 [INFO][5807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:04:00.007 [INFO][5807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:04:00.306 [INFO][5807] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:04:00.309 [INFO][5807] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:04:00.326 [INFO][5807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:04:00.359934 containerd[1585]: 2025-01-30 05:04:00.352 [INFO][5779] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:00.370524 containerd[1585]: time="2025-01-30T05:04:00.362050171Z" level=info msg="TearDown network for sandbox \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\" successfully" Jan 30 05:04:00.370524 containerd[1585]: time="2025-01-30T05:04:00.362622276Z" level=info msg="StopPodSandbox for \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\" returns successfully" Jan 30 05:04:00.420758 kubelet[2742]: E0130 05:04:00.420626 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:00.445424 containerd[1585]: time="2025-01-30T05:04:00.445313246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-88tpl,Uid:ec40f2b9-0fda-43a0-b8ac-068843d953d5,Namespace:calico-system,Attempt:0,}" Jan 30 05:04:00.543198 kubelet[2742]: I0130 05:04:00.542868 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/160f5477-6b56-47a9-a6b1-0ce2a996310c-tigera-ca-bundle\") pod \"160f5477-6b56-47a9-a6b1-0ce2a996310c\" (UID: \"160f5477-6b56-47a9-a6b1-0ce2a996310c\") " Jan 30 05:04:00.543198 kubelet[2742]: I0130 05:04:00.542939 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w76rc\" (UniqueName: \"kubernetes.io/projected/160f5477-6b56-47a9-a6b1-0ce2a996310c-kube-api-access-w76rc\") pod \"160f5477-6b56-47a9-a6b1-0ce2a996310c\" (UID: \"160f5477-6b56-47a9-a6b1-0ce2a996310c\") " Jan 30 05:04:00.544678 containerd[1585]: time="2025-01-30T05:04:00.541913751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 05:04:00.544678 containerd[1585]: time="2025-01-30T05:04:00.544482175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 05:04:00.546916 containerd[1585]: time="2025-01-30T05:04:00.544609361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:04:00.546916 containerd[1585]: time="2025-01-30T05:04:00.544826579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 05:04:00.554643 kubelet[2742]: I0130 05:04:00.553986 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/160f5477-6b56-47a9-a6b1-0ce2a996310c-kube-api-access-w76rc" (OuterVolumeSpecName: "kube-api-access-w76rc") pod "160f5477-6b56-47a9-a6b1-0ce2a996310c" (UID: "160f5477-6b56-47a9-a6b1-0ce2a996310c"). InnerVolumeSpecName "kube-api-access-w76rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:04:00.557113 kubelet[2742]: I0130 05:04:00.557001 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/160f5477-6b56-47a9-a6b1-0ce2a996310c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "160f5477-6b56-47a9-a6b1-0ce2a996310c" (UID: "160f5477-6b56-47a9-a6b1-0ce2a996310c"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:04:00.651053 kubelet[2742]: I0130 05:04:00.650860 2742 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w76rc\" (UniqueName: \"kubernetes.io/projected/160f5477-6b56-47a9-a6b1-0ce2a996310c-kube-api-access-w76rc\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:04:00.651053 kubelet[2742]: I0130 05:04:00.650907 2742 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/160f5477-6b56-47a9-a6b1-0ce2a996310c-tigera-ca-bundle\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:04:00.682117 systemd[1]: var-lib-kubelet-pods-160f5477\x2d6b56\x2d47a9\x2da6b1\x2d0ce2a996310c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 30 05:04:00.682423 systemd[1]: run-netns-cni\x2d46731d26\x2d9220\x2d06f5\x2dd794\x2dcb18e76650f7.mount: Deactivated successfully. Jan 30 05:04:00.682673 systemd[1]: var-lib-kubelet-pods-160f5477\x2d6b56\x2d47a9\x2da6b1\x2d0ce2a996310c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw76rc.mount: Deactivated successfully. Jan 30 05:04:00.837247 containerd[1585]: time="2025-01-30T05:04:00.836472432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-88tpl,Uid:ec40f2b9-0fda-43a0-b8ac-068843d953d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd71234e9ba8534ffaebb171c9c1e216de9c28d99c98bee7f09d0130475919ef\"" Jan 30 05:04:00.839362 kubelet[2742]: E0130 05:04:00.839291 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:00.853150 kubelet[2742]: I0130 05:04:00.852987 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f1c928-1706-42a5-bb56-01a783fa8509" path="/var/lib/kubelet/pods/01f1c928-1706-42a5-bb56-01a783fa8509/volumes" Jan 30 05:04:00.856499 containerd[1585]: time="2025-01-30T05:04:00.855930821Z" level=info msg="CreateContainer within sandbox \"cd71234e9ba8534ffaebb171c9c1e216de9c28d99c98bee7f09d0130475919ef\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 05:04:00.860946 kubelet[2742]: I0130 05:04:00.860897 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="160f5477-6b56-47a9-a6b1-0ce2a996310c" path="/var/lib/kubelet/pods/160f5477-6b56-47a9-a6b1-0ce2a996310c/volumes" Jan 30 05:04:00.910218 containerd[1585]: time="2025-01-30T05:04:00.908175312Z" level=info msg="CreateContainer within sandbox \"cd71234e9ba8534ffaebb171c9c1e216de9c28d99c98bee7f09d0130475919ef\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"be57ef067c4a82051970706d5da8ea9137f755a02a5122e49ed961d80a1a3d70\"" Jan 30 05:04:00.913622 containerd[1585]: time="2025-01-30T05:04:00.910689081Z" level=info msg="StartContainer for \"be57ef067c4a82051970706d5da8ea9137f755a02a5122e49ed961d80a1a3d70\"" Jan 30 05:04:01.009856 systemd[1]: run-containerd-runc-k8s.io-be57ef067c4a82051970706d5da8ea9137f755a02a5122e49ed961d80a1a3d70-runc.aBShJJ.mount: Deactivated successfully. 
Jan 30 05:04:01.074711 containerd[1585]: time="2025-01-30T05:04:01.074640388Z" level=info msg="StartContainer for \"be57ef067c4a82051970706d5da8ea9137f755a02a5122e49ed961d80a1a3d70\" returns successfully" Jan 30 05:04:01.324092 containerd[1585]: time="2025-01-30T05:04:01.322850688Z" level=info msg="shim disconnected" id=be57ef067c4a82051970706d5da8ea9137f755a02a5122e49ed961d80a1a3d70 namespace=k8s.io Jan 30 05:04:01.324092 containerd[1585]: time="2025-01-30T05:04:01.322930734Z" level=warning msg="cleaning up after shim disconnected" id=be57ef067c4a82051970706d5da8ea9137f755a02a5122e49ed961d80a1a3d70 namespace=k8s.io Jan 30 05:04:01.324092 containerd[1585]: time="2025-01-30T05:04:01.322945011Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:04:01.487788 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:04:01.480701 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:04:01.480712 systemd-resolved[1473]: Flushed all caches. Jan 30 05:04:01.632689 kubelet[2742]: E0130 05:04:01.630437 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:01.646953 containerd[1585]: time="2025-01-30T05:04:01.646891992Z" level=info msg="CreateContainer within sandbox \"cd71234e9ba8534ffaebb171c9c1e216de9c28d99c98bee7f09d0130475919ef\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 05:04:01.653300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be57ef067c4a82051970706d5da8ea9137f755a02a5122e49ed961d80a1a3d70-rootfs.mount: Deactivated successfully. Jan 30 05:04:01.717625 containerd[1585]: time="2025-01-30T05:04:01.717526669Z" level=info msg="CreateContainer within sandbox \"cd71234e9ba8534ffaebb171c9c1e216de9c28d99c98bee7f09d0130475919ef\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"15dcac712086edb21b3d584f85e86d242a69860cf2d5a28ffc0a535467760778\"" Jan 30 05:04:01.721691 containerd[1585]: time="2025-01-30T05:04:01.721059841Z" level=info msg="StartContainer for \"15dcac712086edb21b3d584f85e86d242a69860cf2d5a28ffc0a535467760778\"" Jan 30 05:04:02.028193 containerd[1585]: time="2025-01-30T05:04:02.026726380Z" level=info msg="StartContainer for \"15dcac712086edb21b3d584f85e86d242a69860cf2d5a28ffc0a535467760778\" returns successfully" Jan 30 05:04:02.614048 systemd[1]: Started sshd@20-137.184.120.173:22-218.92.0.157:43329.service - OpenSSH per-connection server daemon (218.92.0.157:43329). Jan 30 05:04:02.648723 kubelet[2742]: E0130 05:04:02.645659 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:03.732658 sshd[5986]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Jan 30 05:04:03.991161 sshd[5814]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:04.018624 systemd[1]: Started sshd@21-137.184.120.173:22-147.75.109.163:37690.service - OpenSSH per-connection server daemon (147.75.109.163:37690). Jan 30 05:04:04.025260 systemd[1]: sshd@19-137.184.120.173:22-147.75.109.163:37684.service: Deactivated successfully. Jan 30 05:04:04.048640 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit. Jan 30 05:04:04.049158 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 30 05:04:04.056449 systemd-logind[1561]: Removed session 19. Jan 30 05:04:04.074494 containerd[1585]: time="2025-01-30T05:04:04.068679983Z" level=info msg="shim disconnected" id=512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97 namespace=k8s.io Jan 30 05:04:04.074494 containerd[1585]: time="2025-01-30T05:04:04.068767676Z" level=warning msg="cleaning up after shim disconnected" id=512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97 namespace=k8s.io Jan 30 05:04:04.074494 containerd[1585]: time="2025-01-30T05:04:04.068782260Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:04:04.075370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97-rootfs.mount: Deactivated successfully. Jan 30 05:04:04.205869 containerd[1585]: time="2025-01-30T05:04:04.204530881Z" level=info msg="StopContainer for \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\" returns successfully" Jan 30 05:04:04.208684 containerd[1585]: time="2025-01-30T05:04:04.207999264Z" level=info msg="StopPodSandbox for \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\"" Jan 30 05:04:04.208684 containerd[1585]: time="2025-01-30T05:04:04.208068533Z" level=info msg="Container to stop \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 05:04:04.213624 sshd[6002]: Accepted publickey for core from 147.75.109.163 port 37690 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:04.222125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839-shm.mount: Deactivated successfully. Jan 30 05:04:04.224709 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:04.247159 systemd-logind[1561]: New session 20 of user core. Jan 30 05:04:04.253536 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 05:04:04.365881 containerd[1585]: time="2025-01-30T05:04:04.365799809Z" level=info msg="shim disconnected" id=7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839 namespace=k8s.io Jan 30 05:04:04.365881 containerd[1585]: time="2025-01-30T05:04:04.365866434Z" level=warning msg="cleaning up after shim disconnected" id=7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839 namespace=k8s.io Jan 30 05:04:04.365881 containerd[1585]: time="2025-01-30T05:04:04.365876245Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:04:04.368082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839-rootfs.mount: Deactivated successfully. 
Jan 30 05:04:04.418939 containerd[1585]: time="2025-01-30T05:04:04.418358375Z" level=info msg="TearDown network for sandbox \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\" successfully" Jan 30 05:04:04.418939 containerd[1585]: time="2025-01-30T05:04:04.418411151Z" level=info msg="StopPodSandbox for \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\" returns successfully" Jan 30 05:04:04.489494 kubelet[2742]: I0130 05:04:04.488646 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85413716-d934-4846-9d00-52f8e328b411-tigera-ca-bundle\") pod \"85413716-d934-4846-9d00-52f8e328b411\" (UID: \"85413716-d934-4846-9d00-52f8e328b411\") " Jan 30 05:04:04.489494 kubelet[2742]: I0130 05:04:04.488772 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/85413716-d934-4846-9d00-52f8e328b411-typha-certs\") pod \"85413716-d934-4846-9d00-52f8e328b411\" (UID: \"85413716-d934-4846-9d00-52f8e328b411\") " Jan 30 05:04:04.489494 kubelet[2742]: I0130 05:04:04.488834 2742 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57zxs\" (UniqueName: \"kubernetes.io/projected/85413716-d934-4846-9d00-52f8e328b411-kube-api-access-57zxs\") pod \"85413716-d934-4846-9d00-52f8e328b411\" (UID: \"85413716-d934-4846-9d00-52f8e328b411\") " Jan 30 05:04:04.505972 kubelet[2742]: I0130 05:04:04.505816 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85413716-d934-4846-9d00-52f8e328b411-kube-api-access-57zxs" (OuterVolumeSpecName: "kube-api-access-57zxs") pod "85413716-d934-4846-9d00-52f8e328b411" (UID: "85413716-d934-4846-9d00-52f8e328b411"). InnerVolumeSpecName "kube-api-access-57zxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 05:04:04.513911 systemd[1]: var-lib-kubelet-pods-85413716\x2dd934\x2d4846\x2d9d00\x2d52f8e328b411-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 30 05:04:04.514186 systemd[1]: var-lib-kubelet-pods-85413716\x2dd934\x2d4846\x2d9d00\x2d52f8e328b411-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d57zxs.mount: Deactivated successfully. Jan 30 05:04:04.514964 kubelet[2742]: I0130 05:04:04.514877 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85413716-d934-4846-9d00-52f8e328b411-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "85413716-d934-4846-9d00-52f8e328b411" (UID: "85413716-d934-4846-9d00-52f8e328b411"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 05:04:04.519735 kubelet[2742]: I0130 05:04:04.519465 2742 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85413716-d934-4846-9d00-52f8e328b411-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "85413716-d934-4846-9d00-52f8e328b411" (UID: "85413716-d934-4846-9d00-52f8e328b411"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 05:04:04.529629 systemd[1]: var-lib-kubelet-pods-85413716\x2dd934\x2d4846\x2d9d00\x2d52f8e328b411-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Jan 30 05:04:04.590267 kubelet[2742]: I0130 05:04:04.590153 2742 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85413716-d934-4846-9d00-52f8e328b411-tigera-ca-bundle\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:04:04.590267 kubelet[2742]: I0130 05:04:04.590195 2742 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/85413716-d934-4846-9d00-52f8e328b411-typha-certs\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:04:04.590267 kubelet[2742]: I0130 05:04:04.590238 2742 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-57zxs\" (UniqueName: \"kubernetes.io/projected/85413716-d934-4846-9d00-52f8e328b411-kube-api-access-57zxs\") on node \"ci-4081.3.0-d-47de560844\" DevicePath \"\"" Jan 30 05:04:04.660049 kubelet[2742]: I0130 05:04:04.659999 2742 scope.go:117] "RemoveContainer" containerID="512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97" Jan 30 05:04:04.706166 containerd[1585]: time="2025-01-30T05:04:04.706081546Z" level=info msg="RemoveContainer for \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\"" Jan 30 05:04:04.721405 containerd[1585]: time="2025-01-30T05:04:04.721347808Z" level=info msg="RemoveContainer for \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\" returns successfully" Jan 30 05:04:04.723231 kubelet[2742]: I0130 05:04:04.723195 2742 scope.go:117] "RemoveContainer" containerID="512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97" Jan 30 05:04:04.724099 containerd[1585]: time="2025-01-30T05:04:04.723911976Z" level=error msg="ContainerStatus for \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\": not found" Jan 30 05:04:04.725159 kubelet[2742]: E0130 05:04:04.724543 2742 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\": not found" containerID="512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97" Jan 30 05:04:04.725159 kubelet[2742]: I0130 05:04:04.724596 2742 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97"} err="failed to get container status \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\": rpc error: code = NotFound desc = an error occurred when try to find container \"512937c341709de728189f19de03217a19eea97cc988a471bcedadb5123bdb97\": not found" Jan 30 05:04:04.859290 kubelet[2742]: I0130 05:04:04.859242 2742 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85413716-d934-4846-9d00-52f8e328b411" path="/var/lib/kubelet/pods/85413716-d934-4846-9d00-52f8e328b411/volumes" Jan 30 05:04:05.452077 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:04:05.450067 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:04:05.450079 systemd-resolved[1473]: Flushed all caches. 
Jan 30 05:04:05.569321 sshd[5982]: PAM: Permission denied for root from 218.92.0.157 Jan 30 05:04:05.864998 sshd[6068]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Jan 30 05:04:05.995449 sshd[6002]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:06.012907 systemd[1]: Started sshd@22-137.184.120.173:22-147.75.109.163:37692.service - OpenSSH per-connection server daemon (147.75.109.163:37692). Jan 30 05:04:06.013987 systemd[1]: sshd@21-137.184.120.173:22-147.75.109.163:37690.service: Deactivated successfully. Jan 30 05:04:06.032371 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 05:04:06.033109 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit. Jan 30 05:04:06.043649 systemd-logind[1561]: Removed session 20. Jan 30 05:04:06.130768 sshd[6069]: Accepted publickey for core from 147.75.109.163 port 37692 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:06.134982 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:06.159176 systemd-logind[1561]: New session 21 of user core. Jan 30 05:04:06.166076 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 05:04:06.450429 sshd[6069]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:06.458075 systemd[1]: sshd@22-137.184.120.173:22-147.75.109.163:37692.service: Deactivated successfully. Jan 30 05:04:06.468630 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 05:04:06.474505 systemd-logind[1561]: Session 21 logged out. Waiting for processes to exit. Jan 30 05:04:06.477055 systemd-logind[1561]: Removed session 21. Jan 30 05:04:06.595223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15dcac712086edb21b3d584f85e86d242a69860cf2d5a28ffc0a535467760778-rootfs.mount: Deactivated successfully. Jan 30 05:04:06.606534 containerd[1585]: time="2025-01-30T05:04:06.606409640Z" level=info msg="shim disconnected" id=15dcac712086edb21b3d584f85e86d242a69860cf2d5a28ffc0a535467760778 namespace=k8s.io Jan 30 05:04:06.608553 containerd[1585]: time="2025-01-30T05:04:06.607014435Z" level=warning msg="cleaning up after shim disconnected" id=15dcac712086edb21b3d584f85e86d242a69860cf2d5a28ffc0a535467760778 namespace=k8s.io Jan 30 05:04:06.608553 containerd[1585]: time="2025-01-30T05:04:06.607048921Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 05:04:07.499858 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:04:07.496995 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:04:07.497007 systemd-resolved[1473]: Flushed all caches. Jan 30 05:04:07.640201 sshd[5982]: PAM: Permission denied for root from 218.92.0.157 Jan 30 05:04:07.677583 kubelet[2742]: E0130 05:04:07.677502 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:07.716295 containerd[1585]: time="2025-01-30T05:04:07.716245670Z" level=info msg="CreateContainer within sandbox \"cd71234e9ba8534ffaebb171c9c1e216de9c28d99c98bee7f09d0130475919ef\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 05:04:07.776507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237359619.mount: Deactivated successfully. 
Jan 30 05:04:07.779424 containerd[1585]: time="2025-01-30T05:04:07.778457054Z" level=info msg="CreateContainer within sandbox \"cd71234e9ba8534ffaebb171c9c1e216de9c28d99c98bee7f09d0130475919ef\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ffb57cceb1fc2c218bac0f2b07fd17ee1ab51600b47002d1fa99ddffb20b79ff\"" Jan 30 05:04:07.779958 containerd[1585]: time="2025-01-30T05:04:07.779929082Z" level=info msg="StartContainer for \"ffb57cceb1fc2c218bac0f2b07fd17ee1ab51600b47002d1fa99ddffb20b79ff\"" Jan 30 05:04:07.881543 containerd[1585]: time="2025-01-30T05:04:07.881344956Z" level=info msg="StartContainer for \"ffb57cceb1fc2c218bac0f2b07fd17ee1ab51600b47002d1fa99ddffb20b79ff\" returns successfully" Jan 30 05:04:07.934971 sshd[6115]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Jan 30 05:04:08.690673 kubelet[2742]: E0130 05:04:08.686377 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:08.782464 kubelet[2742]: I0130 05:04:08.732759 2742 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-88tpl" podStartSLOduration=9.718777601 podStartE2EDuration="9.718777601s" podCreationTimestamp="2025-01-30 05:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 05:04:08.718089248 +0000 UTC m=+94.026965094" watchObservedRunningTime="2025-01-30 05:04:08.718777601 +0000 UTC m=+94.027653448" Jan 30 05:04:09.726055 kubelet[2742]: E0130 05:04:09.725936 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:09.808776 systemd[1]: run-containerd-runc-k8s.io-ffb57cceb1fc2c218bac0f2b07fd17ee1ab51600b47002d1fa99ddffb20b79ff-runc.HrFJAi.mount: Deactivated successfully. Jan 30 05:04:09.984443 sshd[5982]: PAM: Permission denied for root from 218.92.0.157 Jan 30 05:04:10.130988 sshd[5982]: Received disconnect from 218.92.0.157 port 43329:11: [preauth] Jan 30 05:04:10.130988 sshd[5982]: Disconnected from authenticating user root 218.92.0.157 port 43329 [preauth] Jan 30 05:04:10.133601 systemd[1]: sshd@20-137.184.120.173:22-218.92.0.157:43329.service: Deactivated successfully. Jan 30 05:04:11.460115 systemd[1]: Started sshd@23-137.184.120.173:22-147.75.109.163:56546.service - OpenSSH per-connection server daemon (147.75.109.163:56546). Jan 30 05:04:11.558525 sshd[6416]: Accepted publickey for core from 147.75.109.163 port 56546 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:11.563358 sshd[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:11.571918 systemd-logind[1561]: New session 22 of user core. Jan 30 05:04:11.578849 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 05:04:12.448557 sshd[6416]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:12.456377 systemd[1]: sshd@23-137.184.120.173:22-147.75.109.163:56546.service: Deactivated successfully. Jan 30 05:04:12.457276 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit. Jan 30 05:04:12.464374 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 05:04:12.468105 systemd-logind[1561]: Removed session 22. 
Jan 30 05:04:13.448747 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:04:13.451902 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:04:13.448757 systemd-resolved[1473]: Flushed all caches. Jan 30 05:04:17.468357 systemd[1]: Started sshd@24-137.184.120.173:22-147.75.109.163:48208.service - OpenSSH per-connection server daemon (147.75.109.163:48208). Jan 30 05:04:17.543990 sshd[6438]: Accepted publickey for core from 147.75.109.163 port 48208 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:17.546279 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:17.554853 systemd-logind[1561]: New session 23 of user core. Jan 30 05:04:17.561053 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 05:04:17.749070 sshd[6438]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:17.756401 systemd[1]: sshd@24-137.184.120.173:22-147.75.109.163:48208.service: Deactivated successfully. Jan 30 05:04:17.764185 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 05:04:17.765622 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit. Jan 30 05:04:17.767309 systemd-logind[1561]: Removed session 23. Jan 30 05:04:21.450957 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:04:21.448700 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:04:21.448709 systemd-resolved[1473]: Flushed all caches. Jan 30 05:04:22.763777 systemd[1]: Started sshd@25-137.184.120.173:22-147.75.109.163:48220.service - OpenSSH per-connection server daemon (147.75.109.163:48220). Jan 30 05:04:22.885972 sshd[6460]: Accepted publickey for core from 147.75.109.163 port 48220 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:22.889707 sshd[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:22.899632 systemd-logind[1561]: New session 24 of user core. Jan 30 05:04:22.904038 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 05:04:23.472530 sshd[6460]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:23.482080 systemd[1]: sshd@25-137.184.120.173:22-147.75.109.163:48220.service: Deactivated successfully. Jan 30 05:04:23.489278 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 05:04:23.490271 systemd-logind[1561]: Session 24 logged out. Waiting for processes to exit. Jan 30 05:04:23.495331 systemd-logind[1561]: Removed session 24. Jan 30 05:04:23.498934 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:04:23.497873 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:04:23.497886 systemd-resolved[1473]: Flushed all caches. Jan 30 05:04:25.738359 systemd[1]: Started sshd@26-137.184.120.173:22-2.57.122.196:34862.service - OpenSSH per-connection server daemon (2.57.122.196:34862). Jan 30 05:04:26.458685 sshd[6476]: Invalid user ubuntu from 2.57.122.196 port 34862 Jan 30 05:04:26.638048 sshd[6476]: Connection closed by invalid user ubuntu 2.57.122.196 port 34862 [preauth] Jan 30 05:04:26.641337 systemd[1]: sshd@26-137.184.120.173:22-2.57.122.196:34862.service: Deactivated successfully. Jan 30 05:04:28.499425 systemd[1]: Started sshd@27-137.184.120.173:22-147.75.109.163:41900.service - OpenSSH per-connection server daemon (147.75.109.163:41900). 
Jan 30 05:04:28.553380 sshd[6481]: Accepted publickey for core from 147.75.109.163 port 41900 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:28.555729 sshd[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:28.561842 systemd-logind[1561]: New session 25 of user core. Jan 30 05:04:28.571446 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 05:04:28.778214 sshd[6481]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:28.786174 systemd[1]: sshd@27-137.184.120.173:22-147.75.109.163:41900.service: Deactivated successfully. Jan 30 05:04:28.792674 systemd-logind[1561]: Session 25 logged out. Waiting for processes to exit. Jan 30 05:04:28.793431 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 05:04:28.795815 systemd-logind[1561]: Removed session 25. Jan 30 05:04:29.045780 kubelet[2742]: E0130 05:04:29.045481 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:29.451090 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:04:29.448951 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:04:29.448962 systemd-resolved[1473]: Flushed all caches. Jan 30 05:04:30.572685 kubelet[2742]: E0130 05:04:30.572391 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:32.848126 kubelet[2742]: E0130 05:04:32.847627 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:33.786957 systemd[1]: Started sshd@28-137.184.120.173:22-147.75.109.163:41914.service - OpenSSH per-connection server daemon (147.75.109.163:41914). Jan 30 05:04:33.865297 sshd[6525]: Accepted publickey for core from 147.75.109.163 port 41914 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:33.869301 sshd[6525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:33.876610 systemd-logind[1561]: New session 26 of user core. Jan 30 05:04:33.882102 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 05:04:34.670474 sshd[6525]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:34.676443 systemd[1]: sshd@28-137.184.120.173:22-147.75.109.163:41914.service: Deactivated successfully. Jan 30 05:04:34.683838 systemd-logind[1561]: Session 26 logged out. Waiting for processes to exit. Jan 30 05:04:34.685066 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 05:04:34.687004 systemd-logind[1561]: Removed session 26. Jan 30 05:04:35.467487 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:04:35.465233 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:04:35.465244 systemd-resolved[1473]: Flushed all caches. 
Jan 30 05:04:36.716324 kubelet[2742]: I0130 05:04:36.716255 2742 scope.go:117] "RemoveContainer" containerID="12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a" Jan 30 05:04:36.721609 containerd[1585]: time="2025-01-30T05:04:36.721341654Z" level=info msg="RemoveContainer for \"12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a\"" Jan 30 05:04:36.732889 containerd[1585]: time="2025-01-30T05:04:36.732722578Z" level=info msg="RemoveContainer for \"12cad337c5e2314719309806ab2b6d3d4c54f62ab03a0e4c6391751702878c3a\" returns successfully" Jan 30 05:04:36.734647 containerd[1585]: time="2025-01-30T05:04:36.734605148Z" level=info msg="StopPodSandbox for \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\"" Jan 30 05:04:36.735084 containerd[1585]: time="2025-01-30T05:04:36.734743474Z" level=info msg="TearDown network for sandbox \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" successfully" Jan 30 05:04:36.735084 containerd[1585]: time="2025-01-30T05:04:36.734761639Z" level=info msg="StopPodSandbox for \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" returns successfully" Jan 30 05:04:36.739911 containerd[1585]: time="2025-01-30T05:04:36.739796916Z" level=info msg="RemovePodSandbox for \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\"" Jan 30 05:04:36.739911 containerd[1585]: time="2025-01-30T05:04:36.739849632Z" level=info msg="Forcibly stopping sandbox \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\"" Jan 30 05:04:36.751362 containerd[1585]: time="2025-01-30T05:04:36.751275580Z" level=info msg="TearDown network for sandbox \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" successfully" Jan 30 05:04:36.777018 containerd[1585]: time="2025-01-30T05:04:36.776918650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 05:04:36.778060 containerd[1585]: time="2025-01-30T05:04:36.777057721Z" level=info msg="RemovePodSandbox \"668bd32ba93787e598b6e32dd3a30d1706a510be265a5f346ca0bb0d13905a81\" returns successfully" Jan 30 05:04:36.778060 containerd[1585]: time="2025-01-30T05:04:36.777821416Z" level=info msg="StopPodSandbox for \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\"" Jan 30 05:04:36.778060 containerd[1585]: time="2025-01-30T05:04:36.777912690Z" level=info msg="TearDown network for sandbox \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\" successfully" Jan 30 05:04:36.778060 containerd[1585]: time="2025-01-30T05:04:36.777923744Z" level=info msg="StopPodSandbox for \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\" returns successfully" Jan 30 05:04:36.778664 containerd[1585]: time="2025-01-30T05:04:36.778535771Z" level=info msg="RemovePodSandbox for \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\"" Jan 30 05:04:36.778664 containerd[1585]: time="2025-01-30T05:04:36.778581721Z" level=info msg="Forcibly stopping sandbox \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\"" Jan 30 05:04:36.779785 containerd[1585]: time="2025-01-30T05:04:36.778862081Z" level=info msg="TearDown network for sandbox \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\" successfully" Jan 30 05:04:36.785469 containerd[1585]: time="2025-01-30T05:04:36.785411952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:04:36.785661 containerd[1585]: time="2025-01-30T05:04:36.785519151Z" level=info msg="RemovePodSandbox \"7d658b300f1647c9afac48b2304a109cfb44b9d3e4413c0caf7a6dd30ffad839\" returns successfully" Jan 30 05:04:36.786207 containerd[1585]: time="2025-01-30T05:04:36.786163767Z" level=info msg="StopPodSandbox for \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\"" Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:36.955 [WARNING][6553] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:36.956 [INFO][6553] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:36.956 [INFO][6553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" iface="eth0" netns="" Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:36.956 [INFO][6553] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:36.956 [INFO][6553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:37.045 [INFO][6560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:37.046 [INFO][6560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:37.046 [INFO][6560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:37.057 [WARNING][6560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:37.057 [INFO][6560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:37.059 [INFO][6560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:04:37.065185 containerd[1585]: 2025-01-30 05:04:37.061 [INFO][6553] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:37.065185 containerd[1585]: time="2025-01-30T05:04:37.065128129Z" level=info msg="TearDown network for sandbox \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\" successfully" Jan 30 05:04:37.065185 containerd[1585]: time="2025-01-30T05:04:37.065154440Z" level=info msg="StopPodSandbox for \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\" returns successfully" Jan 30 05:04:37.068759 containerd[1585]: time="2025-01-30T05:04:37.066036820Z" level=info msg="RemovePodSandbox for \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\"" Jan 30 05:04:37.068759 containerd[1585]: time="2025-01-30T05:04:37.066298790Z" level=info msg="Forcibly stopping sandbox \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\"" Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.128 [WARNING][6578] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" WorkloadEndpoint="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.128 [INFO][6578] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.128 [INFO][6578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" iface="eth0" netns="" Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.129 [INFO][6578] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.129 [INFO][6578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.162 [INFO][6585] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.162 [INFO][6585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.162 [INFO][6585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.169 [WARNING][6585] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.169 [INFO][6585] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" HandleID="k8s-pod-network.3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Workload="ci--4081.3.0--d--47de560844-k8s-calico--kube--controllers--764f56cffb--268h9-eth0" Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.171 [INFO][6585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 05:04:37.177455 containerd[1585]: 2025-01-30 05:04:37.175 [INFO][6578] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e" Jan 30 05:04:37.178238 containerd[1585]: time="2025-01-30T05:04:37.177510664Z" level=info msg="TearDown network for sandbox \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\" successfully" Jan 30 05:04:37.190763 containerd[1585]: time="2025-01-30T05:04:37.190679463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 05:04:37.191017 containerd[1585]: time="2025-01-30T05:04:37.190776150Z" level=info msg="RemovePodSandbox \"3c85f44d37315b149301293d6f873f7a46bf186aef9b3818487bcfbcad52a34e\" returns successfully" Jan 30 05:04:39.681097 systemd[1]: Started sshd@29-137.184.120.173:22-147.75.109.163:36820.service - OpenSSH per-connection server daemon (147.75.109.163:36820). Jan 30 05:04:39.776819 sshd[6591]: Accepted publickey for core from 147.75.109.163 port 36820 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:39.779731 sshd[6591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:39.785671 systemd-logind[1561]: New session 27 of user core. Jan 30 05:04:39.790963 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 05:04:40.199260 sshd[6591]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:40.203649 systemd[1]: sshd@29-137.184.120.173:22-147.75.109.163:36820.service: Deactivated successfully. Jan 30 05:04:40.209172 systemd-logind[1561]: Session 27 logged out. Waiting for processes to exit. Jan 30 05:04:40.209387 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 05:04:40.213066 systemd-logind[1561]: Removed session 27. Jan 30 05:04:45.212939 systemd[1]: Started sshd@30-137.184.120.173:22-147.75.109.163:36834.service - OpenSSH per-connection server daemon (147.75.109.163:36834). Jan 30 05:04:45.268126 sshd[6605]: Accepted publickey for core from 147.75.109.163 port 36834 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:45.269770 sshd[6605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:45.281379 systemd-logind[1561]: New session 28 of user core. Jan 30 05:04:45.287122 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 30 05:04:45.530907 sshd[6605]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:45.538197 systemd[1]: sshd@30-137.184.120.173:22-147.75.109.163:36834.service: Deactivated successfully. Jan 30 05:04:45.543001 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 05:04:45.544894 systemd-logind[1561]: Session 28 logged out. Waiting for processes to exit. Jan 30 05:04:45.547131 systemd-logind[1561]: Removed session 28. Jan 30 05:04:50.541120 systemd[1]: Started sshd@31-137.184.120.173:22-147.75.109.163:41106.service - OpenSSH per-connection server daemon (147.75.109.163:41106). Jan 30 05:04:50.599651 sshd[6631]: Accepted publickey for core from 147.75.109.163 port 41106 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:50.602334 sshd[6631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:50.612679 systemd-logind[1561]: New session 29 of user core. Jan 30 05:04:50.620274 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 30 05:04:50.817745 sshd[6631]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:50.825789 systemd[1]: sshd@31-137.184.120.173:22-147.75.109.163:41106.service: Deactivated successfully. Jan 30 05:04:50.831415 systemd[1]: session-29.scope: Deactivated successfully. Jan 30 05:04:50.833203 systemd-logind[1561]: Session 29 logged out. Waiting for processes to exit. Jan 30 05:04:50.834884 systemd-logind[1561]: Removed session 29. Jan 30 05:04:51.849919 kubelet[2742]: E0130 05:04:51.849769 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:04:55.830091 systemd[1]: Started sshd@32-137.184.120.173:22-147.75.109.163:41122.service - OpenSSH per-connection server daemon (147.75.109.163:41122). Jan 30 05:04:55.881218 sshd[6657]: Accepted publickey for core from 147.75.109.163 port 41122 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:04:55.883777 sshd[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:04:55.891764 systemd-logind[1561]: New session 30 of user core. Jan 30 05:04:55.901075 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 30 05:04:56.051933 sshd[6657]: pam_unix(sshd:session): session closed for user core Jan 30 05:04:56.056601 systemd[1]: sshd@32-137.184.120.173:22-147.75.109.163:41122.service: Deactivated successfully. Jan 30 05:04:56.063890 systemd-logind[1561]: Session 30 logged out. Waiting for processes to exit. Jan 30 05:04:56.064280 systemd[1]: session-30.scope: Deactivated successfully. Jan 30 05:04:56.067796 systemd-logind[1561]: Removed session 30. Jan 30 05:04:56.848896 kubelet[2742]: E0130 05:04:56.848510 2742 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 30 05:05:01.067049 systemd[1]: Started sshd@33-137.184.120.173:22-147.75.109.163:43978.service - OpenSSH per-connection server daemon (147.75.109.163:43978). Jan 30 05:05:01.196030 sshd[6693]: Accepted publickey for core from 147.75.109.163 port 43978 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:05:01.200326 sshd[6693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:05:01.210625 systemd-logind[1561]: New session 31 of user core. 
Jan 30 05:05:01.215173 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 30 05:05:01.917555 sshd[6693]: pam_unix(sshd:session): session closed for user core Jan 30 05:05:01.925397 systemd[1]: sshd@33-137.184.120.173:22-147.75.109.163:43978.service: Deactivated successfully. Jan 30 05:05:01.935964 systemd[1]: session-31.scope: Deactivated successfully. Jan 30 05:05:01.936048 systemd-logind[1561]: Session 31 logged out. Waiting for processes to exit. Jan 30 05:05:01.942501 systemd-logind[1561]: Removed session 31. Jan 30 05:05:03.500679 systemd-journald[1146]: Under memory pressure, flushing caches. Jan 30 05:05:03.497479 systemd-resolved[1473]: Under memory pressure, flushing caches. Jan 30 05:05:03.497493 systemd-resolved[1473]: Flushed all caches. Jan 30 05:05:06.968077 systemd[1]: Started sshd@34-137.184.120.173:22-147.75.109.163:43988.service - OpenSSH per-connection server daemon (147.75.109.163:43988). Jan 30 05:05:07.054049 sshd[6707]: Accepted publickey for core from 147.75.109.163 port 43988 ssh2: RSA SHA256:abisXdMawU/go7K1daSApUu8vion7woLXjWCopHjf7c Jan 30 05:05:07.057939 sshd[6707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 05:05:07.069283 systemd-logind[1561]: New session 32 of user core. Jan 30 05:05:07.079164 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 30 05:05:07.549882 sshd[6707]: pam_unix(sshd:session): session closed for user core Jan 30 05:05:07.560871 systemd[1]: sshd@34-137.184.120.173:22-147.75.109.163:43988.service: Deactivated successfully. Jan 30 05:05:07.569716 systemd[1]: session-32.scope: Deactivated successfully. Jan 30 05:05:07.576508 systemd-logind[1561]: Session 32 logged out. Waiting for processes to exit. Jan 30 05:05:07.584348 systemd-logind[1561]: Removed session 32.