Oct 9 07:24:57.893113 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:19:34 -00 2024
Oct 9 07:24:57.893151 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:24:57.893167 kernel: BIOS-provided physical RAM map:
Oct 9 07:24:57.893174 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 07:24:57.893180 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 07:24:57.893187 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 07:24:57.893194 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Oct 9 07:24:57.893201 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Oct 9 07:24:57.893208 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 07:24:57.893217 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 07:24:57.893224 kernel: NX (Execute Disable) protection: active
Oct 9 07:24:57.893231 kernel: APIC: Static calls initialized
Oct 9 07:24:57.893237 kernel: SMBIOS 2.8 present.
Oct 9 07:24:57.893244 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Oct 9 07:24:57.893253 kernel: Hypervisor detected: KVM
Oct 9 07:24:57.893263 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 07:24:57.893270 kernel: kvm-clock: using sched offset of 2889268475 cycles
Oct 9 07:24:57.893279 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 07:24:57.893287 kernel: tsc: Detected 2494.138 MHz processor
Oct 9 07:24:57.893295 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 07:24:57.893303 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 07:24:57.893310 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Oct 9 07:24:57.893318 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 07:24:57.893326 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 07:24:57.893336 kernel: ACPI: Early table checksum verification disabled
Oct 9 07:24:57.893344 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Oct 9 07:24:57.893351 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:24:57.893359 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:24:57.893367 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:24:57.893375 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 9 07:24:57.893382 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:24:57.893390 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:24:57.893398 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:24:57.893408 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:24:57.893416 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Oct 9 07:24:57.893423 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Oct 9 07:24:57.893431 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 9 07:24:57.893438 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Oct 9 07:24:57.893446 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Oct 9 07:24:57.893453 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Oct 9 07:24:57.893467 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Oct 9 07:24:57.893475 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 9 07:24:57.893483 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 9 07:24:57.893491 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 9 07:24:57.893499 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 9 07:24:57.893508 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Oct 9 07:24:57.893516 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Oct 9 07:24:57.893526 kernel: Zone ranges:
Oct 9 07:24:57.893535 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 07:24:57.893543 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Oct 9 07:24:57.893551 kernel: Normal empty
Oct 9 07:24:57.893559 kernel: Movable zone start for each node
Oct 9 07:24:57.893566 kernel: Early memory node ranges
Oct 9 07:24:57.893574 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 07:24:57.893582 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Oct 9 07:24:57.893590 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Oct 9 07:24:57.893601 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 07:24:57.893609 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 07:24:57.893617 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Oct 9 07:24:57.893625 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 07:24:57.893633 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 07:24:57.893641 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 07:24:57.893649 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 07:24:57.893657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 07:24:57.893665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 07:24:57.893675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 07:24:57.893683 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 07:24:57.893691 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 07:24:57.893699 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 07:24:57.893707 kernel: TSC deadline timer available
Oct 9 07:24:57.893715 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 9 07:24:57.893723 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 07:24:57.893731 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 9 07:24:57.893739 kernel: Booting paravirtualized kernel on KVM
Oct 9 07:24:57.893750 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 07:24:57.893758 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 9 07:24:57.893766 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 9 07:24:57.893774 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 9 07:24:57.893782 kernel: pcpu-alloc: [0] 0 1
Oct 9 07:24:57.893790 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 9 07:24:57.893799 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:24:57.893807 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 07:24:57.893818 kernel: random: crng init done
Oct 9 07:24:57.893825 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 07:24:57.893834 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 9 07:24:57.893841 kernel: Fallback order for Node 0: 0
Oct 9 07:24:57.893849 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Oct 9 07:24:57.893857 kernel: Policy zone: DMA32
Oct 9 07:24:57.893865 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 07:24:57.893874 kernel: Memory: 1965048K/2096600K available (12288K kernel code, 2304K rwdata, 22648K rodata, 49452K init, 1888K bss, 131292K reserved, 0K cma-reserved)
Oct 9 07:24:57.893882 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 9 07:24:57.893892 kernel: Kernel/User page tables isolation: enabled
Oct 9 07:24:57.893900 kernel: ftrace: allocating 37706 entries in 148 pages
Oct 9 07:24:57.893908 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 07:24:57.893916 kernel: Dynamic Preempt: voluntary
Oct 9 07:24:57.893924 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 07:24:57.893933 kernel: rcu: RCU event tracing is enabled.
Oct 9 07:24:57.893941 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 9 07:24:57.893949 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 07:24:57.893957 kernel: Rude variant of Tasks RCU enabled.
Oct 9 07:24:57.893968 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 07:24:57.893976 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 07:24:57.893984 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 9 07:24:57.893992 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 9 07:24:57.894000 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 07:24:57.894008 kernel: Console: colour VGA+ 80x25
Oct 9 07:24:57.894016 kernel: printk: console [tty0] enabled
Oct 9 07:24:57.894024 kernel: printk: console [ttyS0] enabled
Oct 9 07:24:57.894032 kernel: ACPI: Core revision 20230628
Oct 9 07:24:57.894040 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 07:24:57.894051 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 07:24:57.894059 kernel: x2apic enabled
Oct 9 07:24:57.894076 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 07:24:57.894085 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 07:24:57.894093 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Oct 9 07:24:57.894101 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Oct 9 07:24:57.894109 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 9 07:24:57.894117 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 9 07:24:57.894136 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 07:24:57.894145 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 07:24:57.894154 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 07:24:57.894165 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 07:24:57.894173 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 9 07:24:57.894182 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 07:24:57.894190 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 07:24:57.894199 kernel: MDS: Mitigation: Clear CPU buffers
Oct 9 07:24:57.894208 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 9 07:24:57.894219 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 07:24:57.894228 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 07:24:57.894237 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 07:24:57.894245 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 07:24:57.894254 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 9 07:24:57.894263 kernel: Freeing SMP alternatives memory: 32K
Oct 9 07:24:57.894271 kernel: pid_max: default: 32768 minimum: 301
Oct 9 07:24:57.894280 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 9 07:24:57.894291 kernel: SELinux: Initializing.
Oct 9 07:24:57.894300 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:24:57.894308 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:24:57.894317 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Oct 9 07:24:57.894326 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:24:57.894335 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:24:57.894343 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:24:57.894352 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Oct 9 07:24:57.894363 kernel: signal: max sigframe size: 1776
Oct 9 07:24:57.894371 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 07:24:57.894380 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 07:24:57.894389 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 9 07:24:57.894397 kernel: smp: Bringing up secondary CPUs ...
Oct 9 07:24:57.894406 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 07:24:57.894414 kernel: .... node #0, CPUs: #1
Oct 9 07:24:57.894423 kernel: smp: Brought up 1 node, 2 CPUs
Oct 9 07:24:57.894431 kernel: smpboot: Max logical packages: 1
Oct 9 07:24:57.894440 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Oct 9 07:24:57.894451 kernel: devtmpfs: initialized
Oct 9 07:24:57.894460 kernel: x86/mm: Memory block size: 128MB
Oct 9 07:24:57.894469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 07:24:57.894477 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 9 07:24:57.894486 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 07:24:57.894495 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 07:24:57.894503 kernel: audit: initializing netlink subsys (disabled)
Oct 9 07:24:57.894512 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 07:24:57.894520 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 07:24:57.894532 kernel: audit: type=2000 audit(1728458696.652:1): state=initialized audit_enabled=0 res=1
Oct 9 07:24:57.894540 kernel: cpuidle: using governor menu
Oct 9 07:24:57.894549 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 07:24:57.894557 kernel: dca service started, version 1.12.1
Oct 9 07:24:57.894566 kernel: PCI: Using configuration type 1 for base access
Oct 9 07:24:57.894577 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 07:24:57.894587 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 07:24:57.894595 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 07:24:57.894604 kernel: ACPI: Added _OSI(Module Device)
Oct 9 07:24:57.894616 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 07:24:57.894624 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 07:24:57.894633 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 07:24:57.894641 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 07:24:57.894650 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 07:24:57.894660 kernel: ACPI: Interpreter enabled
Oct 9 07:24:57.894669 kernel: ACPI: PM: (supports S0 S5)
Oct 9 07:24:57.894678 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 07:24:57.894686 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 07:24:57.894698 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 07:24:57.894706 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 9 07:24:57.894715 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 07:24:57.894905 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 07:24:57.895010 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 9 07:24:57.896231 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 9 07:24:57.896258 kernel: acpiphp: Slot [3] registered
Oct 9 07:24:57.896275 kernel: acpiphp: Slot [4] registered
Oct 9 07:24:57.896285 kernel: acpiphp: Slot [5] registered
Oct 9 07:24:57.896295 kernel: acpiphp: Slot [6] registered
Oct 9 07:24:57.896305 kernel: acpiphp: Slot [7] registered
Oct 9 07:24:57.896315 kernel: acpiphp: Slot [8] registered
Oct 9 07:24:57.896324 kernel: acpiphp: Slot [9] registered
Oct 9 07:24:57.896334 kernel: acpiphp: Slot [10] registered
Oct 9 07:24:57.896344 kernel: acpiphp: Slot [11] registered
Oct 9 07:24:57.896353 kernel: acpiphp: Slot [12] registered
Oct 9 07:24:57.896366 kernel: acpiphp: Slot [13] registered
Oct 9 07:24:57.896375 kernel: acpiphp: Slot [14] registered
Oct 9 07:24:57.896385 kernel: acpiphp: Slot [15] registered
Oct 9 07:24:57.896395 kernel: acpiphp: Slot [16] registered
Oct 9 07:24:57.896404 kernel: acpiphp: Slot [17] registered
Oct 9 07:24:57.896414 kernel: acpiphp: Slot [18] registered
Oct 9 07:24:57.896423 kernel: acpiphp: Slot [19] registered
Oct 9 07:24:57.896433 kernel: acpiphp: Slot [20] registered
Oct 9 07:24:57.896443 kernel: acpiphp: Slot [21] registered
Oct 9 07:24:57.896452 kernel: acpiphp: Slot [22] registered
Oct 9 07:24:57.896465 kernel: acpiphp: Slot [23] registered
Oct 9 07:24:57.896475 kernel: acpiphp: Slot [24] registered
Oct 9 07:24:57.896484 kernel: acpiphp: Slot [25] registered
Oct 9 07:24:57.896494 kernel: acpiphp: Slot [26] registered
Oct 9 07:24:57.896503 kernel: acpiphp: Slot [27] registered
Oct 9 07:24:57.896513 kernel: acpiphp: Slot [28] registered
Oct 9 07:24:57.896523 kernel: acpiphp: Slot [29] registered
Oct 9 07:24:57.896543 kernel: acpiphp: Slot [30] registered
Oct 9 07:24:57.896556 kernel: acpiphp: Slot [31] registered
Oct 9 07:24:57.896573 kernel: PCI host bridge to bus 0000:00
Oct 9 07:24:57.896700 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 07:24:57.896794 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 07:24:57.896879 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 07:24:57.896969 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 9 07:24:57.897055 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 9 07:24:57.898212 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 07:24:57.898345 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 9 07:24:57.898452 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 9 07:24:57.898554 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 9 07:24:57.898676 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Oct 9 07:24:57.898835 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 9 07:24:57.898944 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 9 07:24:57.899055 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 9 07:24:57.899977 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 9 07:24:57.900133 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Oct 9 07:24:57.900243 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Oct 9 07:24:57.900364 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 9 07:24:57.900486 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 9 07:24:57.900627 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 9 07:24:57.900758 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 9 07:24:57.900868 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 9 07:24:57.900974 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 9 07:24:57.901090 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Oct 9 07:24:57.901240 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Oct 9 07:24:57.901346 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 07:24:57.901504 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:24:57.901614 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Oct 9 07:24:57.901719 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Oct 9 07:24:57.901822 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 9 07:24:57.901932 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:24:57.902038 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Oct 9 07:24:57.905250 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Oct 9 07:24:57.905383 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 9 07:24:57.905504 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Oct 9 07:24:57.905612 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Oct 9 07:24:57.905718 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Oct 9 07:24:57.905834 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 9 07:24:57.905941 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:24:57.906035 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Oct 9 07:24:57.906149 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Oct 9 07:24:57.906241 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 9 07:24:57.906341 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:24:57.906436 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Oct 9 07:24:57.906526 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Oct 9 07:24:57.906618 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 9 07:24:57.906728 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Oct 9 07:24:57.906856 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Oct 9 07:24:57.906949 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Oct 9 07:24:57.906961 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 07:24:57.906970 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 07:24:57.906979 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 07:24:57.906988 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 07:24:57.906996 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 9 07:24:57.907009 kernel: iommu: Default domain type: Translated
Oct 9 07:24:57.907018 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 07:24:57.907026 kernel: PCI: Using ACPI for IRQ routing
Oct 9 07:24:57.907035 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 07:24:57.907044 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 07:24:57.907053 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Oct 9 07:24:57.909213 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 9 07:24:57.909321 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 9 07:24:57.909418 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 07:24:57.909437 kernel: vgaarb: loaded
Oct 9 07:24:57.909446 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 07:24:57.909456 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 07:24:57.909464 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 07:24:57.909473 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 07:24:57.909482 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 07:24:57.909491 kernel: pnp: PnP ACPI init
Oct 9 07:24:57.909500 kernel: pnp: PnP ACPI: found 4 devices
Oct 9 07:24:57.909509 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 07:24:57.909521 kernel: NET: Registered PF_INET protocol family
Oct 9 07:24:57.909529 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 07:24:57.909538 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 9 07:24:57.909547 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 07:24:57.909556 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 9 07:24:57.909565 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 9 07:24:57.909573 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 9 07:24:57.909582 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:24:57.909591 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:24:57.909602 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 07:24:57.909611 kernel: NET: Registered PF_XDP protocol family
Oct 9 07:24:57.909703 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 07:24:57.909787 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 07:24:57.909870 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 07:24:57.909954 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 9 07:24:57.910035 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 9 07:24:57.911193 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 9 07:24:57.911309 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 9 07:24:57.911323 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 9 07:24:57.911416 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 27232 usecs
Oct 9 07:24:57.911429 kernel: PCI: CLS 0 bytes, default 64
Oct 9 07:24:57.911438 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 9 07:24:57.911447 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Oct 9 07:24:57.911456 kernel: Initialise system trusted keyrings
Oct 9 07:24:57.911465 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 9 07:24:57.911478 kernel: Key type asymmetric registered
Oct 9 07:24:57.911487 kernel: Asymmetric key parser 'x509' registered
Oct 9 07:24:57.911495 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 07:24:57.911504 kernel: io scheduler mq-deadline registered
Oct 9 07:24:57.911512 kernel: io scheduler kyber registered
Oct 9 07:24:57.911521 kernel: io scheduler bfq registered
Oct 9 07:24:57.911530 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 07:24:57.911539 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 9 07:24:57.911547 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 9 07:24:57.911558 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 9 07:24:57.911567 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 07:24:57.911576 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 07:24:57.911585 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 07:24:57.911593 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 07:24:57.911602 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 07:24:57.911711 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 9 07:24:57.911724 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 07:24:57.911809 kernel: rtc_cmos 00:03: registered as rtc0
Oct 9 07:24:57.911897 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T07:24:57 UTC (1728458697)
Oct 9 07:24:57.911981 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 9 07:24:57.911992 kernel: intel_pstate: CPU model not supported
Oct 9 07:24:57.912001 kernel: NET: Registered PF_INET6 protocol family
Oct 9 07:24:57.912011 kernel: Segment Routing with IPv6
Oct 9 07:24:57.912019 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 07:24:57.912028 kernel: NET: Registered PF_PACKET protocol family
Oct 9 07:24:57.912037 kernel: Key type dns_resolver registered
Oct 9 07:24:57.912049 kernel: IPI shorthand broadcast: enabled
Oct 9 07:24:57.912058 kernel: sched_clock: Marking stable (790002460, 90973910)->(970050144, -89073774)
Oct 9 07:24:57.913089 kernel: registered taskstats version 1
Oct 9 07:24:57.913100 kernel: Loading compiled-in X.509 certificates
Oct 9 07:24:57.913110 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 0b7ba59a46acf969bcd97270f441857501641c76'
Oct 9 07:24:57.913119 kernel: Key type .fscrypt registered
Oct 9 07:24:57.913128 kernel: Key type fscrypt-provisioning registered
Oct 9 07:24:57.913137 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 07:24:57.913145 kernel: ima: Allocated hash algorithm: sha1
Oct 9 07:24:57.913158 kernel: ima: No architecture policies found
Oct 9 07:24:57.913167 kernel: clk: Disabling unused clocks
Oct 9 07:24:57.913175 kernel: Freeing unused kernel image (initmem) memory: 49452K
Oct 9 07:24:57.913184 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 07:24:57.913193 kernel: Freeing unused kernel image (rodata/data gap) memory: 1928K
Oct 9 07:24:57.913223 kernel: Run /init as init process
Oct 9 07:24:57.913237 kernel: with arguments:
Oct 9 07:24:57.913250 kernel: /init
Oct 9 07:24:57.913263 kernel: with environment:
Oct 9 07:24:57.913280 kernel: HOME=/
Oct 9 07:24:57.913293 kernel: TERM=linux
Oct 9 07:24:57.913306 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 07:24:57.913323 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:24:57.913339 systemd[1]: Detected virtualization kvm.
Oct 9 07:24:57.913349 systemd[1]: Detected architecture x86-64.
Oct 9 07:24:57.913358 systemd[1]: Running in initrd.
Oct 9 07:24:57.913370 systemd[1]: No hostname configured, using default hostname.
Oct 9 07:24:57.913379 systemd[1]: Hostname set to .
Oct 9 07:24:57.913389 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:24:57.913399 systemd[1]: Queued start job for default target initrd.target.
Oct 9 07:24:57.913409 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:24:57.913418 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:24:57.913428 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 07:24:57.913438 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:24:57.913450 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 07:24:57.913460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 07:24:57.913471 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 07:24:57.913481 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 07:24:57.913491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:24:57.913500 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:24:57.913510 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:24:57.913522 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:24:57.913532 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:24:57.913541 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:24:57.913553 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:24:57.913563 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:24:57.913573 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:24:57.913585 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:24:57.913595 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:24:57.913604 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:24:57.913614 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:24:57.913623 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:24:57.913633 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 07:24:57.913642 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:24:57.913652 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 07:24:57.913664 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 07:24:57.913674 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:24:57.913684 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:24:57.913693 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:24:57.913703 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 07:24:57.913715 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:24:57.913725 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 07:24:57.913765 systemd-journald[182]: Collecting audit messages is disabled.
Oct 9 07:24:57.913789 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:24:57.913803 systemd-journald[182]: Journal started
Oct 9 07:24:57.913824 systemd-journald[182]: Runtime Journal (/run/log/journal/0d85e446bd274b2b9344eeb992ba8850) is 4.9M, max 39.3M, 34.4M free.
Oct 9 07:24:57.890904 systemd-modules-load[183]: Inserted module 'overlay'
Oct 9 07:24:57.943769 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:24:57.943798 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 07:24:57.943821 kernel: Bridge firewalling registered
Oct 9 07:24:57.934461 systemd-modules-load[183]: Inserted module 'br_netfilter'
Oct 9 07:24:57.944735 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:24:57.945657 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:24:57.946446 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:24:57.957238 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:24:57.959088 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:24:57.961325 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:24:57.965521 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:24:57.981674 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:24:57.989449 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 07:24:57.990948 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:24:57.992077 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:24:57.993524 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:24:58.001265 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:24:58.008034 dracut-cmdline[213]: dracut-dracut-053
Oct 9 07:24:58.013794 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1839da262570fb938be558d95db7fc3d986a0d71e1b77d40d35a3e2a1bac7dcd
Oct 9 07:24:58.033118 systemd-resolved[219]: Positive Trust Anchors:
Oct 9 07:24:58.033131 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:24:58.033167 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:24:58.038455 systemd-resolved[219]: Defaulting to hostname 'linux'.
Oct 9 07:24:58.039764 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:24:58.041089 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:24:58.106171 kernel: SCSI subsystem initialized
Oct 9 07:24:58.122101 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 07:24:58.139102 kernel: iscsi: registered transport (tcp)
Oct 9 07:24:58.165093 kernel: iscsi: registered transport (qla4xxx)
Oct 9 07:24:58.165164 kernel: QLogic iSCSI HBA Driver
Oct 9 07:24:58.208983 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:24:58.214264 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 07:24:58.241264 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 07:24:58.241336 kernel: device-mapper: uevent: version 1.0.3
Oct 9 07:24:58.242484 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 07:24:58.288128 kernel: raid6: avx2x4 gen() 18394 MB/s
Oct 9 07:24:58.305129 kernel: raid6: avx2x2 gen() 17797 MB/s
Oct 9 07:24:58.322398 kernel: raid6: avx2x1 gen() 13772 MB/s
Oct 9 07:24:58.322472 kernel: raid6: using algorithm avx2x4 gen() 18394 MB/s
Oct 9 07:24:58.340361 kernel: raid6: .... xor() 7779 MB/s, rmw enabled
Oct 9 07:24:58.340435 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 07:24:58.366101 kernel: xor: automatically using best checksumming function avx
Oct 9 07:24:58.550111 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 07:24:58.564365 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:24:58.570304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:24:58.593564 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Oct 9 07:24:58.599546 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:24:58.605495 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 07:24:58.624946 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Oct 9 07:24:58.660778 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:24:58.666264 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:24:58.721508 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:24:58.729585 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 07:24:58.762045 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:24:58.764531 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:24:58.765370 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:24:58.766437 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:24:58.773279 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 07:24:58.801311 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:24:58.806117 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Oct 9 07:24:58.814101 kernel: scsi host0: Virtio SCSI HBA
Oct 9 07:24:58.821558 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Oct 9 07:24:58.839083 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 07:24:58.848581 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 07:24:58.848646 kernel: GPT:9289727 != 125829119
Oct 9 07:24:58.848659 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 07:24:58.848672 kernel: GPT:9289727 != 125829119
Oct 9 07:24:58.848683 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 07:24:58.848703 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:24:58.870103 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 07:24:58.870231 kernel: AES CTR mode by8 optimization enabled
Oct 9 07:24:58.872085 kernel: libata version 3.00 loaded.
Oct 9 07:24:58.874087 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Oct 9 07:24:58.876083 kernel: ata_piix 0000:00:01.1: version 2.13
Oct 9 07:24:58.877078 kernel: scsi host1: ata_piix
Oct 9 07:24:58.880162 kernel: scsi host2: ata_piix
Oct 9 07:24:58.880353 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Oct 9 07:24:58.881458 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Oct 9 07:24:58.886741 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Oct 9 07:24:58.893579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:24:58.893696 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:24:58.894966 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:24:58.895469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:24:58.895610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:24:58.898351 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:24:58.913452 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:24:58.918741 kernel: ACPI: bus type USB registered
Oct 9 07:24:58.918831 kernel: usbcore: registered new interface driver usbfs
Oct 9 07:24:58.918846 kernel: usbcore: registered new interface driver hub
Oct 9 07:24:58.920456 kernel: usbcore: registered new device driver usb
Oct 9 07:24:58.961412 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:24:58.966221 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:24:58.980159 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:24:59.060574 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454)
Oct 9 07:24:59.078294 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 07:24:59.094336 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 9 07:24:59.094530 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 9 07:24:59.094659 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 9 07:24:59.094855 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Oct 9 07:24:59.094972 kernel: BTRFS: device fsid a442e753-4749-4732-ba27-ea845965fe4a devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (464)
Oct 9 07:24:59.094985 kernel: hub 1-0:1.0: USB hub found
Oct 9 07:24:59.095149 kernel: hub 1-0:1.0: 2 ports detected
Oct 9 07:24:59.097813 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 07:24:59.109204 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:24:59.113404 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 07:24:59.114373 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 07:24:59.121240 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 07:24:59.137052 disk-uuid[552]: Primary Header is updated.
Oct 9 07:24:59.137052 disk-uuid[552]: Secondary Entries is updated.
Oct 9 07:24:59.137052 disk-uuid[552]: Secondary Header is updated.
Oct 9 07:24:59.147102 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:24:59.152091 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:25:00.159106 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:25:00.159485 disk-uuid[553]: The operation has completed successfully.
Oct 9 07:25:00.201636 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 07:25:00.201759 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 07:25:00.211269 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 07:25:00.215947 sh[566]: Success
Oct 9 07:25:00.233286 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Oct 9 07:25:00.299235 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 07:25:00.315383 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 07:25:00.320230 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 07:25:00.333154 kernel: BTRFS info (device dm-0): first mount of filesystem a442e753-4749-4732-ba27-ea845965fe4a
Oct 9 07:25:00.333216 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:25:00.333230 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 07:25:00.334277 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 07:25:00.335090 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 07:25:00.346428 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 07:25:00.347579 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 07:25:00.353299 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 07:25:00.357453 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 07:25:00.368131 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:25:00.368198 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:25:00.369306 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:25:00.375099 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:25:00.387276 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 07:25:00.389932 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:25:00.395968 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 07:25:00.401253 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 07:25:00.516808 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:25:00.528497 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:25:00.532573 ignition[651]: Ignition 2.18.0
Oct 9 07:25:00.532585 ignition[651]: Stage: fetch-offline
Oct 9 07:25:00.532646 ignition[651]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:25:00.532666 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:25:00.532820 ignition[651]: parsed url from cmdline: ""
Oct 9 07:25:00.532824 ignition[651]: no config URL provided
Oct 9 07:25:00.532829 ignition[651]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 07:25:00.532838 ignition[651]: no config at "/usr/lib/ignition/user.ign"
Oct 9 07:25:00.532844 ignition[651]: failed to fetch config: resource requires networking
Oct 9 07:25:00.533057 ignition[651]: Ignition finished successfully
Oct 9 07:25:00.537511 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:25:00.551887 systemd-networkd[756]: lo: Link UP
Oct 9 07:25:00.551908 systemd-networkd[756]: lo: Gained carrier
Oct 9 07:25:00.554180 systemd-networkd[756]: Enumeration completed
Oct 9 07:25:00.554553 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 9 07:25:00.554557 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Oct 9 07:25:00.555763 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:25:00.556596 systemd-networkd[756]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:25:00.556601 systemd-networkd[756]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 07:25:00.557418 systemd-networkd[756]: eth0: Link UP
Oct 9 07:25:00.557423 systemd-networkd[756]: eth0: Gained carrier
Oct 9 07:25:00.557452 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 9 07:25:00.557838 systemd[1]: Reached target network.target - Network.
Oct 9 07:25:00.562425 systemd-networkd[756]: eth1: Link UP
Oct 9 07:25:00.562429 systemd-networkd[756]: eth1: Gained carrier
Oct 9 07:25:00.562441 systemd-networkd[756]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:25:00.564721 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 9 07:25:00.573156 systemd-networkd[756]: eth1: DHCPv4 address 10.124.0.11/20 acquired from 169.254.169.253
Oct 9 07:25:00.577158 systemd-networkd[756]: eth0: DHCPv4 address 209.38.154.162/19, gateway 209.38.128.1 acquired from 169.254.169.253
Oct 9 07:25:00.587285 ignition[761]: Ignition 2.18.0
Oct 9 07:25:00.587298 ignition[761]: Stage: fetch
Oct 9 07:25:00.587498 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:25:00.587510 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:25:00.587647 ignition[761]: parsed url from cmdline: ""
Oct 9 07:25:00.587652 ignition[761]: no config URL provided
Oct 9 07:25:00.587660 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 07:25:00.587670 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Oct 9 07:25:00.587693 ignition[761]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Oct 9 07:25:00.616971 ignition[761]: GET result: OK
Oct 9 07:25:00.617146 ignition[761]: parsing config with SHA512: cb2bae4d4c07690a0ad066712b3cfb89fc18d2780b6ec986fb624c9249eff6432b2f6ac4414647bdf23f99c71ea7d0aba7f34ecc9a3f9cf040c958d17c12a543
Oct 9 07:25:00.623343 unknown[761]: fetched base config from "system"
Oct 9 07:25:00.623863 ignition[761]: fetch: fetch complete
Oct 9 07:25:00.623357 unknown[761]: fetched base config from "system"
Oct 9 07:25:00.623869 ignition[761]: fetch: fetch passed
Oct 9 07:25:00.623365 unknown[761]: fetched user config from "digitalocean"
Oct 9 07:25:00.623921 ignition[761]: Ignition finished successfully
Oct 9 07:25:00.625444 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 9 07:25:00.631281 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 07:25:00.647602 ignition[769]: Ignition 2.18.0
Oct 9 07:25:00.647618 ignition[769]: Stage: kargs
Oct 9 07:25:00.647824 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:25:00.647835 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:25:00.648694 ignition[769]: kargs: kargs passed
Oct 9 07:25:00.648742 ignition[769]: Ignition finished successfully
Oct 9 07:25:00.652534 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 07:25:00.656290 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 07:25:00.673403 ignition[776]: Ignition 2.18.0
Oct 9 07:25:00.673417 ignition[776]: Stage: disks
Oct 9 07:25:00.673668 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:25:00.673680 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:25:00.674604 ignition[776]: disks: disks passed
Oct 9 07:25:00.675937 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 07:25:00.674651 ignition[776]: Ignition finished successfully
Oct 9 07:25:00.677646 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 07:25:00.681342 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 07:25:00.682273 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:25:00.683179 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:25:00.683862 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:25:00.690302 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 07:25:00.716189 systemd-fsck[785]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 07:25:00.718327 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 07:25:00.723224 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 07:25:00.836085 kernel: EXT4-fs (vda9): mounted filesystem ef891253-2811-499a-a9aa-02f0764c1b95 r/w with ordered data mode. Quota mode: none.
Oct 9 07:25:00.836672 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 07:25:00.838017 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:25:00.849242 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:25:00.851712 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 07:25:00.853217 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Oct 9 07:25:00.863098 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (793)
Oct 9 07:25:00.865983 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:25:00.866035 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:25:00.866049 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:25:00.866350 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 9 07:25:00.868528 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 07:25:00.868563 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:25:00.879116 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:25:00.877309 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 07:25:00.885286 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 07:25:00.887664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:25:00.961745 coreos-metadata[796]: Oct 09 07:25:00.961 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:25:00.966352 coreos-metadata[795]: Oct 09 07:25:00.965 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:25:00.967341 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 07:25:00.971872 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Oct 9 07:25:00.972861 coreos-metadata[796]: Oct 09 07:25:00.972 INFO Fetch successful
Oct 9 07:25:00.976180 coreos-metadata[795]: Oct 09 07:25:00.975 INFO Fetch successful
Oct 9 07:25:00.981309 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 07:25:00.983657 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Oct 9 07:25:00.984040 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Oct 9 07:25:00.988048 coreos-metadata[796]: Oct 09 07:25:00.984 INFO wrote hostname ci-3975.2.2-3-9020298c9e to /sysroot/etc/hostname
Oct 9 07:25:00.986045 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 07:25:00.990245 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 07:25:01.110127 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 07:25:01.114271 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 07:25:01.116284 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 07:25:01.130096 kernel: BTRFS info (device vda6): last unmount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:25:01.159982 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 07:25:01.164289 ignition[916]: INFO : Ignition 2.18.0
Oct 9 07:25:01.164289 ignition[916]: INFO : Stage: mount
Oct 9 07:25:01.165382 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:25:01.165382 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:25:01.167135 ignition[916]: INFO : mount: mount passed
Oct 9 07:25:01.167135 ignition[916]: INFO : Ignition finished successfully
Oct 9 07:25:01.167214 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 07:25:01.172214 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 07:25:01.332264 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 07:25:01.338385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:25:01.356137 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Oct 9 07:25:01.358325 kernel: BTRFS info (device vda6): first mount of filesystem aa256cb8-f25c-41d0-8582-dc8cedfde7ce
Oct 9 07:25:01.358415 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:25:01.360105 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:25:01.365157 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:25:01.368235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:25:01.400404 ignition[944]: INFO : Ignition 2.18.0
Oct 9 07:25:01.400404 ignition[944]: INFO : Stage: files
Oct 9 07:25:01.401885 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:25:01.401885 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:25:01.401885 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 07:25:01.404476 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 07:25:01.404476 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 07:25:01.406156 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 07:25:01.407025 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 07:25:01.407025 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 07:25:01.406686 unknown[944]: wrote ssh authorized keys file for user: core
Oct 9 07:25:01.417919 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 07:25:01.418904 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 07:25:01.468321 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 07:25:01.608381 systemd-networkd[756]: eth0: Gained IPv6LL
Oct 9 07:25:01.632318 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 07:25:01.632318 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 07:25:01.633952 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 07:25:01.633952 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:25:01.633952 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:25:01.633952 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:25:01.633952 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:25:01.633952 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:25:01.633952 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:25:01.633952 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:25:01.638924 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:25:01.638924 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:25:01.638924 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:25:01.638924 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:25:01.638924 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 9 07:25:01.863358 systemd-networkd[756]: eth1: Gained IPv6LL
Oct 9 07:25:01.978506 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 9 07:25:02.817900 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 07:25:02.844279 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 9 07:25:02.844279 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:25:02.844279 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:25:02.844279 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 9 07:25:02.844279 ignition[944]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 07:25:02.844279 ignition[944]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 07:25:02.844279 ignition[944]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:25:02.844279 ignition[944]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:25:02.844279 ignition[944]: INFO : files: files passed
Oct 9 07:25:02.844279 ignition[944]: INFO : Ignition finished successfully
Oct 9 07:25:02.841053 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 07:25:02.860429 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 07:25:02.885378 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 07:25:02.895671 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 07:25:02.896730 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 07:25:02.924911 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:25:02.924911 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:25:02.927014 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:25:02.928006 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:25:02.929574 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 07:25:02.940374 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 07:25:02.989510 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 07:25:02.989687 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 07:25:02.991166 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 07:25:02.991686 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 07:25:02.992582 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 07:25:02.997322 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 07:25:03.020111 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:25:03.027400 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 07:25:03.055402 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:25:03.056286 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:25:03.058839 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 07:25:03.059537 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 07:25:03.059778 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:25:03.061587 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 07:25:03.062259 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 07:25:03.063378 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 07:25:03.064420 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:25:03.065369 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 07:25:03.066369 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 07:25:03.067555 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:25:03.068571 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 07:25:03.069544 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 07:25:03.070574 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 07:25:03.071565 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 07:25:03.071695 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:25:03.072734 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:25:03.073249 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:25:03.074075 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 07:25:03.074442 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:25:03.075311 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 07:25:03.075440 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:25:03.076636 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 07:25:03.076765 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:25:03.077379 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 07:25:03.077512 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 07:25:03.078389 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 9 07:25:03.078585 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 07:25:03.085534 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 07:25:03.086142 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 07:25:03.086345 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:25:03.093434 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 07:25:03.094054 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 07:25:03.094330 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:25:03.095579 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 07:25:03.095798 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:25:03.109564 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 07:25:03.109846 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 07:25:03.115139 ignition[998]: INFO : Ignition 2.18.0
Oct 9 07:25:03.117221 ignition[998]: INFO : Stage: umount
Oct 9 07:25:03.118149 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:25:03.118149 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:25:03.119652 ignition[998]: INFO : umount: umount passed
Oct 9 07:25:03.119652 ignition[998]: INFO : Ignition finished successfully
Oct 9 07:25:03.122438 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 07:25:03.123415 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 07:25:03.125611 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 07:25:03.125762 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 07:25:03.128261 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 07:25:03.128327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 07:25:03.133377 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 9 07:25:03.133445 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 9 07:25:03.133843 systemd[1]: Stopped target network.target - Network.
Oct 9 07:25:03.136236 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 07:25:03.136321 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:25:03.142218 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 07:25:03.142574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 07:25:03.146196 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:25:03.146731 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 07:25:03.147307 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 07:25:03.147818 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 07:25:03.147897 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:25:03.148444 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 07:25:03.148502 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:25:03.149488 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 07:25:03.149569 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 07:25:03.150305 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 07:25:03.150378 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 07:25:03.151420 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 07:25:03.152314 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 07:25:03.154426 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 07:25:03.155298 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 07:25:03.155405 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 07:25:03.156175 systemd-networkd[756]: eth1: DHCPv6 lease lost
Oct 9 07:25:03.157600 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 07:25:03.157744 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 07:25:03.160190 systemd-networkd[756]: eth0: DHCPv6 lease lost
Oct 9 07:25:03.163327 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 07:25:03.163505 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 07:25:03.164819 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 07:25:03.164951 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 07:25:03.169142 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 07:25:03.169203 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:25:03.174329 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 07:25:03.174794 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 07:25:03.174883 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:25:03.175774 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 07:25:03.175824 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:25:03.176540 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 07:25:03.176584 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:25:03.178241 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 07:25:03.178291 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:25:03.179502 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:25:03.198103 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 07:25:03.198288 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:25:03.199589 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 07:25:03.199687 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 07:25:03.200891 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 07:25:03.200991 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:25:03.201922 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 07:25:03.201972 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:25:03.202708 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 07:25:03.202879 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:25:03.203948 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 07:25:03.203996 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:25:03.204701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:25:03.204741 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:25:03.211313 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 07:25:03.211790 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 07:25:03.211861 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:25:03.212785 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:25:03.212842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:25:03.223362 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 07:25:03.223488 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 07:25:03.225327 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 07:25:03.231404 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 07:25:03.252466 systemd[1]: Switching root.
Oct 9 07:25:03.310609 systemd-journald[182]: Journal stopped
Oct 9 07:25:04.607854 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Oct 9 07:25:04.607984 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 07:25:04.608007 kernel: SELinux: policy capability open_perms=1
Oct 9 07:25:04.608031 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 07:25:04.608047 kernel: SELinux: policy capability always_check_network=0
Oct 9 07:25:04.609142 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 07:25:04.609198 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 07:25:04.609216 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 07:25:04.609233 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 07:25:04.609250 kernel: audit: type=1403 audit(1728458703.532:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 07:25:04.609285 systemd[1]: Successfully loaded SELinux policy in 41.884ms.
Oct 9 07:25:04.609314 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.078ms.
Oct 9 07:25:04.609336 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:25:04.609354 systemd[1]: Detected virtualization kvm.
Oct 9 07:25:04.609376 systemd[1]: Detected architecture x86-64.
Oct 9 07:25:04.609393 systemd[1]: Detected first boot.
Oct 9 07:25:04.609413 systemd[1]: Hostname set to .
Oct 9 07:25:04.609429 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:25:04.609451 zram_generator::config[1041]: No configuration found.
Oct 9 07:25:04.609472 systemd[1]: Populated /etc with preset unit settings.
Oct 9 07:25:04.609489 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 07:25:04.609507 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 07:25:04.609526 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 07:25:04.609547 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 07:25:04.609570 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 07:25:04.609589 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 07:25:04.609614 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 07:25:04.609632 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 07:25:04.609650 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 07:25:04.609671 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 07:25:04.609691 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 07:25:04.609713 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:25:04.609731 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:25:04.609750 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 07:25:04.609770 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 07:25:04.609801 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 07:25:04.609820 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:25:04.609838 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 07:25:04.609856 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:25:04.609876 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 07:25:04.609895 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 07:25:04.609918 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:25:04.609938 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 07:25:04.609960 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:25:04.609980 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:25:04.609997 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:25:04.610015 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:25:04.610034 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 07:25:04.610051 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 07:25:04.610090 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:25:04.610116 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:25:04.610133 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:25:04.610151 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 07:25:04.610171 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 07:25:04.610189 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 07:25:04.610208 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 07:25:04.610227 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:25:04.610245 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 07:25:04.610262 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 07:25:04.610286 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 07:25:04.610307 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 07:25:04.610325 systemd[1]: Reached target machines.target - Containers.
Oct 9 07:25:04.610342 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 07:25:04.610359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:25:04.610376 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:25:04.610395 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 07:25:04.610412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:25:04.610430 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:25:04.610456 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:25:04.610475 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 07:25:04.610493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:25:04.610513 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 07:25:04.610531 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 07:25:04.610549 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 07:25:04.610566 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 07:25:04.610584 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 07:25:04.610609 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:25:04.610630 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:25:04.610649 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 07:25:04.610669 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 07:25:04.610690 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:25:04.610708 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 07:25:04.610726 systemd[1]: Stopped verity-setup.service.
Oct 9 07:25:04.610766 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:25:04.610788 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 07:25:04.610815 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 07:25:04.610833 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 07:25:04.610853 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 07:25:04.610872 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 07:25:04.610896 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 07:25:04.610915 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:25:04.610937 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 07:25:04.610960 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 07:25:04.610981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:25:04.611006 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:25:04.611032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:25:04.611054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:25:04.612193 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:25:04.612232 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 07:25:04.612252 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 07:25:04.612270 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 07:25:04.612340 systemd-journald[1109]: Collecting audit messages is disabled.
Oct 9 07:25:04.612390 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:25:04.612412 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 07:25:04.612433 systemd-journald[1109]: Journal started
Oct 9 07:25:04.612474 systemd-journald[1109]: Runtime Journal (/run/log/journal/0d85e446bd274b2b9344eeb992ba8850) is 4.9M, max 39.3M, 34.4M free.
Oct 9 07:25:04.244928 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 07:25:04.264490 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 07:25:04.614158 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:25:04.265037 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 07:25:04.617117 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 07:25:04.620137 kernel: loop: module loaded
Oct 9 07:25:04.627367 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:25:04.629238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:25:04.633115 kernel: fuse: init (API version 7.39)
Oct 9 07:25:04.637636 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 07:25:04.638544 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 07:25:04.685309 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 07:25:04.686044 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 07:25:04.686606 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:25:04.689775 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 07:25:04.697619 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 07:25:04.714321 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 07:25:04.715266 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:25:04.720377 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 07:25:04.735366 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 07:25:04.737199 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:25:04.767298 kernel: ACPI: bus type drm_connector registered
Oct 9 07:25:04.781284 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 07:25:04.782650 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:25:04.790372 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 07:25:04.802713 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 07:25:04.804796 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:25:04.806242 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:25:04.808144 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:25:04.810139 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 07:25:04.812332 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 07:25:04.833206 kernel: loop0: detected capacity change from 0 to 80568
Oct 9 07:25:04.836196 systemd-journald[1109]: Time spent on flushing to /var/log/journal/0d85e446bd274b2b9344eeb992ba8850 is 135.422ms for 986 entries.
Oct 9 07:25:04.836196 systemd-journald[1109]: System Journal (/var/log/journal/0d85e446bd274b2b9344eeb992ba8850) is 8.0M, max 195.6M, 187.6M free.
Oct 9 07:25:05.028314 systemd-journald[1109]: Received client request to flush runtime journal.
Oct 9 07:25:05.028399 kernel: block loop0: the capability attribute has been deprecated.
Oct 9 07:25:05.028569 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 07:25:05.028613 kernel: loop1: detected capacity change from 0 to 8
Oct 9 07:25:05.028640 kernel: loop2: detected capacity change from 0 to 139904
Oct 9 07:25:04.839738 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 07:25:04.883275 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 07:25:04.885338 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 07:25:04.902841 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 07:25:04.979494 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:25:04.990326 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 07:25:05.020325 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 9 07:25:05.024699 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 07:25:05.026460 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 07:25:05.031375 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 07:25:05.044142 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 07:25:05.046109 kernel: loop3: detected capacity change from 0 to 211296
Oct 9 07:25:05.055715 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:25:05.117137 kernel: loop4: detected capacity change from 0 to 80568
Oct 9 07:25:05.136102 kernel: loop5: detected capacity change from 0 to 8
Oct 9 07:25:05.139139 kernel: loop6: detected capacity change from 0 to 139904
Oct 9 07:25:05.168116 kernel: loop7: detected capacity change from 0 to 211296
Oct 9 07:25:05.168492 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Oct 9 07:25:05.168522 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Oct 9 07:25:05.187641 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Oct 9 07:25:05.188390 (sd-merge)[1184]: Merged extensions into '/usr'.
Oct 9 07:25:05.203221 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:25:05.225528 systemd[1]: Reloading requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 07:25:05.225557 systemd[1]: Reloading...
Oct 9 07:25:05.354098 zram_generator::config[1206]: No configuration found.
Oct 9 07:25:05.561164 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:25:05.660899 systemd[1]: Reloading finished in 434 ms.
Oct 9 07:25:05.720130 ldconfig[1156]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 07:25:05.722811 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 07:25:05.728034 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 07:25:05.738359 systemd[1]: Starting ensure-sysext.service...
Oct 9 07:25:05.743238 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 9 07:25:05.762554 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)...
Oct 9 07:25:05.762583 systemd[1]: Reloading...
Oct 9 07:25:05.829189 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 07:25:05.829609 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 07:25:05.831664 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 07:25:05.831943 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Oct 9 07:25:05.832004 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Oct 9 07:25:05.843809 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:25:05.843821 systemd-tmpfiles[1254]: Skipping /boot
Oct 9 07:25:05.887142 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:25:05.887155 systemd-tmpfiles[1254]: Skipping /boot
Oct 9 07:25:05.920121 zram_generator::config[1294]: No configuration found.
Oct 9 07:25:06.081265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:25:06.174461 systemd[1]: Reloading finished in 411 ms.
Oct 9 07:25:06.212054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 9 07:25:06.231566 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 9 07:25:06.237393 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 07:25:06.246490 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 07:25:06.257533 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:25:06.275453 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 07:25:06.285337 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:25:06.285564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:25:06.312545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:25:06.332257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:25:06.337536 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:25:06.338343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:25:06.338498 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:25:06.343604 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:25:06.343882 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:25:06.344095 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:25:06.351493 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 07:25:06.352040 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:25:06.355925 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:25:06.356217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:25:06.399491 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:25:06.400330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:25:06.400562 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:25:06.401344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:25:06.401509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:25:06.407950 systemd[1]: Finished ensure-sysext.service.
Oct 9 07:25:06.421463 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 07:25:06.437364 augenrules[1351]: No rules
Oct 9 07:25:06.451478 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 9 07:25:06.453274 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 07:25:06.462700 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:25:06.465204 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 07:25:06.466986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:25:06.467614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:25:06.475438 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:25:06.476157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:25:06.484970 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:25:06.486264 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:25:06.497237 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:25:06.497442 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:25:06.507420 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 07:25:06.511153 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 07:25:06.513154 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 07:25:06.516889 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 07:25:06.536925 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 07:25:06.539833 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
Oct 9 07:25:06.560537 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 07:25:06.592322 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:25:06.605310 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:25:06.695664 systemd-resolved[1330]: Positive Trust Anchors:
Oct 9 07:25:06.695691 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:25:06.695729 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 9 07:25:06.702236 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 07:25:06.702826 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 07:25:06.711860 systemd-resolved[1330]: Using system hostname 'ci-3975.2.2-3-9020298c9e'.
Oct 9 07:25:06.730136 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:25:06.730952 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:25:06.752780 systemd-networkd[1375]: lo: Link UP
Oct 9 07:25:06.752792 systemd-networkd[1375]: lo: Gained carrier
Oct 9 07:25:06.753923 systemd-networkd[1375]: Enumeration completed
Oct 9 07:25:06.754059 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:25:06.754803 systemd[1]: Reached target network.target - Network.
Oct 9 07:25:06.762363 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 07:25:06.779106 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1384)
Oct 9 07:25:06.792333 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 9 07:25:06.818295 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Oct 9 07:25:06.819322 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:25:06.819584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:25:06.835533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:25:06.839426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:25:06.844362 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:25:06.844936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:25:06.844978 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 07:25:06.844997 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:25:06.872059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:25:06.872358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:25:06.880205 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1377)
Oct 9 07:25:06.922094 kernel: ISO 9660 Extensions: RRIP_1991A
Oct 9 07:25:06.925995 systemd-networkd[1375]: eth0: Configuring with /run/systemd/network/10-16:2a:c3:6a:65:ec.network.
Oct 9 07:25:06.927311 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Oct 9 07:25:06.928948 systemd-networkd[1375]: eth0: Link UP
Oct 9 07:25:06.928959 systemd-networkd[1375]: eth0: Gained carrier
Oct 9 07:25:06.936642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:25:06.937830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:25:06.939590 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:25:06.939861 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection.
Oct 9 07:25:06.941435 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:25:06.942329 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:25:06.948528 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:25:07.012231 systemd-networkd[1375]: eth1: Configuring with /run/systemd/network/10-de:30:85:71:50:fa.network.
Oct 9 07:25:07.013462 systemd-networkd[1375]: eth1: Link UP
Oct 9 07:25:07.013467 systemd-networkd[1375]: eth1: Gained carrier
Oct 9 07:25:07.020368 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection.
Oct 9 07:25:07.027931 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection.
Oct 9 07:25:07.041725 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:25:07.051455 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 07:25:07.055098 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 9 07:25:07.062088 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 9 07:25:07.085421 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 07:25:07.087247 kernel: ACPI: button: Power Button [PWRF]
Oct 9 07:25:07.154091 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 9 07:25:07.187667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:25:07.189609 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 07:25:07.271104 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 9 07:25:07.276101 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 9 07:25:07.284374 kernel: Console: switching to colour dummy device 80x25
Oct 9 07:25:07.284528 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 9 07:25:07.284559 kernel: [drm] features: -context_init
Oct 9 07:25:07.288249 kernel: [drm] number of scanouts: 1
Oct 9 07:25:07.288350 kernel: [drm] number of cap sets: 0
Oct 9 07:25:07.298112 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Oct 9 07:25:07.300983 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:25:07.301520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:25:07.310368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:25:07.324116 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 9 07:25:07.324294 kernel: Console: switching to colour frame buffer device 128x48
Oct 9 07:25:07.332105 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 9 07:25:07.364924 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:25:07.365379 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:25:07.429667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:25:07.487096 kernel: EDAC MC: Ver: 3.0.0
Oct 9 07:25:07.493491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:25:07.528370 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 07:25:07.535359 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 07:25:07.564138 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:25:07.594791 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 07:25:07.596208 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:25:07.596336 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:25:07.596565 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 07:25:07.596721 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 07:25:07.597007 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 07:25:07.597259 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 07:25:07.597336 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 07:25:07.597394 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 07:25:07.597419 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:25:07.597483 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:25:07.600166 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 07:25:07.601983 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 07:25:07.607596 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 07:25:07.609272 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 07:25:07.612401 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 07:25:07.613715 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:25:07.615688 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:25:07.617420 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:25:07.617458 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 07:25:07.626238 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 07:25:07.633244 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 07:25:07.635337 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 9 07:25:07.639835 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 07:25:07.644424 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 07:25:07.655292 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 07:25:07.655766 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 07:25:07.658529 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 07:25:07.665303 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 07:25:07.675784 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 07:25:07.686502 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 07:25:07.695354 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 07:25:07.696821 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 07:25:07.698872 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 07:25:07.703014 jq[1444]: false
Oct 9 07:25:07.710344 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 07:25:07.720255 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 07:25:07.724758 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 07:25:07.734774 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 07:25:07.735080 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 07:25:07.742657 dbus-daemon[1443]: [system] SELinux support is enabled
Oct 9 07:25:07.743713 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 07:25:07.751022 coreos-metadata[1442]: Oct 09 07:25:07.750 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:25:07.758792 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 07:25:07.758854 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 07:25:07.763107 coreos-metadata[1442]: Oct 09 07:25:07.761 INFO Fetch successful
Oct 9 07:25:07.761951 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 07:25:07.762126 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Oct 9 07:25:07.762152 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 07:25:07.793578 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 07:25:07.793876 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 07:25:07.816382 jq[1454]: true
Oct 9 07:25:07.829798 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 07:25:07.830181 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 07:25:07.844234 update_engine[1453]: I1009 07:25:07.843143 1453 main.cc:92] Flatcar Update Engine starting
Oct 9 07:25:07.858316 update_engine[1453]: I1009 07:25:07.849340 1453 update_check_scheduler.cc:74] Next update check in 4m43s
Oct 9 07:25:07.855055 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found loop4
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found loop5
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found loop6
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found loop7
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found vda
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found vda1
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found vda2
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found vda3
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found usr
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found vda4
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found vda6
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found vda7
Oct 9 07:25:07.867658 extend-filesystems[1445]: Found vda9
Oct 9 07:25:07.867658 extend-filesystems[1445]: Checking size of /dev/vda9
Oct 9 07:25:07.932267 tar[1457]: linux-amd64/helm
Oct 9 07:25:07.872283 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 07:25:07.937399 jq[1473]: true
Oct 9 07:25:07.893079 systemd-logind[1452]: New seat seat0.
Oct 9 07:25:07.901198 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 07:25:07.946544 extend-filesystems[1445]: Resized partition /dev/vda9
Oct 9 07:25:07.901219 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 07:25:07.901912 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 07:25:07.913892 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 07:25:07.965673 extend-filesystems[1489]: resize2fs 1.47.0 (5-Feb-2023)
Oct 9 07:25:07.960918 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 9 07:25:07.964385 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 07:25:07.976001 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Oct 9 07:25:08.036248 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1381)
Oct 9 07:25:08.166903 bash[1504]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 07:25:08.169696 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 07:25:08.216406 systemd[1]: Starting sshkeys.service...
Oct 9 07:25:08.274694 systemd-networkd[1375]: eth1: Gained IPv6LL
Oct 9 07:25:08.275182 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection.
Oct 9 07:25:08.279914 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 9 07:25:08.291265 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 9 07:25:08.294561 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 07:25:08.300646 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 07:25:08.310413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:25:08.313496 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 07:25:08.339143 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Oct 9 07:25:08.362051 coreos-metadata[1510]: Oct 09 07:25:08.361 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:25:08.373586 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 07:25:08.374545 coreos-metadata[1510]: Oct 09 07:25:08.373 INFO Fetch successful
Oct 9 07:25:08.380627 extend-filesystems[1489]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 07:25:08.380627 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 8
Oct 9 07:25:08.380627 extend-filesystems[1489]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Oct 9 07:25:08.392858 extend-filesystems[1445]: Resized filesystem in /dev/vda9
Oct 9 07:25:08.392858 extend-filesystems[1445]: Found vdb
Oct 9 07:25:08.385050 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 07:25:08.386134 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 07:25:08.420222 unknown[1510]: wrote ssh authorized keys file for user: core
Oct 9 07:25:08.450517 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 07:25:08.482575 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 07:25:08.484617 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 9 07:25:08.489146 systemd[1]: Finished sshkeys.service.
Oct 9 07:25:08.507350 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 07:25:08.583295 systemd-networkd[1375]: eth0: Gained IPv6LL
Oct 9 07:25:08.584470 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection.
Oct 9 07:25:08.644218 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 07:25:08.671938 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 07:25:08.729534 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 07:25:08.730116 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 07:25:08.757750 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 07:25:08.776363 containerd[1477]: time="2024-10-09T07:25:08.776233915Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Oct 9 07:25:08.806213 containerd[1477]: time="2024-10-09T07:25:08.806112303Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 07:25:08.806407 containerd[1477]: time="2024-10-09T07:25:08.806391271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:25:08.811396 containerd[1477]: time="2024-10-09T07:25:08.811326034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:25:08.811532 containerd[1477]: time="2024-10-09T07:25:08.811518665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:25:08.811851 containerd[1477]: time="2024-10-09T07:25:08.811828256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:25:08.811932 containerd[1477]: time="2024-10-09T07:25:08.811920422Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 07:25:08.812058 containerd[1477]: time="2024-10-09T07:25:08.812044577Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 07:25:08.812175 containerd[1477]: time="2024-10-09T07:25:08.812157301Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:25:08.812222 containerd[1477]: time="2024-10-09T07:25:08.812212575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 07:25:08.812337 containerd[1477]: time="2024-10-09T07:25:08.812323715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:25:08.812660 containerd[1477]: time="2024-10-09T07:25:08.812635559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 07:25:08.812740 containerd[1477]: time="2024-10-09T07:25:08.812727640Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 9 07:25:08.812783 containerd[1477]: time="2024-10-09T07:25:08.812774762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 07:25:08.812970 containerd[1477]: time="2024-10-09T07:25:08.812954663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 07:25:08.813020 containerd[1477]: time="2024-10-09T07:25:08.813011452Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 07:25:08.813136 containerd[1477]: time="2024-10-09T07:25:08.813121398Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 9 07:25:08.813191 containerd[1477]: time="2024-10-09T07:25:08.813181958Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 07:25:08.825321 containerd[1477]: time="2024-10-09T07:25:08.825250969Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 07:25:08.826225 containerd[1477]: time="2024-10-09T07:25:08.826182291Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 07:25:08.826359 containerd[1477]: time="2024-10-09T07:25:08.826336902Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 07:25:08.826467 containerd[1477]: time="2024-10-09T07:25:08.826448662Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 07:25:08.826879 containerd[1477]: time="2024-10-09T07:25:08.826848011Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 07:25:08.827367 containerd[1477]: time="2024-10-09T07:25:08.827344285Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 07:25:08.827470 containerd[1477]: time="2024-10-09T07:25:08.827450591Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 07:25:08.827738 containerd[1477]: time="2024-10-09T07:25:08.827712224Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 07:25:08.827877 containerd[1477]: time="2024-10-09T07:25:08.827855897Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 07:25:08.827977 containerd[1477]: time="2024-10-09T07:25:08.827956731Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 07:25:08.828060 containerd[1477]: time="2024-10-09T07:25:08.828045239Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 07:25:08.828137 containerd[1477]: time="2024-10-09T07:25:08.828118892Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 07:25:08.828217 containerd[1477]: time="2024-10-09T07:25:08.828201087Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 07:25:08.828291 containerd[1477]: time="2024-10-09T07:25:08.828277421Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 07:25:08.828535 containerd[1477]: time="2024-10-09T07:25:08.828517961Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 07:25:08.828615 containerd[1477]: time="2024-10-09T07:25:08.828602828Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 07:25:08.828669 containerd[1477]: time="2024-10-09T07:25:08.828659609Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 07:25:08.828714 containerd[1477]: time="2024-10-09T07:25:08.828705698Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 07:25:08.828758 containerd[1477]: time="2024-10-09T07:25:08.828750008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 07:25:08.828969 containerd[1477]: time="2024-10-09T07:25:08.828948965Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 07:25:08.829365 containerd[1477]: time="2024-10-09T07:25:08.829343289Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 07:25:08.829466 containerd[1477]: time="2024-10-09T07:25:08.829453128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.829536 containerd[1477]: time="2024-10-09T07:25:08.829524321Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 07:25:08.829605 containerd[1477]: time="2024-10-09T07:25:08.829595011Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 07:25:08.830303 containerd[1477]: time="2024-10-09T07:25:08.830284509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.830517 containerd[1477]: time="2024-10-09T07:25:08.830491505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.830616 containerd[1477]: time="2024-10-09T07:25:08.830598065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.830699 containerd[1477]: time="2024-10-09T07:25:08.830684003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.830787 containerd[1477]: time="2024-10-09T07:25:08.830770637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.830869 containerd[1477]: time="2024-10-09T07:25:08.830852603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.830940 containerd[1477]: time="2024-10-09T07:25:08.830925603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.831010 containerd[1477]: time="2024-10-09T07:25:08.830998865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.831060 containerd[1477]: time="2024-10-09T07:25:08.831051375Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 07:25:08.831303 containerd[1477]: time="2024-10-09T07:25:08.831285425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.831368 containerd[1477]: time="2024-10-09T07:25:08.831358218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.831415 containerd[1477]: time="2024-10-09T07:25:08.831405431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.831462 containerd[1477]: time="2024-10-09T07:25:08.831453237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.831522 containerd[1477]: time="2024-10-09T07:25:08.831512717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.831587 containerd[1477]: time="2024-10-09T07:25:08.831576608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.831639 containerd[1477]: time="2024-10-09T07:25:08.831628677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.831712 containerd[1477]: time="2024-10-09T07:25:08.831686698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 07:25:08.832192 containerd[1477]: time="2024-10-09T07:25:08.832125522Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s
EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 07:25:08.832551 containerd[1477]: time="2024-10-09T07:25:08.832534191Z" level=info msg="Connect containerd service" Oct 9 07:25:08.832649 containerd[1477]: time="2024-10-09T07:25:08.832638518Z" level=info msg="using legacy CRI server" Oct 9 07:25:08.832692 containerd[1477]: time="2024-10-09T07:25:08.832683524Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 07:25:08.832870 containerd[1477]: time="2024-10-09T07:25:08.832854863Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 07:25:08.833719 containerd[1477]: time="2024-10-09T07:25:08.833695122Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:25:08.833846 containerd[1477]: time="2024-10-09T07:25:08.833831380Z" level=info msg="loading plugin 
\"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 07:25:08.833985 containerd[1477]: time="2024-10-09T07:25:08.833967943Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 07:25:08.834055 containerd[1477]: time="2024-10-09T07:25:08.834044618Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 07:25:08.834133 containerd[1477]: time="2024-10-09T07:25:08.834120647Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 07:25:08.834663 containerd[1477]: time="2024-10-09T07:25:08.834643302Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 07:25:08.835090 containerd[1477]: time="2024-10-09T07:25:08.833921534Z" level=info msg="Start subscribing containerd event" Oct 9 07:25:08.835483 containerd[1477]: time="2024-10-09T07:25:08.835182200Z" level=info msg="Start recovering state" Oct 9 07:25:08.835483 containerd[1477]: time="2024-10-09T07:25:08.835262682Z" level=info msg="Start event monitor" Oct 9 07:25:08.835483 containerd[1477]: time="2024-10-09T07:25:08.835274337Z" level=info msg="Start snapshots syncer" Oct 9 07:25:08.835483 containerd[1477]: time="2024-10-09T07:25:08.835286658Z" level=info msg="Start cni network conf syncer for default" Oct 9 07:25:08.835483 containerd[1477]: time="2024-10-09T07:25:08.835294487Z" level=info msg="Start streaming server" Oct 9 07:25:08.835648 containerd[1477]: time="2024-10-09T07:25:08.835633805Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 07:25:08.835835 systemd[1]: Started containerd.service - containerd container runtime. 
Oct 9 07:25:08.838529 containerd[1477]: time="2024-10-09T07:25:08.838258220Z" level=info msg="containerd successfully booted in 0.065967s" Oct 9 07:25:08.856784 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 07:25:08.868476 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 07:25:08.886604 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 9 07:25:08.887398 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 07:25:09.320448 tar[1457]: linux-amd64/LICENSE Oct 9 07:25:09.320448 tar[1457]: linux-amd64/README.md Oct 9 07:25:09.337444 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 07:25:10.169320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:25:10.171037 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 07:25:10.173003 systemd[1]: Startup finished in 915ms (kernel) + 5.844s (initrd) + 6.682s (userspace) = 13.442s. Oct 9 07:25:10.178194 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:25:11.190271 kubelet[1566]: E1009 07:25:11.190150 1566 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:25:11.194384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:25:11.194566 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:25:11.194959 systemd[1]: kubelet.service: Consumed 1.766s CPU time. Oct 9 07:25:12.131400 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Oct 9 07:25:12.133278 systemd[1]: Started sshd@0-209.38.154.162:22-147.75.109.163:54166.service - OpenSSH per-connection server daemon (147.75.109.163:54166). Oct 9 07:25:12.215808 sshd[1579]: Accepted publickey for core from 147.75.109.163 port 54166 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:25:12.217033 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:12.231716 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:25:12.231800 systemd-logind[1452]: New session 1 of user core. Oct 9 07:25:12.245568 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:25:12.264286 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 07:25:12.272530 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 07:25:12.277996 (systemd)[1583]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:12.403252 systemd[1583]: Queued start job for default target default.target. Oct 9 07:25:12.412461 systemd[1583]: Created slice app.slice - User Application Slice. Oct 9 07:25:12.412507 systemd[1583]: Reached target paths.target - Paths. Oct 9 07:25:12.412524 systemd[1583]: Reached target timers.target - Timers. Oct 9 07:25:12.414019 systemd[1583]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:25:12.434401 systemd[1583]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:25:12.434547 systemd[1583]: Reached target sockets.target - Sockets. Oct 9 07:25:12.434564 systemd[1583]: Reached target basic.target - Basic System. Oct 9 07:25:12.434610 systemd[1583]: Reached target default.target - Main User Target. Oct 9 07:25:12.434643 systemd[1583]: Startup finished in 148ms. Oct 9 07:25:12.434826 systemd[1]: Started user@500.service - User Manager for UID 500. 
Oct 9 07:25:12.447414 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 07:25:12.515412 systemd[1]: Started sshd@1-209.38.154.162:22-147.75.109.163:54180.service - OpenSSH per-connection server daemon (147.75.109.163:54180). Oct 9 07:25:12.570504 sshd[1594]: Accepted publickey for core from 147.75.109.163 port 54180 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:25:12.572658 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:12.578433 systemd-logind[1452]: New session 2 of user core. Oct 9 07:25:12.588352 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 07:25:12.650555 sshd[1594]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:12.663060 systemd[1]: sshd@1-209.38.154.162:22-147.75.109.163:54180.service: Deactivated successfully. Oct 9 07:25:12.665333 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 07:25:12.668290 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Oct 9 07:25:12.674445 systemd[1]: Started sshd@2-209.38.154.162:22-147.75.109.163:54196.service - OpenSSH per-connection server daemon (147.75.109.163:54196). Oct 9 07:25:12.676173 systemd-logind[1452]: Removed session 2. Oct 9 07:25:12.723991 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 54196 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:25:12.725719 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:12.731525 systemd-logind[1452]: New session 3 of user core. Oct 9 07:25:12.739365 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 07:25:12.798433 sshd[1601]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:12.815472 systemd[1]: sshd@2-209.38.154.162:22-147.75.109.163:54196.service: Deactivated successfully. Oct 9 07:25:12.817618 systemd[1]: session-3.scope: Deactivated successfully. 
Oct 9 07:25:12.819697 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Oct 9 07:25:12.824477 systemd[1]: Started sshd@3-209.38.154.162:22-147.75.109.163:54198.service - OpenSSH per-connection server daemon (147.75.109.163:54198). Oct 9 07:25:12.826058 systemd-logind[1452]: Removed session 3. Oct 9 07:25:12.875085 sshd[1608]: Accepted publickey for core from 147.75.109.163 port 54198 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:25:12.877176 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:12.883325 systemd-logind[1452]: New session 4 of user core. Oct 9 07:25:12.894465 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 07:25:12.968163 sshd[1608]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:12.985006 systemd[1]: sshd@3-209.38.154.162:22-147.75.109.163:54198.service: Deactivated successfully. Oct 9 07:25:12.989004 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 07:25:12.993508 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Oct 9 07:25:12.999081 systemd[1]: Started sshd@4-209.38.154.162:22-147.75.109.163:54208.service - OpenSSH per-connection server daemon (147.75.109.163:54208). Oct 9 07:25:13.002512 systemd-logind[1452]: Removed session 4. Oct 9 07:25:13.063994 sshd[1615]: Accepted publickey for core from 147.75.109.163 port 54208 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:25:13.066021 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:13.071804 systemd-logind[1452]: New session 5 of user core. Oct 9 07:25:13.079363 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 9 07:25:13.177132 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:25:13.177894 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:25:13.195850 sudo[1618]: pam_unix(sudo:session): session closed for user root Oct 9 07:25:13.201060 sshd[1615]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:13.212348 systemd[1]: sshd@4-209.38.154.162:22-147.75.109.163:54208.service: Deactivated successfully. Oct 9 07:25:13.214864 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:25:13.217235 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Oct 9 07:25:13.223450 systemd[1]: Started sshd@5-209.38.154.162:22-147.75.109.163:54220.service - OpenSSH per-connection server daemon (147.75.109.163:54220). Oct 9 07:25:13.224783 systemd-logind[1452]: Removed session 5. Oct 9 07:25:13.274358 sshd[1623]: Accepted publickey for core from 147.75.109.163 port 54220 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:25:13.276287 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:13.282855 systemd-logind[1452]: New session 6 of user core. Oct 9 07:25:13.289365 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 07:25:13.350298 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:25:13.351038 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:25:13.356010 sudo[1627]: pam_unix(sudo:session): session closed for user root Oct 9 07:25:13.363272 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:25:13.363614 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:25:13.380472 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Oct 9 07:25:13.383266 auditctl[1630]: No rules Oct 9 07:25:13.383986 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:25:13.384289 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:25:13.391959 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:25:13.422822 augenrules[1648]: No rules Oct 9 07:25:13.424298 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:25:13.426183 sudo[1626]: pam_unix(sudo:session): session closed for user root Oct 9 07:25:13.430669 sshd[1623]: pam_unix(sshd:session): session closed for user core Oct 9 07:25:13.442105 systemd[1]: sshd@5-209.38.154.162:22-147.75.109.163:54220.service: Deactivated successfully. Oct 9 07:25:13.444173 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 07:25:13.444848 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:25:13.453534 systemd[1]: Started sshd@6-209.38.154.162:22-147.75.109.163:54232.service - OpenSSH per-connection server daemon (147.75.109.163:54232). Oct 9 07:25:13.455211 systemd-logind[1452]: Removed session 6. Oct 9 07:25:13.501980 sshd[1656]: Accepted publickey for core from 147.75.109.163 port 54232 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:25:13.504909 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:25:13.513770 systemd-logind[1452]: New session 7 of user core. Oct 9 07:25:13.518423 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 9 07:25:13.580986 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:25:13.581296 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 9 07:25:13.757815 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:25:13.757954 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 07:25:14.212794 dockerd[1669]: time="2024-10-09T07:25:14.212612786Z" level=info msg="Starting up" Oct 9 07:25:14.291049 dockerd[1669]: time="2024-10-09T07:25:14.290663949Z" level=info msg="Loading containers: start." Oct 9 07:25:14.486107 kernel: Initializing XFRM netlink socket Oct 9 07:25:14.519644 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Oct 9 07:25:14.519790 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Oct 9 07:25:14.537668 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Oct 9 07:25:14.587524 systemd-networkd[1375]: docker0: Link UP Oct 9 07:25:14.587892 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Oct 9 07:25:14.621261 dockerd[1669]: time="2024-10-09T07:25:14.621216837Z" level=info msg="Loading containers: done." 
Oct 9 07:25:14.785755 dockerd[1669]: time="2024-10-09T07:25:14.785600593Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:25:14.785933 dockerd[1669]: time="2024-10-09T07:25:14.785897422Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 9 07:25:14.786258 dockerd[1669]: time="2024-10-09T07:25:14.786015182Z" level=info msg="Daemon has completed initialization" Oct 9 07:25:14.823037 dockerd[1669]: time="2024-10-09T07:25:14.822931224Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:25:14.823196 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:25:15.857330 containerd[1477]: time="2024-10-09T07:25:15.856119106Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 07:25:16.527940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1711156781.mount: Deactivated successfully. 
Oct 9 07:25:18.017339 containerd[1477]: time="2024-10-09T07:25:18.017099145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:18.019628 containerd[1477]: time="2024-10-09T07:25:18.019519552Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213841" Oct 9 07:25:18.020868 containerd[1477]: time="2024-10-09T07:25:18.020798165Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:18.024327 containerd[1477]: time="2024-10-09T07:25:18.024258190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:18.026107 containerd[1477]: time="2024-10-09T07:25:18.025669568Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 2.169412831s" Oct 9 07:25:18.026107 containerd[1477]: time="2024-10-09T07:25:18.025728081Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\"" Oct 9 07:25:18.080671 containerd[1477]: time="2024-10-09T07:25:18.080618749Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 07:25:20.002096 containerd[1477]: time="2024-10-09T07:25:20.000181394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:20.003585 containerd[1477]: time="2024-10-09T07:25:20.003521295Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208673" Oct 9 07:25:20.006341 containerd[1477]: time="2024-10-09T07:25:20.006299044Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:20.013179 containerd[1477]: time="2024-10-09T07:25:20.013124181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:20.014426 containerd[1477]: time="2024-10-09T07:25:20.014374582Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 1.9337073s" Oct 9 07:25:20.014540 containerd[1477]: time="2024-10-09T07:25:20.014430409Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\"" Oct 9 07:25:20.049191 containerd[1477]: time="2024-10-09T07:25:20.049149647Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 07:25:21.196847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 07:25:21.207394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:25:21.449163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:25:21.460996 containerd[1477]: time="2024-10-09T07:25:21.460933215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:21.462237 containerd[1477]: time="2024-10-09T07:25:21.462178959Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320456" Oct 9 07:25:21.462979 containerd[1477]: time="2024-10-09T07:25:21.462942832Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:21.468569 containerd[1477]: time="2024-10-09T07:25:21.468540590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:21.468653 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:25:21.471591 containerd[1477]: time="2024-10-09T07:25:21.470056076Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.420718216s" Oct 9 07:25:21.471591 containerd[1477]: time="2024-10-09T07:25:21.470681561Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\"" Oct 9 07:25:21.517565 containerd[1477]: time="2024-10-09T07:25:21.517509159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 07:25:21.592447 kubelet[1887]: 
E1009 07:25:21.592170 1887 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:25:21.599559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:25:21.599821 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:25:22.848617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000617058.mount: Deactivated successfully. Oct 9 07:25:23.365589 containerd[1477]: time="2024-10-09T07:25:23.365508490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:23.367882 containerd[1477]: time="2024-10-09T07:25:23.367816282Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601750" Oct 9 07:25:23.368946 containerd[1477]: time="2024-10-09T07:25:23.368875210Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:23.371682 containerd[1477]: time="2024-10-09T07:25:23.371623010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:23.372696 containerd[1477]: time="2024-10-09T07:25:23.372205488Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", 
size \"28600769\" in 1.854367122s" Oct 9 07:25:23.372696 containerd[1477]: time="2024-10-09T07:25:23.372257388Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\"" Oct 9 07:25:23.406467 containerd[1477]: time="2024-10-09T07:25:23.406047872Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 07:25:23.957585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617206318.mount: Deactivated successfully. Oct 9 07:25:24.897189 containerd[1477]: time="2024-10-09T07:25:24.897106335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:24.900128 containerd[1477]: time="2024-10-09T07:25:24.900031393Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 07:25:24.901315 containerd[1477]: time="2024-10-09T07:25:24.901253064Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:24.913406 containerd[1477]: time="2024-10-09T07:25:24.912565154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:24.913931 containerd[1477]: time="2024-10-09T07:25:24.913858648Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.507731428s" Oct 9 07:25:24.913931 containerd[1477]: 
time="2024-10-09T07:25:24.913931577Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 07:25:24.951220 containerd[1477]: time="2024-10-09T07:25:24.951169695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 07:25:25.486108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3495708953.mount: Deactivated successfully. Oct 9 07:25:25.496299 containerd[1477]: time="2024-10-09T07:25:25.495324270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:25.497929 containerd[1477]: time="2024-10-09T07:25:25.497855954Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 9 07:25:25.499299 containerd[1477]: time="2024-10-09T07:25:25.499261009Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:25.501359 containerd[1477]: time="2024-10-09T07:25:25.501320382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:25.502201 containerd[1477]: time="2024-10-09T07:25:25.502163865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 550.940479ms" Oct 9 07:25:25.502455 containerd[1477]: time="2024-10-09T07:25:25.502350523Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference 
\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 07:25:25.535053 containerd[1477]: time="2024-10-09T07:25:25.534990010Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 9 07:25:26.115088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628292538.mount: Deactivated successfully. Oct 9 07:25:28.143102 containerd[1477]: time="2024-10-09T07:25:28.141701142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:28.143638 containerd[1477]: time="2024-10-09T07:25:28.143486249Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Oct 9 07:25:28.144031 containerd[1477]: time="2024-10-09T07:25:28.143988276Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:28.148141 containerd[1477]: time="2024-10-09T07:25:28.148074870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:25:28.149703 containerd[1477]: time="2024-10-09T07:25:28.149652150Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.614306268s" Oct 9 07:25:28.149924 containerd[1477]: time="2024-10-09T07:25:28.149895105Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Oct 9 07:25:31.352607 systemd[1]: Stopped kubelet.service - kubelet: The 
Kubernetes Node Agent. Oct 9 07:25:31.371885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:25:31.403572 systemd[1]: Reloading requested from client PID 2077 ('systemctl') (unit session-7.scope)... Oct 9 07:25:31.403621 systemd[1]: Reloading... Oct 9 07:25:31.577138 zram_generator::config[2114]: No configuration found. Oct 9 07:25:31.759324 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:25:31.859618 systemd[1]: Reloading finished in 455 ms. Oct 9 07:25:31.921939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:25:31.928179 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:25:31.932730 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:25:31.933092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:25:31.939819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:25:32.125051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:25:32.137301 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:25:32.210318 kubelet[2170]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:25:32.212089 kubelet[2170]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Oct 9 07:25:32.212089 kubelet[2170]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:25:32.212089 kubelet[2170]: I1009 07:25:32.211150 2170 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:25:33.072119 kubelet[2170]: I1009 07:25:33.070998 2170 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 07:25:33.072119 kubelet[2170]: I1009 07:25:33.071035 2170 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:25:33.072119 kubelet[2170]: I1009 07:25:33.071372 2170 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 07:25:33.096387 kubelet[2170]: I1009 07:25:33.096342 2170 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:25:33.098852 kubelet[2170]: E1009 07:25:33.098812 2170 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://209.38.154.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:33.112862 kubelet[2170]: I1009 07:25:33.112825 2170 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 07:25:33.114474 kubelet[2170]: I1009 07:25:33.114429 2170 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:25:33.116005 kubelet[2170]: I1009 07:25:33.115957 2170 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:25:33.117209 kubelet[2170]: I1009 07:25:33.116446 2170 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 07:25:33.117209 kubelet[2170]: I1009 07:25:33.116473 2170 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:25:33.117209 kubelet[2170]: I1009 
07:25:33.116631 2170 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:25:33.117209 kubelet[2170]: I1009 07:25:33.116827 2170 kubelet.go:396] "Attempting to sync node with API server" Oct 9 07:25:33.117209 kubelet[2170]: I1009 07:25:33.116847 2170 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:25:33.117209 kubelet[2170]: I1009 07:25:33.116895 2170 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:25:33.117209 kubelet[2170]: I1009 07:25:33.116926 2170 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:25:33.117826 kubelet[2170]: W1009 07:25:33.117644 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://209.38.154.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.2-3-9020298c9e&limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:33.117826 kubelet[2170]: E1009 07:25:33.117746 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://209.38.154.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.2-3-9020298c9e&limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:33.119970 kubelet[2170]: W1009 07:25:33.119472 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://209.38.154.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:33.119970 kubelet[2170]: E1009 07:25:33.119554 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://209.38.154.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:33.120859 kubelet[2170]: I1009 07:25:33.120386 2170 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 9 07:25:33.126100 kubelet[2170]: I1009 07:25:33.125386 2170 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:25:33.126100 kubelet[2170]: W1009 07:25:33.125522 2170 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 07:25:33.127312 kubelet[2170]: I1009 07:25:33.127287 2170 server.go:1256] "Started kubelet" Oct 9 07:25:33.128750 kubelet[2170]: I1009 07:25:33.128724 2170 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:25:33.136727 kubelet[2170]: E1009 07:25:33.136621 2170 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.154.162:6443/api/v1/namespaces/default/events\": dial tcp 209.38.154.162:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.2.2-3-9020298c9e.17fcb80f5cf886a9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.2.2-3-9020298c9e,UID:ci-3975.2.2-3-9020298c9e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.2.2-3-9020298c9e,},FirstTimestamp:2024-10-09 07:25:33.127239337 +0000 UTC m=+0.980556955,LastTimestamp:2024-10-09 07:25:33.127239337 +0000 UTC m=+0.980556955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.2.2-3-9020298c9e,}" Oct 9 07:25:33.139686 kubelet[2170]: I1009 07:25:33.139614 2170 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:25:33.141172 kubelet[2170]: I1009 07:25:33.141151 2170 server.go:461] "Adding debug handlers to kubelet server" Oct 9 07:25:33.142556 kubelet[2170]: I1009 07:25:33.142531 2170 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:25:33.145534 kubelet[2170]: I1009 07:25:33.144305 2170 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:25:33.145534 kubelet[2170]: I1009 07:25:33.144466 2170 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:25:33.147225 kubelet[2170]: E1009 07:25:33.147191 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.154.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.2-3-9020298c9e?timeout=10s\": dial tcp 209.38.154.162:6443: connect: connection refused" interval="200ms" Oct 9 07:25:33.147754 kubelet[2170]: I1009 07:25:33.147721 2170 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 07:25:33.149744 kubelet[2170]: I1009 07:25:33.149714 2170 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 07:25:33.152847 kubelet[2170]: W1009 07:25:33.152791 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://209.38.154.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:33.152847 kubelet[2170]: E1009 07:25:33.152847 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://209.38.154.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:33.153798 kubelet[2170]: I1009 07:25:33.153743 2170 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:25:33.153991 kubelet[2170]: I1009 07:25:33.153976 2170 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:25:33.157093 kubelet[2170]: I1009 07:25:33.155038 2170 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:25:33.165061 kubelet[2170]: E1009 07:25:33.165020 2170 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:25:33.177643 kubelet[2170]: I1009 07:25:33.177608 2170 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:25:33.180189 kubelet[2170]: I1009 07:25:33.179560 2170 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:25:33.180189 kubelet[2170]: I1009 07:25:33.179615 2170 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:25:33.180189 kubelet[2170]: I1009 07:25:33.179700 2170 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 07:25:33.180189 kubelet[2170]: E1009 07:25:33.179767 2170 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:25:33.183353 kubelet[2170]: W1009 07:25:33.183317 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://209.38.154.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:33.183353 kubelet[2170]: E1009 07:25:33.183351 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://209.38.154.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:33.185033 kubelet[2170]: I1009 07:25:33.184999 2170 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:25:33.185241 kubelet[2170]: I1009 
07:25:33.185226 2170 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:25:33.187138 kubelet[2170]: I1009 07:25:33.186969 2170 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:25:33.191505 kubelet[2170]: I1009 07:25:33.191468 2170 policy_none.go:49] "None policy: Start" Oct 9 07:25:33.192767 kubelet[2170]: I1009 07:25:33.192734 2170 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:25:33.192900 kubelet[2170]: I1009 07:25:33.192780 2170 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:25:33.207800 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 07:25:33.225150 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 07:25:33.231268 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 9 07:25:33.242506 kubelet[2170]: I1009 07:25:33.242459 2170 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:25:33.244084 kubelet[2170]: I1009 07:25:33.243823 2170 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:25:33.246907 kubelet[2170]: I1009 07:25:33.246355 2170 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.246907 kubelet[2170]: E1009 07:25:33.246869 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.154.162:6443/api/v1/nodes\": dial tcp 209.38.154.162:6443: connect: connection refused" node="ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.248301 kubelet[2170]: E1009 07:25:33.248175 2170 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.2-3-9020298c9e\" not found" Oct 9 07:25:33.280685 kubelet[2170]: I1009 07:25:33.280624 2170 topology_manager.go:215] "Topology Admit Handler" 
podUID="d5e9a53d045799e261ac7aa7a03f23c1" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.284733 kubelet[2170]: I1009 07:25:33.284698 2170 topology_manager.go:215] "Topology Admit Handler" podUID="8b04c2273a0b7a995569e5391ffe0897" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.286460 kubelet[2170]: I1009 07:25:33.285872 2170 topology_manager.go:215] "Topology Admit Handler" podUID="c4b9c46dd8c543abb0251bfef29a212a" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.294138 systemd[1]: Created slice kubepods-burstable-podd5e9a53d045799e261ac7aa7a03f23c1.slice - libcontainer container kubepods-burstable-podd5e9a53d045799e261ac7aa7a03f23c1.slice. Oct 9 07:25:33.320300 systemd[1]: Created slice kubepods-burstable-pod8b04c2273a0b7a995569e5391ffe0897.slice - libcontainer container kubepods-burstable-pod8b04c2273a0b7a995569e5391ffe0897.slice. Oct 9 07:25:33.337403 systemd[1]: Created slice kubepods-burstable-podc4b9c46dd8c543abb0251bfef29a212a.slice - libcontainer container kubepods-burstable-podc4b9c46dd8c543abb0251bfef29a212a.slice. 
Oct 9 07:25:33.347781 kubelet[2170]: E1009 07:25:33.347736 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.154.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.2-3-9020298c9e?timeout=10s\": dial tcp 209.38.154.162:6443: connect: connection refused" interval="400ms" Oct 9 07:25:33.352803 kubelet[2170]: I1009 07:25:33.352302 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5e9a53d045799e261ac7aa7a03f23c1-ca-certs\") pod \"kube-apiserver-ci-3975.2.2-3-9020298c9e\" (UID: \"d5e9a53d045799e261ac7aa7a03f23c1\") " pod="kube-system/kube-apiserver-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.352803 kubelet[2170]: I1009 07:25:33.352371 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5e9a53d045799e261ac7aa7a03f23c1-k8s-certs\") pod \"kube-apiserver-ci-3975.2.2-3-9020298c9e\" (UID: \"d5e9a53d045799e261ac7aa7a03f23c1\") " pod="kube-system/kube-apiserver-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.352803 kubelet[2170]: I1009 07:25:33.352405 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5e9a53d045799e261ac7aa7a03f23c1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.2-3-9020298c9e\" (UID: \"d5e9a53d045799e261ac7aa7a03f23c1\") " pod="kube-system/kube-apiserver-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.352803 kubelet[2170]: I1009 07:25:33.352457 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b04c2273a0b7a995569e5391ffe0897-ca-certs\") pod \"kube-controller-manager-ci-3975.2.2-3-9020298c9e\" (UID: \"8b04c2273a0b7a995569e5391ffe0897\") " 
pod="kube-system/kube-controller-manager-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.352803 kubelet[2170]: I1009 07:25:33.352487 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b04c2273a0b7a995569e5391ffe0897-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.2-3-9020298c9e\" (UID: \"8b04c2273a0b7a995569e5391ffe0897\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.353198 kubelet[2170]: I1009 07:25:33.352519 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b04c2273a0b7a995569e5391ffe0897-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.2-3-9020298c9e\" (UID: \"8b04c2273a0b7a995569e5391ffe0897\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.353198 kubelet[2170]: I1009 07:25:33.352547 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4b9c46dd8c543abb0251bfef29a212a-kubeconfig\") pod \"kube-scheduler-ci-3975.2.2-3-9020298c9e\" (UID: \"c4b9c46dd8c543abb0251bfef29a212a\") " pod="kube-system/kube-scheduler-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.353198 kubelet[2170]: I1009 07:25:33.352576 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b04c2273a0b7a995569e5391ffe0897-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.2-3-9020298c9e\" (UID: \"8b04c2273a0b7a995569e5391ffe0897\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.353198 kubelet[2170]: I1009 07:25:33.352606 2170 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/8b04c2273a0b7a995569e5391ffe0897-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.2-3-9020298c9e\" (UID: \"8b04c2273a0b7a995569e5391ffe0897\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.448631 kubelet[2170]: I1009 07:25:33.448595 2170 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.449060 kubelet[2170]: E1009 07:25:33.449038 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.154.162:6443/api/v1/nodes\": dial tcp 209.38.154.162:6443: connect: connection refused" node="ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.617773 kubelet[2170]: E1009 07:25:33.617627 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:25:33.618966 containerd[1477]: time="2024-10-09T07:25:33.618836939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.2-3-9020298c9e,Uid:d5e9a53d045799e261ac7aa7a03f23c1,Namespace:kube-system,Attempt:0,}" Oct 9 07:25:33.633098 kubelet[2170]: E1009 07:25:33.633009 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:25:33.637562 containerd[1477]: time="2024-10-09T07:25:33.637481485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.2-3-9020298c9e,Uid:8b04c2273a0b7a995569e5391ffe0897,Namespace:kube-system,Attempt:0,}" Oct 9 07:25:33.642139 kubelet[2170]: E1009 07:25:33.642104 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:25:33.642889 containerd[1477]: 
time="2024-10-09T07:25:33.642844038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.2-3-9020298c9e,Uid:c4b9c46dd8c543abb0251bfef29a212a,Namespace:kube-system,Attempt:0,}" Oct 9 07:25:33.748723 kubelet[2170]: E1009 07:25:33.748655 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.154.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.2-3-9020298c9e?timeout=10s\": dial tcp 209.38.154.162:6443: connect: connection refused" interval="800ms" Oct 9 07:25:33.851524 kubelet[2170]: I1009 07:25:33.851475 2170 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-3-9020298c9e" Oct 9 07:25:33.851886 kubelet[2170]: E1009 07:25:33.851841 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.154.162:6443/api/v1/nodes\": dial tcp 209.38.154.162:6443: connect: connection refused" node="ci-3975.2.2-3-9020298c9e" Oct 9 07:25:34.003975 kubelet[2170]: W1009 07:25:34.003898 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://209.38.154.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:34.004176 kubelet[2170]: E1009 07:25:34.003993 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://209.38.154.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:34.113841 kubelet[2170]: W1009 07:25:34.113749 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://209.38.154.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:34.113841 kubelet[2170]: E1009 
07:25:34.113827 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://209.38.154.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused Oct 9 07:25:34.151371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521331153.mount: Deactivated successfully. Oct 9 07:25:34.164571 containerd[1477]: time="2024-10-09T07:25:34.163500835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:25:34.166463 containerd[1477]: time="2024-10-09T07:25:34.166404297Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:25:34.168282 containerd[1477]: time="2024-10-09T07:25:34.168242093Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:25:34.169567 containerd[1477]: time="2024-10-09T07:25:34.169530302Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:25:34.170406 containerd[1477]: time="2024-10-09T07:25:34.170364362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:25:34.172841 containerd[1477]: time="2024-10-09T07:25:34.172585327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 07:25:34.172841 containerd[1477]: time="2024-10-09T07:25:34.172798979Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:25:34.178103 containerd[1477]: time="2024-10-09T07:25:34.177054934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 07:25:34.178103 containerd[1477]: time="2024-10-09T07:25:34.178019166Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.061889ms"
Oct 9 07:25:34.179501 containerd[1477]: time="2024-10-09T07:25:34.179444677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.052417ms"
Oct 9 07:25:34.184508 containerd[1477]: time="2024-10-09T07:25:34.184455816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.623741ms"
Oct 9 07:25:34.376729 containerd[1477]: time="2024-10-09T07:25:34.376277774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:25:34.376729 containerd[1477]: time="2024-10-09T07:25:34.376353375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:34.378735 containerd[1477]: time="2024-10-09T07:25:34.378372084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:25:34.378735 containerd[1477]: time="2024-10-09T07:25:34.378451181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:34.378735 containerd[1477]: time="2024-10-09T07:25:34.378494741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:25:34.378735 containerd[1477]: time="2024-10-09T07:25:34.378532861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:34.379275 containerd[1477]: time="2024-10-09T07:25:34.379139342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:25:34.379275 containerd[1477]: time="2024-10-09T07:25:34.379237811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:34.389453 containerd[1477]: time="2024-10-09T07:25:34.388546238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:25:34.389453 containerd[1477]: time="2024-10-09T07:25:34.388629371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:34.389453 containerd[1477]: time="2024-10-09T07:25:34.388650682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:25:34.389453 containerd[1477]: time="2024-10-09T07:25:34.388663309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:34.409013 kubelet[2170]: W1009 07:25:34.408745 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://209.38.154.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused
Oct 9 07:25:34.409013 kubelet[2170]: E1009 07:25:34.408813 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://209.38.154.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused
Oct 9 07:25:34.419342 systemd[1]: Started cri-containerd-96dac721b3fd5bda27ed4b42364cfc90864011089cd270c36b58ac5fb0f45425.scope - libcontainer container 96dac721b3fd5bda27ed4b42364cfc90864011089cd270c36b58ac5fb0f45425.
Oct 9 07:25:34.435355 systemd[1]: Started cri-containerd-9bc3b185e48545b558484bbe1f7b1d2cd5c428b50848909aabe9ae8d74bf40f4.scope - libcontainer container 9bc3b185e48545b558484bbe1f7b1d2cd5c428b50848909aabe9ae8d74bf40f4.
Oct 9 07:25:34.452402 systemd[1]: Started cri-containerd-a15875d1d81d89e0f31b30d6244ab00dedd4456cc643cd7d71cf8b49991c9051.scope - libcontainer container a15875d1d81d89e0f31b30d6244ab00dedd4456cc643cd7d71cf8b49991c9051.
Oct 9 07:25:34.516952 kubelet[2170]: W1009 07:25:34.516880 2170 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://209.38.154.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.2-3-9020298c9e&limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused
Oct 9 07:25:34.516952 kubelet[2170]: E1009 07:25:34.516954 2170 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://209.38.154.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.2-3-9020298c9e&limit=500&resourceVersion=0": dial tcp 209.38.154.162:6443: connect: connection refused
Oct 9 07:25:34.542102 containerd[1477]: time="2024-10-09T07:25:34.541672812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.2-3-9020298c9e,Uid:d5e9a53d045799e261ac7aa7a03f23c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bc3b185e48545b558484bbe1f7b1d2cd5c428b50848909aabe9ae8d74bf40f4\""
Oct 9 07:25:34.544155 kubelet[2170]: E1009 07:25:34.544114 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:34.551772 kubelet[2170]: E1009 07:25:34.551737 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.154.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.2-3-9020298c9e?timeout=10s\": dial tcp 209.38.154.162:6443: connect: connection refused" interval="1.6s"
Oct 9 07:25:34.553663 containerd[1477]: time="2024-10-09T07:25:34.553365390Z" level=info msg="CreateContainer within sandbox \"9bc3b185e48545b558484bbe1f7b1d2cd5c428b50848909aabe9ae8d74bf40f4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 9 07:25:34.554254 containerd[1477]: time="2024-10-09T07:25:34.554151556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.2-3-9020298c9e,Uid:c4b9c46dd8c543abb0251bfef29a212a,Namespace:kube-system,Attempt:0,} returns sandbox id \"96dac721b3fd5bda27ed4b42364cfc90864011089cd270c36b58ac5fb0f45425\""
Oct 9 07:25:34.557293 containerd[1477]: time="2024-10-09T07:25:34.557175234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.2-3-9020298c9e,Uid:8b04c2273a0b7a995569e5391ffe0897,Namespace:kube-system,Attempt:0,} returns sandbox id \"a15875d1d81d89e0f31b30d6244ab00dedd4456cc643cd7d71cf8b49991c9051\""
Oct 9 07:25:34.558852 kubelet[2170]: E1009 07:25:34.558686 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:34.559596 kubelet[2170]: E1009 07:25:34.559138 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:34.564049 containerd[1477]: time="2024-10-09T07:25:34.563284636Z" level=info msg="CreateContainer within sandbox \"96dac721b3fd5bda27ed4b42364cfc90864011089cd270c36b58ac5fb0f45425\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 9 07:25:34.564916 containerd[1477]: time="2024-10-09T07:25:34.564879481Z" level=info msg="CreateContainer within sandbox \"a15875d1d81d89e0f31b30d6244ab00dedd4456cc643cd7d71cf8b49991c9051\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 9 07:25:34.579529 containerd[1477]: time="2024-10-09T07:25:34.579469942Z" level=info msg="CreateContainer within sandbox \"9bc3b185e48545b558484bbe1f7b1d2cd5c428b50848909aabe9ae8d74bf40f4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"131dd673eec4f15fcf3cecf0b021834952bee2638e09105fca851a6adcdb261b\""
Oct 9 07:25:34.581207 containerd[1477]: time="2024-10-09T07:25:34.581155044Z" level=info msg="StartContainer for \"131dd673eec4f15fcf3cecf0b021834952bee2638e09105fca851a6adcdb261b\""
Oct 9 07:25:34.592840 containerd[1477]: time="2024-10-09T07:25:34.592459035Z" level=info msg="CreateContainer within sandbox \"a15875d1d81d89e0f31b30d6244ab00dedd4456cc643cd7d71cf8b49991c9051\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"22e70141e24d03bb35b12e8542beedb92707fdfc4edac29af97682593d15371f\""
Oct 9 07:25:34.594123 containerd[1477]: time="2024-10-09T07:25:34.593920480Z" level=info msg="StartContainer for \"22e70141e24d03bb35b12e8542beedb92707fdfc4edac29af97682593d15371f\""
Oct 9 07:25:34.596192 containerd[1477]: time="2024-10-09T07:25:34.596033966Z" level=info msg="CreateContainer within sandbox \"96dac721b3fd5bda27ed4b42364cfc90864011089cd270c36b58ac5fb0f45425\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bd5bf28dfc3b69b4c42fdc32fabe0f68e0707a4a2d0d15bba768c73c324825ff\""
Oct 9 07:25:34.597447 containerd[1477]: time="2024-10-09T07:25:34.597092472Z" level=info msg="StartContainer for \"bd5bf28dfc3b69b4c42fdc32fabe0f68e0707a4a2d0d15bba768c73c324825ff\""
Oct 9 07:25:34.635332 systemd[1]: Started cri-containerd-131dd673eec4f15fcf3cecf0b021834952bee2638e09105fca851a6adcdb261b.scope - libcontainer container 131dd673eec4f15fcf3cecf0b021834952bee2638e09105fca851a6adcdb261b.
Oct 9 07:25:34.651325 systemd[1]: Started cri-containerd-22e70141e24d03bb35b12e8542beedb92707fdfc4edac29af97682593d15371f.scope - libcontainer container 22e70141e24d03bb35b12e8542beedb92707fdfc4edac29af97682593d15371f.
Oct 9 07:25:34.657600 kubelet[2170]: I1009 07:25:34.657158 2170 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:34.659267 kubelet[2170]: E1009 07:25:34.658178 2170 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.154.162:6443/api/v1/nodes\": dial tcp 209.38.154.162:6443: connect: connection refused" node="ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:34.675320 systemd[1]: Started cri-containerd-bd5bf28dfc3b69b4c42fdc32fabe0f68e0707a4a2d0d15bba768c73c324825ff.scope - libcontainer container bd5bf28dfc3b69b4c42fdc32fabe0f68e0707a4a2d0d15bba768c73c324825ff.
Oct 9 07:25:34.755929 containerd[1477]: time="2024-10-09T07:25:34.755674464Z" level=info msg="StartContainer for \"131dd673eec4f15fcf3cecf0b021834952bee2638e09105fca851a6adcdb261b\" returns successfully"
Oct 9 07:25:34.759218 containerd[1477]: time="2024-10-09T07:25:34.758036000Z" level=info msg="StartContainer for \"22e70141e24d03bb35b12e8542beedb92707fdfc4edac29af97682593d15371f\" returns successfully"
Oct 9 07:25:34.791527 containerd[1477]: time="2024-10-09T07:25:34.791041867Z" level=info msg="StartContainer for \"bd5bf28dfc3b69b4c42fdc32fabe0f68e0707a4a2d0d15bba768c73c324825ff\" returns successfully"
Oct 9 07:25:35.219815 kubelet[2170]: E1009 07:25:35.219758 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:35.225330 kubelet[2170]: E1009 07:25:35.223508 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:35.227168 kubelet[2170]: E1009 07:25:35.227005 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:36.228460 kubelet[2170]: E1009 07:25:36.228423 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:36.262781 kubelet[2170]: I1009 07:25:36.262324 2170 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:36.707677 kubelet[2170]: E1009 07:25:36.707595 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:37.147443 kubelet[2170]: I1009 07:25:37.147026 2170 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:37.220032 kubelet[2170]: E1009 07:25:37.219978 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Oct 9 07:25:37.245485 kubelet[2170]: E1009 07:25:37.245433 2170 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.2.2-3-9020298c9e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:37.246255 kubelet[2170]: E1009 07:25:37.245925 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:38.121459 kubelet[2170]: I1009 07:25:38.121400 2170 apiserver.go:52] "Watching apiserver"
Oct 9 07:25:38.150466 kubelet[2170]: I1009 07:25:38.150409 2170 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 9 07:25:39.653481 kubelet[2170]: W1009 07:25:39.653403 2170 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:25:39.657536 kubelet[2170]: E1009 07:25:39.656116 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:40.236792 kubelet[2170]: E1009 07:25:40.234893 2170 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:40.357954 systemd[1]: Reloading requested from client PID 2443 ('systemctl') (unit session-7.scope)...
Oct 9 07:25:40.357976 systemd[1]: Reloading...
Oct 9 07:25:40.516102 zram_generator::config[2480]: No configuration found.
Oct 9 07:25:40.726670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:25:40.845186 systemd[1]: Reloading finished in 486 ms.
Oct 9 07:25:40.902184 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:25:40.902900 kubelet[2170]: I1009 07:25:40.902261 2170 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 07:25:40.917179 systemd[1]: kubelet.service: Deactivated successfully.
Oct 9 07:25:40.917419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:25:40.917485 systemd[1]: kubelet.service: Consumed 1.513s CPU time, 110.5M memory peak, 0B memory swap peak.
Oct 9 07:25:40.927595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 07:25:41.128384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 07:25:41.133054 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 07:25:41.233617 kubelet[2531]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:25:41.233617 kubelet[2531]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 07:25:41.233617 kubelet[2531]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 07:25:41.234445 kubelet[2531]: I1009 07:25:41.233655 2531 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 07:25:41.240140 kubelet[2531]: I1009 07:25:41.240062 2531 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 9 07:25:41.241158 kubelet[2531]: I1009 07:25:41.240325 2531 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 07:25:41.241158 kubelet[2531]: I1009 07:25:41.240592 2531 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 9 07:25:41.246334 kubelet[2531]: I1009 07:25:41.244944 2531 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 9 07:25:41.255182 kubelet[2531]: I1009 07:25:41.254540 2531 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 07:25:41.268616 kubelet[2531]: I1009 07:25:41.268564 2531 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 07:25:41.268868 kubelet[2531]: I1009 07:25:41.268847 2531 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 07:25:41.269791 kubelet[2531]: I1009 07:25:41.269240 2531 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 07:25:41.269791 kubelet[2531]: I1009 07:25:41.269390 2531 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 07:25:41.269791 kubelet[2531]: I1009 07:25:41.269402 2531 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 07:25:41.269791 kubelet[2531]: I1009 07:25:41.269458 2531 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:25:41.273554 kubelet[2531]: I1009 07:25:41.271151 2531 kubelet.go:396] "Attempting to sync node with API server"
Oct 9 07:25:41.273554 kubelet[2531]: I1009 07:25:41.271191 2531 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 07:25:41.273554 kubelet[2531]: I1009 07:25:41.271227 2531 kubelet.go:312] "Adding apiserver pod source"
Oct 9 07:25:41.273554 kubelet[2531]: I1009 07:25:41.271244 2531 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 07:25:41.278223 kubelet[2531]: I1009 07:25:41.276629 2531 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 9 07:25:41.280929 kubelet[2531]: I1009 07:25:41.280012 2531 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 07:25:41.286498 kubelet[2531]: I1009 07:25:41.285633 2531 server.go:1256] "Started kubelet"
Oct 9 07:25:41.302907 kubelet[2531]: I1009 07:25:41.302862 2531 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 07:25:41.304535 kubelet[2531]: I1009 07:25:41.304495 2531 server.go:461] "Adding debug handlers to kubelet server"
Oct 9 07:25:41.308159 kubelet[2531]: I1009 07:25:41.308116 2531 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 07:25:41.309095 kubelet[2531]: I1009 07:25:41.308563 2531 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 07:25:41.310432 kubelet[2531]: I1009 07:25:41.309877 2531 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 07:25:41.329957 kubelet[2531]: I1009 07:25:41.329914 2531 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 07:25:41.332638 kubelet[2531]: I1009 07:25:41.332596 2531 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 9 07:25:41.333060 kubelet[2531]: I1009 07:25:41.333034 2531 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 9 07:25:41.339328 kubelet[2531]: I1009 07:25:41.338789 2531 factory.go:221] Registration of the systemd container factory successfully
Oct 9 07:25:41.341102 kubelet[2531]: I1009 07:25:41.339999 2531 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 07:25:41.350582 kubelet[2531]: E1009 07:25:41.350543 2531 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 07:25:41.355412 kubelet[2531]: I1009 07:25:41.355017 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 07:25:41.358513 kubelet[2531]: I1009 07:25:41.356908 2531 factory.go:221] Registration of the containerd container factory successfully
Oct 9 07:25:41.358964 kubelet[2531]: I1009 07:25:41.358933 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 07:25:41.359211 kubelet[2531]: I1009 07:25:41.359194 2531 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 07:25:41.359320 kubelet[2531]: I1009 07:25:41.359309 2531 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 9 07:25:41.359473 kubelet[2531]: E1009 07:25:41.359460 2531 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 07:25:41.433319 kubelet[2531]: I1009 07:25:41.433103 2531 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.461171 kubelet[2531]: E1009 07:25:41.461130 2531 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 9 07:25:41.468831 kubelet[2531]: I1009 07:25:41.468784 2531 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.469536 kubelet[2531]: I1009 07:25:41.469144 2531 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.487651 kubelet[2531]: I1009 07:25:41.486944 2531 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 07:25:41.487651 kubelet[2531]: I1009 07:25:41.486968 2531 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 07:25:41.487651 kubelet[2531]: I1009 07:25:41.486995 2531 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 07:25:41.487651 kubelet[2531]: I1009 07:25:41.487274 2531 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 9 07:25:41.487651 kubelet[2531]: I1009 07:25:41.487299 2531 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 9 07:25:41.487651 kubelet[2531]: I1009 07:25:41.487306 2531 policy_none.go:49] "None policy: Start"
Oct 9 07:25:41.489662 kubelet[2531]: I1009 07:25:41.488556 2531 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 07:25:41.489662 kubelet[2531]: I1009 07:25:41.488601 2531 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 07:25:41.489662 kubelet[2531]: I1009 07:25:41.488816 2531 state_mem.go:75] "Updated machine memory state"
Oct 9 07:25:41.503506 kubelet[2531]: I1009 07:25:41.503168 2531 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 07:25:41.503685 kubelet[2531]: I1009 07:25:41.503551 2531 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 07:25:41.663615 kubelet[2531]: I1009 07:25:41.662264 2531 topology_manager.go:215] "Topology Admit Handler" podUID="d5e9a53d045799e261ac7aa7a03f23c1" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.663615 kubelet[2531]: I1009 07:25:41.662410 2531 topology_manager.go:215] "Topology Admit Handler" podUID="8b04c2273a0b7a995569e5391ffe0897" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.663615 kubelet[2531]: I1009 07:25:41.662461 2531 topology_manager.go:215] "Topology Admit Handler" podUID="c4b9c46dd8c543abb0251bfef29a212a" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.675777 kubelet[2531]: W1009 07:25:41.674119 2531 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:25:41.676270 kubelet[2531]: W1009 07:25:41.676169 2531 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:25:41.676433 kubelet[2531]: E1009 07:25:41.676408 2531 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975.2.2-3-9020298c9e\" already exists" pod="kube-system/kube-scheduler-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.678142 kubelet[2531]: W1009 07:25:41.678106 2531 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Oct 9 07:25:41.735906 kubelet[2531]: I1009 07:25:41.735521 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b04c2273a0b7a995569e5391ffe0897-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.2-3-9020298c9e\" (UID: \"8b04c2273a0b7a995569e5391ffe0897\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.735906 kubelet[2531]: I1009 07:25:41.735570 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4b9c46dd8c543abb0251bfef29a212a-kubeconfig\") pod \"kube-scheduler-ci-3975.2.2-3-9020298c9e\" (UID: \"c4b9c46dd8c543abb0251bfef29a212a\") " pod="kube-system/kube-scheduler-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.735906 kubelet[2531]: I1009 07:25:41.735591 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5e9a53d045799e261ac7aa7a03f23c1-ca-certs\") pod \"kube-apiserver-ci-3975.2.2-3-9020298c9e\" (UID: \"d5e9a53d045799e261ac7aa7a03f23c1\") " pod="kube-system/kube-apiserver-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.735906 kubelet[2531]: I1009 07:25:41.735615 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5e9a53d045799e261ac7aa7a03f23c1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.2-3-9020298c9e\" (UID: \"d5e9a53d045799e261ac7aa7a03f23c1\") " pod="kube-system/kube-apiserver-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.735906 kubelet[2531]: I1009 07:25:41.735645 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b04c2273a0b7a995569e5391ffe0897-ca-certs\") pod \"kube-controller-manager-ci-3975.2.2-3-9020298c9e\" (UID: \"8b04c2273a0b7a995569e5391ffe0897\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.736308 kubelet[2531]: I1009 07:25:41.735664 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b04c2273a0b7a995569e5391ffe0897-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.2-3-9020298c9e\" (UID: \"8b04c2273a0b7a995569e5391ffe0897\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.736308 kubelet[2531]: I1009 07:25:41.735680 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5e9a53d045799e261ac7aa7a03f23c1-k8s-certs\") pod \"kube-apiserver-ci-3975.2.2-3-9020298c9e\" (UID: \"d5e9a53d045799e261ac7aa7a03f23c1\") " pod="kube-system/kube-apiserver-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.736308 kubelet[2531]: I1009 07:25:41.735699 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b04c2273a0b7a995569e5391ffe0897-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.2-3-9020298c9e\" (UID: \"8b04c2273a0b7a995569e5391ffe0897\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.736308 kubelet[2531]: I1009 07:25:41.735720 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b04c2273a0b7a995569e5391ffe0897-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.2-3-9020298c9e\" (UID: \"8b04c2273a0b7a995569e5391ffe0897\") " pod="kube-system/kube-controller-manager-ci-3975.2.2-3-9020298c9e"
Oct 9 07:25:41.977968 kubelet[2531]: E1009 07:25:41.977687 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:41.980891 kubelet[2531]: E1009 07:25:41.978798 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:41.980891 kubelet[2531]: E1009 07:25:41.979340 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:42.272964 kubelet[2531]: I1009 07:25:42.272539 2531 apiserver.go:52] "Watching apiserver"
Oct 9 07:25:42.334112 kubelet[2531]: I1009 07:25:42.333995 2531 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 9 07:25:42.422928 kubelet[2531]: E1009 07:25:42.422352 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:42.423705 kubelet[2531]: E1009 07:25:42.423682 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:42.424536 kubelet[2531]: E1009 07:25:42.424520 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:42.549044 kubelet[2531]: I1009 07:25:42.548852 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.2.2-3-9020298c9e" podStartSLOduration=1.548683472 podStartE2EDuration="1.548683472s" podCreationTimestamp="2024-10-09 07:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:25:42.505256494 +0000 UTC m=+1.365805313" watchObservedRunningTime="2024-10-09 07:25:42.548683472 +0000 UTC m=+1.409232258"
Oct 9 07:25:42.550228 kubelet[2531]: I1009 07:25:42.550025 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.2.2-3-9020298c9e" podStartSLOduration=3.549976619 podStartE2EDuration="3.549976619s" podCreationTimestamp="2024-10-09 07:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:25:42.549327134 +0000 UTC m=+1.409875920" watchObservedRunningTime="2024-10-09 07:25:42.549976619 +0000 UTC m=+1.410525405"
Oct 9 07:25:42.584384 kubelet[2531]: I1009 07:25:42.584167 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.2.2-3-9020298c9e" podStartSLOduration=1.584118409 podStartE2EDuration="1.584118409s" podCreationTimestamp="2024-10-09 07:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:25:42.583129647 +0000 UTC m=+1.443678441" watchObservedRunningTime="2024-10-09 07:25:42.584118409 +0000 UTC m=+1.444667194"
Oct 9 07:25:43.425299 kubelet[2531]: E1009 07:25:43.425185 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:44.426506 kubelet[2531]: E1009 07:25:44.426458 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:44.937325 systemd-timesyncd[1349]: Contacted time server 209.51.161.238:123 (2.flatcar.pool.ntp.org).
Oct 9 07:25:44.937440 systemd-timesyncd[1349]: Initial clock synchronization to Wed 2024-10-09 07:25:44.937914 UTC.
Oct 9 07:25:46.829272 sudo[1659]: pam_unix(sudo:session): session closed for user root
Oct 9 07:25:46.835816 sshd[1656]: pam_unix(sshd:session): session closed for user core
Oct 9 07:25:46.841169 systemd[1]: sshd@6-209.38.154.162:22-147.75.109.163:54232.service: Deactivated successfully.
Oct 9 07:25:46.846310 systemd[1]: session-7.scope: Deactivated successfully.
Oct 9 07:25:46.847013 systemd[1]: session-7.scope: Consumed 5.674s CPU time, 134.2M memory peak, 0B memory swap peak.
Oct 9 07:25:46.849502 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Oct 9 07:25:46.851717 systemd-logind[1452]: Removed session 7.
Oct 9 07:25:50.170851 kubelet[2531]: E1009 07:25:50.170805 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:50.441290 kubelet[2531]: E1009 07:25:50.440999 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:50.737452 kubelet[2531]: E1009 07:25:50.737412 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:51.443729 kubelet[2531]: E1009 07:25:51.443657 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:52.734230 update_engine[1453]: I1009 07:25:52.733446 1453 update_attempter.cc:509] Updating boot flags...
Oct 9 07:25:52.769134 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2614)
Oct 9 07:25:52.835140 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2615)
Oct 9 07:25:53.605172 kubelet[2531]: E1009 07:25:53.605125 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:54.216503 kubelet[2531]: I1009 07:25:54.216470 2531 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 9 07:25:54.217358 containerd[1477]: time="2024-10-09T07:25:54.217312465Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 9 07:25:54.220332 kubelet[2531]: I1009 07:25:54.217963 2531 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 9 07:25:54.447549 kubelet[2531]: E1009 07:25:54.447500 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:54.489867 kubelet[2531]: I1009 07:25:54.488824 2531 topology_manager.go:215] "Topology Admit Handler" podUID="b880b9d9-7b14-4606-bfd2-4f5bce91a5f5" podNamespace="kube-system" podName="kube-proxy-lxzkk"
Oct 9 07:25:54.502135 systemd[1]: Created slice kubepods-besteffort-podb880b9d9_7b14_4606_bfd2_4f5bce91a5f5.slice - libcontainer container kubepods-besteffort-podb880b9d9_7b14_4606_bfd2_4f5bce91a5f5.slice.
Oct 9 07:25:54.519826 kubelet[2531]: I1009 07:25:54.519672 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b880b9d9-7b14-4606-bfd2-4f5bce91a5f5-lib-modules\") pod \"kube-proxy-lxzkk\" (UID: \"b880b9d9-7b14-4606-bfd2-4f5bce91a5f5\") " pod="kube-system/kube-proxy-lxzkk"
Oct 9 07:25:54.519826 kubelet[2531]: I1009 07:25:54.519726 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlxxn\" (UniqueName: \"kubernetes.io/projected/b880b9d9-7b14-4606-bfd2-4f5bce91a5f5-kube-api-access-jlxxn\") pod \"kube-proxy-lxzkk\" (UID: \"b880b9d9-7b14-4606-bfd2-4f5bce91a5f5\") " pod="kube-system/kube-proxy-lxzkk"
Oct 9 07:25:54.519826 kubelet[2531]: I1009 07:25:54.519753 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b880b9d9-7b14-4606-bfd2-4f5bce91a5f5-xtables-lock\") pod \"kube-proxy-lxzkk\" (UID: \"b880b9d9-7b14-4606-bfd2-4f5bce91a5f5\") " pod="kube-system/kube-proxy-lxzkk"
Oct 9 07:25:54.519826 kubelet[2531]: I1009 07:25:54.519773 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b880b9d9-7b14-4606-bfd2-4f5bce91a5f5-kube-proxy\") pod \"kube-proxy-lxzkk\" (UID: \"b880b9d9-7b14-4606-bfd2-4f5bce91a5f5\") " pod="kube-system/kube-proxy-lxzkk"
Oct 9 07:25:54.768297 kubelet[2531]: I1009 07:25:54.768156 2531 topology_manager.go:215] "Topology Admit Handler" podUID="5785980b-938a-497c-a1cb-bc4a4dcc5b50" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-h7fkz"
Oct 9 07:25:54.779566 systemd[1]: Created slice kubepods-besteffort-pod5785980b_938a_497c_a1cb_bc4a4dcc5b50.slice - libcontainer container kubepods-besteffort-pod5785980b_938a_497c_a1cb_bc4a4dcc5b50.slice.
Oct 9 07:25:54.812543 kubelet[2531]: E1009 07:25:54.812486 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:54.813493 containerd[1477]: time="2024-10-09T07:25:54.813448328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lxzkk,Uid:b880b9d9-7b14-4606-bfd2-4f5bce91a5f5,Namespace:kube-system,Attempt:0,}"
Oct 9 07:25:54.822557 kubelet[2531]: I1009 07:25:54.822464 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5785980b-938a-497c-a1cb-bc4a4dcc5b50-var-lib-calico\") pod \"tigera-operator-5d56685c77-h7fkz\" (UID: \"5785980b-938a-497c-a1cb-bc4a4dcc5b50\") " pod="tigera-operator/tigera-operator-5d56685c77-h7fkz"
Oct 9 07:25:54.822557 kubelet[2531]: I1009 07:25:54.822524 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mrbr\" (UniqueName: \"kubernetes.io/projected/5785980b-938a-497c-a1cb-bc4a4dcc5b50-kube-api-access-4mrbr\") pod \"tigera-operator-5d56685c77-h7fkz\" (UID: \"5785980b-938a-497c-a1cb-bc4a4dcc5b50\") " pod="tigera-operator/tigera-operator-5d56685c77-h7fkz"
Oct 9 07:25:54.854662 containerd[1477]: time="2024-10-09T07:25:54.854119804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:25:54.854662 containerd[1477]: time="2024-10-09T07:25:54.854220875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:54.854662 containerd[1477]: time="2024-10-09T07:25:54.854243723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:25:54.854662 containerd[1477]: time="2024-10-09T07:25:54.854257616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:54.884388 systemd[1]: Started cri-containerd-b82747169cf37804167e3d25d59a28322059beed35ff1381d978f67a9bfebd93.scope - libcontainer container b82747169cf37804167e3d25d59a28322059beed35ff1381d978f67a9bfebd93.
Oct 9 07:25:54.914188 containerd[1477]: time="2024-10-09T07:25:54.913956312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lxzkk,Uid:b880b9d9-7b14-4606-bfd2-4f5bce91a5f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b82747169cf37804167e3d25d59a28322059beed35ff1381d978f67a9bfebd93\""
Oct 9 07:25:54.918215 kubelet[2531]: E1009 07:25:54.916840 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:54.927667 containerd[1477]: time="2024-10-09T07:25:54.927613916Z" level=info msg="CreateContainer within sandbox \"b82747169cf37804167e3d25d59a28322059beed35ff1381d978f67a9bfebd93\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 9 07:25:54.953892 containerd[1477]: time="2024-10-09T07:25:54.953844544Z" level=info msg="CreateContainer within sandbox \"b82747169cf37804167e3d25d59a28322059beed35ff1381d978f67a9bfebd93\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"356e1eb994eca3599a7c548c43fef6e1d48998ab394e7fb7d66eced3c3733727\""
Oct 9 07:25:54.955823 containerd[1477]: time="2024-10-09T07:25:54.955583813Z" level=info msg="StartContainer for \"356e1eb994eca3599a7c548c43fef6e1d48998ab394e7fb7d66eced3c3733727\""
Oct 9 07:25:54.996465 systemd[1]: Started cri-containerd-356e1eb994eca3599a7c548c43fef6e1d48998ab394e7fb7d66eced3c3733727.scope - libcontainer container 356e1eb994eca3599a7c548c43fef6e1d48998ab394e7fb7d66eced3c3733727.
Oct 9 07:25:55.046289 containerd[1477]: time="2024-10-09T07:25:55.045205240Z" level=info msg="StartContainer for \"356e1eb994eca3599a7c548c43fef6e1d48998ab394e7fb7d66eced3c3733727\" returns successfully"
Oct 9 07:25:55.084884 containerd[1477]: time="2024-10-09T07:25:55.084739143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-h7fkz,Uid:5785980b-938a-497c-a1cb-bc4a4dcc5b50,Namespace:tigera-operator,Attempt:0,}"
Oct 9 07:25:55.129267 containerd[1477]: time="2024-10-09T07:25:55.123407840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:25:55.129267 containerd[1477]: time="2024-10-09T07:25:55.123602258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:55.129267 containerd[1477]: time="2024-10-09T07:25:55.123630424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:25:55.129267 containerd[1477]: time="2024-10-09T07:25:55.123648372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:25:55.162392 systemd[1]: Started cri-containerd-c7524ecf87c50efd2ce96a171a15419c557eab64d206f8a00b4b870babbc1829.scope - libcontainer container c7524ecf87c50efd2ce96a171a15419c557eab64d206f8a00b4b870babbc1829.
Oct 9 07:25:55.239231 containerd[1477]: time="2024-10-09T07:25:55.239045897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-h7fkz,Uid:5785980b-938a-497c-a1cb-bc4a4dcc5b50,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c7524ecf87c50efd2ce96a171a15419c557eab64d206f8a00b4b870babbc1829\""
Oct 9 07:25:55.244906 containerd[1477]: time="2024-10-09T07:25:55.244847540Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 9 07:25:55.456231 kubelet[2531]: E1009 07:25:55.455401 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:25:55.480123 kubelet[2531]: I1009 07:25:55.479921 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lxzkk" podStartSLOduration=1.479698285 podStartE2EDuration="1.479698285s" podCreationTimestamp="2024-10-09 07:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:25:55.4747987 +0000 UTC m=+14.335347495" watchObservedRunningTime="2024-10-09 07:25:55.479698285 +0000 UTC m=+14.340247080"
Oct 9 07:25:56.733700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997559709.mount: Deactivated successfully.
Oct 9 07:25:57.964660 containerd[1477]: time="2024-10-09T07:25:57.964600473Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:25:57.966087 containerd[1477]: time="2024-10-09T07:25:57.965616311Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136537"
Oct 9 07:25:57.966845 containerd[1477]: time="2024-10-09T07:25:57.966787000Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:25:57.970174 containerd[1477]: time="2024-10-09T07:25:57.969877419Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:25:57.970675 containerd[1477]: time="2024-10-09T07:25:57.970613784Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.725723007s"
Oct 9 07:25:57.970786 containerd[1477]: time="2024-10-09T07:25:57.970682186Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\""
Oct 9 07:25:57.974953 containerd[1477]: time="2024-10-09T07:25:57.974895179Z" level=info msg="CreateContainer within sandbox \"c7524ecf87c50efd2ce96a171a15419c557eab64d206f8a00b4b870babbc1829\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 9 07:25:57.996935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756167691.mount: Deactivated successfully.
Oct 9 07:25:58.006313 containerd[1477]: time="2024-10-09T07:25:58.006121047Z" level=info msg="CreateContainer within sandbox \"c7524ecf87c50efd2ce96a171a15419c557eab64d206f8a00b4b870babbc1829\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3d4a71f9b85abfb3df3c16d644c8c79a9d3ab14ddfb1f7a534b093a6c73e14f5\""
Oct 9 07:25:58.008012 containerd[1477]: time="2024-10-09T07:25:58.007728272Z" level=info msg="StartContainer for \"3d4a71f9b85abfb3df3c16d644c8c79a9d3ab14ddfb1f7a534b093a6c73e14f5\""
Oct 9 07:25:58.058650 systemd[1]: run-containerd-runc-k8s.io-3d4a71f9b85abfb3df3c16d644c8c79a9d3ab14ddfb1f7a534b093a6c73e14f5-runc.XLiYn2.mount: Deactivated successfully.
Oct 9 07:25:58.072418 systemd[1]: Started cri-containerd-3d4a71f9b85abfb3df3c16d644c8c79a9d3ab14ddfb1f7a534b093a6c73e14f5.scope - libcontainer container 3d4a71f9b85abfb3df3c16d644c8c79a9d3ab14ddfb1f7a534b093a6c73e14f5.
Oct 9 07:25:58.120164 containerd[1477]: time="2024-10-09T07:25:58.119968694Z" level=info msg="StartContainer for \"3d4a71f9b85abfb3df3c16d644c8c79a9d3ab14ddfb1f7a534b093a6c73e14f5\" returns successfully"
Oct 9 07:25:58.485090 kubelet[2531]: I1009 07:25:58.485029 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-h7fkz" podStartSLOduration=1.756986167 podStartE2EDuration="4.484964467s" podCreationTimestamp="2024-10-09 07:25:54 +0000 UTC" firstStartedPulling="2024-10-09 07:25:55.243348793 +0000 UTC m=+14.103897566" lastFinishedPulling="2024-10-09 07:25:57.971327075 +0000 UTC m=+16.831875866" observedRunningTime="2024-10-09 07:25:58.484744767 +0000 UTC m=+17.345293562" watchObservedRunningTime="2024-10-09 07:25:58.484964467 +0000 UTC m=+17.345513264"
Oct 9 07:26:02.190123 kubelet[2531]: I1009 07:26:02.187544 2531 topology_manager.go:215] "Topology Admit Handler" podUID="b2ba00c8-2e89-4c12-9c18-63e98f65dca7" podNamespace="calico-system" podName="calico-typha-7cb5c7d448-psp59"
Oct 9 07:26:02.219296 systemd[1]: Created slice kubepods-besteffort-podb2ba00c8_2e89_4c12_9c18_63e98f65dca7.slice - libcontainer container kubepods-besteffort-podb2ba00c8_2e89_4c12_9c18_63e98f65dca7.slice.
Oct 9 07:26:02.305847 kubelet[2531]: I1009 07:26:02.305781 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nff6t\" (UniqueName: \"kubernetes.io/projected/b2ba00c8-2e89-4c12-9c18-63e98f65dca7-kube-api-access-nff6t\") pod \"calico-typha-7cb5c7d448-psp59\" (UID: \"b2ba00c8-2e89-4c12-9c18-63e98f65dca7\") " pod="calico-system/calico-typha-7cb5c7d448-psp59"
Oct 9 07:26:02.305847 kubelet[2531]: I1009 07:26:02.305861 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b2ba00c8-2e89-4c12-9c18-63e98f65dca7-typha-certs\") pod \"calico-typha-7cb5c7d448-psp59\" (UID: \"b2ba00c8-2e89-4c12-9c18-63e98f65dca7\") " pod="calico-system/calico-typha-7cb5c7d448-psp59"
Oct 9 07:26:02.306189 kubelet[2531]: I1009 07:26:02.305907 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2ba00c8-2e89-4c12-9c18-63e98f65dca7-tigera-ca-bundle\") pod \"calico-typha-7cb5c7d448-psp59\" (UID: \"b2ba00c8-2e89-4c12-9c18-63e98f65dca7\") " pod="calico-system/calico-typha-7cb5c7d448-psp59"
Oct 9 07:26:02.489116 kubelet[2531]: I1009 07:26:02.488944 2531 topology_manager.go:215] "Topology Admit Handler" podUID="83717185-0866-48aa-a708-26fecd39d616" podNamespace="calico-system" podName="calico-node-zpldc"
Oct 9 07:26:02.516305 systemd[1]: Created slice kubepods-besteffort-pod83717185_0866_48aa_a708_26fecd39d616.slice - libcontainer container kubepods-besteffort-pod83717185_0866_48aa_a708_26fecd39d616.slice.
Oct 9 07:26:02.538846 kubelet[2531]: E1009 07:26:02.538553 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:26:02.562605 containerd[1477]: time="2024-10-09T07:26:02.562546265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cb5c7d448-psp59,Uid:b2ba00c8-2e89-4c12-9c18-63e98f65dca7,Namespace:calico-system,Attempt:0,}"
Oct 9 07:26:02.609398 kubelet[2531]: I1009 07:26:02.607916 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83717185-0866-48aa-a708-26fecd39d616-tigera-ca-bundle\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.609398 kubelet[2531]: I1009 07:26:02.608003 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/83717185-0866-48aa-a708-26fecd39d616-var-run-calico\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.609398 kubelet[2531]: I1009 07:26:02.608044 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/83717185-0866-48aa-a708-26fecd39d616-cni-log-dir\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.609398 kubelet[2531]: I1009 07:26:02.608103 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83717185-0866-48aa-a708-26fecd39d616-lib-modules\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.609398 kubelet[2531]: I1009 07:26:02.608137 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/83717185-0866-48aa-a708-26fecd39d616-policysync\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.611831 kubelet[2531]: I1009 07:26:02.608173 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/83717185-0866-48aa-a708-26fecd39d616-cni-bin-dir\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.611831 kubelet[2531]: I1009 07:26:02.608214 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/83717185-0866-48aa-a708-26fecd39d616-cni-net-dir\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.611831 kubelet[2531]: I1009 07:26:02.608280 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83717185-0866-48aa-a708-26fecd39d616-xtables-lock\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.611831 kubelet[2531]: I1009 07:26:02.608324 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/83717185-0866-48aa-a708-26fecd39d616-node-certs\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.611831 kubelet[2531]: I1009 07:26:02.608375 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/83717185-0866-48aa-a708-26fecd39d616-flexvol-driver-host\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.612064 kubelet[2531]: I1009 07:26:02.608430 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb6dj\" (UniqueName: \"kubernetes.io/projected/83717185-0866-48aa-a708-26fecd39d616-kube-api-access-rb6dj\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.612064 kubelet[2531]: I1009 07:26:02.608464 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/83717185-0866-48aa-a708-26fecd39d616-var-lib-calico\") pod \"calico-node-zpldc\" (UID: \"83717185-0866-48aa-a708-26fecd39d616\") " pod="calico-system/calico-node-zpldc"
Oct 9 07:26:02.638957 containerd[1477]: time="2024-10-09T07:26:02.638443585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:26:02.638957 containerd[1477]: time="2024-10-09T07:26:02.638570383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:26:02.638957 containerd[1477]: time="2024-10-09T07:26:02.638601672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:26:02.638957 containerd[1477]: time="2024-10-09T07:26:02.638623079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:26:02.725506 systemd[1]: Started cri-containerd-5353e5e81ee3abd4333090f399f2d786cbdab6a18d1a205012f27c314428d9da.scope - libcontainer container 5353e5e81ee3abd4333090f399f2d786cbdab6a18d1a205012f27c314428d9da.
Oct 9 07:26:02.750284 kubelet[2531]: E1009 07:26:02.749974 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.750284 kubelet[2531]: W1009 07:26:02.750029 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.750284 kubelet[2531]: E1009 07:26:02.750108 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.767381 kubelet[2531]: E1009 07:26:02.765487 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.767381 kubelet[2531]: W1009 07:26:02.765524 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.767381 kubelet[2531]: E1009 07:26:02.765565 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.813744 kubelet[2531]: I1009 07:26:02.812788 2531 topology_manager.go:215] "Topology Admit Handler" podUID="e1dc5ee5-5114-4f0a-8bec-990b3efcd704" podNamespace="calico-system" podName="csi-node-driver-n8tlj"
Oct 9 07:26:02.816977 kubelet[2531]: E1009 07:26:02.816588 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8tlj" podUID="e1dc5ee5-5114-4f0a-8bec-990b3efcd704"
Oct 9 07:26:02.820041 kubelet[2531]: E1009 07:26:02.818358 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.820041 kubelet[2531]: W1009 07:26:02.818430 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.820041 kubelet[2531]: E1009 07:26:02.818464 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.820041 kubelet[2531]: E1009 07:26:02.818906 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.820041 kubelet[2531]: W1009 07:26:02.818936 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.820041 kubelet[2531]: E1009 07:26:02.818961 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.820041 kubelet[2531]: E1009 07:26:02.819351 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.820041 kubelet[2531]: W1009 07:26:02.819381 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.820041 kubelet[2531]: E1009 07:26:02.819402 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.820041 kubelet[2531]: E1009 07:26:02.819743 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.820730 kubelet[2531]: W1009 07:26:02.819758 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.820730 kubelet[2531]: E1009 07:26:02.820184 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.820730 kubelet[2531]: E1009 07:26:02.820519 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.820730 kubelet[2531]: W1009 07:26:02.820550 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.820730 kubelet[2531]: E1009 07:26:02.820571 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.823754 kubelet[2531]: E1009 07:26:02.821204 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.823754 kubelet[2531]: W1009 07:26:02.821221 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.823754 kubelet[2531]: E1009 07:26:02.821246 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.823754 kubelet[2531]: E1009 07:26:02.821667 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.823754 kubelet[2531]: W1009 07:26:02.821679 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.823754 kubelet[2531]: E1009 07:26:02.821705 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.823754 kubelet[2531]: E1009 07:26:02.822456 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.823754 kubelet[2531]: W1009 07:26:02.822493 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.823754 kubelet[2531]: E1009 07:26:02.822511 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.823754 kubelet[2531]: E1009 07:26:02.822816 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.824362 kubelet[2531]: W1009 07:26:02.822836 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.824362 kubelet[2531]: E1009 07:26:02.822869 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.824362 kubelet[2531]: E1009 07:26:02.824298 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.824362 kubelet[2531]: W1009 07:26:02.824340 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.824516 kubelet[2531]: E1009 07:26:02.824374 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.826226 kubelet[2531]: E1009 07:26:02.824779 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.826226 kubelet[2531]: W1009 07:26:02.824817 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.826226 kubelet[2531]: E1009 07:26:02.824839 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.826226 kubelet[2531]: E1009 07:26:02.825101 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.826226 kubelet[2531]: W1009 07:26:02.825112 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.826226 kubelet[2531]: E1009 07:26:02.825126 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.826226 kubelet[2531]: E1009 07:26:02.825382 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.826226 kubelet[2531]: W1009 07:26:02.825392 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.826226 kubelet[2531]: E1009 07:26:02.825405 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 07:26:02.826226 kubelet[2531]: E1009 07:26:02.826178 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 07:26:02.826900 kubelet[2531]: W1009 07:26:02.826191 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 07:26:02.826900 kubelet[2531]: E1009 07:26:02.826233 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 9 07:26:02.826900 kubelet[2531]: E1009 07:26:02.826493 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:26:02.826900 kubelet[2531]: W1009 07:26:02.826503 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:26:02.826900 kubelet[2531]: E1009 07:26:02.826522 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:26:02.826900 kubelet[2531]: E1009 07:26:02.826757 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:26:02.826900 kubelet[2531]: W1009 07:26:02.826765 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:26:02.826900 kubelet[2531]: E1009 07:26:02.826784 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:26:02.827742 kubelet[2531]: E1009 07:26:02.827267 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:26:02.827742 kubelet[2531]: W1009 07:26:02.827287 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:26:02.827742 kubelet[2531]: E1009 07:26:02.827307 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:26:02.827742 kubelet[2531]: E1009 07:26:02.827518 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:26:02.827742 kubelet[2531]: W1009 07:26:02.827527 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:26:02.827742 kubelet[2531]: E1009 07:26:02.827545 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:26:02.836311 kubelet[2531]: E1009 07:26:02.836253 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:26:02.836311 kubelet[2531]: W1009 07:26:02.836292 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:26:02.836612 kubelet[2531]: E1009 07:26:02.836328 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:26:02.839586 kubelet[2531]: E1009 07:26:02.836846 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:26:02.839586 kubelet[2531]: W1009 07:26:02.836864 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:26:02.839586 kubelet[2531]: E1009 07:26:02.836897 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:26:02.839586 kubelet[2531]: E1009 07:26:02.837454 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:02.840792 containerd[1477]: time="2024-10-09T07:26:02.840724575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zpldc,Uid:83717185-0866-48aa-a708-26fecd39d616,Namespace:calico-system,Attempt:0,}" Oct 9 07:26:02.912814 kubelet[2531]: E1009 07:26:02.912719 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:26:02.912814 kubelet[2531]: W1009 07:26:02.912769 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:26:02.913127 kubelet[2531]: E1009 07:26:02.912806 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:26:02.913468 kubelet[2531]: I1009 07:26:02.913203 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e1dc5ee5-5114-4f0a-8bec-990b3efcd704-varrun\") pod \"csi-node-driver-n8tlj\" (UID: \"e1dc5ee5-5114-4f0a-8bec-990b3efcd704\") " pod="calico-system/csi-node-driver-n8tlj" Oct 9 07:26:02.920028 kubelet[2531]: I1009 07:26:02.920001 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e1dc5ee5-5114-4f0a-8bec-990b3efcd704-socket-dir\") pod \"csi-node-driver-n8tlj\" (UID: \"e1dc5ee5-5114-4f0a-8bec-990b3efcd704\") " pod="calico-system/csi-node-driver-n8tlj" Oct 9 07:26:02.920950 kubelet[2531]: E1009 07:26:02.920800 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:26:02.926340 kubelet[2531]: I1009 07:26:02.926162 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e1dc5ee5-5114-4f0a-8bec-990b3efcd704-registration-dir\") pod \"csi-node-driver-n8tlj\" (UID: \"e1dc5ee5-5114-4f0a-8bec-990b3efcd704\") " pod="calico-system/csi-node-driver-n8tlj" Oct 9 07:26:02.926972 kubelet[2531]: I1009 07:26:02.926926 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq8mg\" (UniqueName: \"kubernetes.io/projected/e1dc5ee5-5114-4f0a-8bec-990b3efcd704-kube-api-access-sq8mg\") pod \"csi-node-driver-n8tlj\" (UID: \"e1dc5ee5-5114-4f0a-8bec-990b3efcd704\") " pod="calico-system/csi-node-driver-n8tlj" Oct 9 07:26:02.929987 kubelet[2531]: E1009 07:26:02.929973 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:26:02.934682 kubelet[2531]: I1009 07:26:02.934438 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e1dc5ee5-5114-4f0a-8bec-990b3efcd704-kubelet-dir\") pod \"csi-node-driver-n8tlj\" (UID: \"e1dc5ee5-5114-4f0a-8bec-990b3efcd704\") " pod="calico-system/csi-node-driver-n8tlj" Oct 9 07:26:02.969845 containerd[1477]: time="2024-10-09T07:26:02.969427565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:26:02.969845 containerd[1477]: time="2024-10-09T07:26:02.969538127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:26:02.969845 containerd[1477]: time="2024-10-09T07:26:02.969561244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:26:02.969845 containerd[1477]: time="2024-10-09T07:26:02.969575146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:26:03.018767 systemd[1]: Started cri-containerd-0cc4f48d4ba6b655fb4494f1f5b09c50f7555b72b377fe8a888adf51971a95d1.scope - libcontainer container 0cc4f48d4ba6b655fb4494f1f5b09c50f7555b72b377fe8a888adf51971a95d1. 
Oct 9 07:26:03.063055 kubelet[2531]: E1009 07:26:03.062938 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:26:03.063554 kubelet[2531]: E1009 07:26:03.063402 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:26:03.063554 kubelet[2531]: W1009 07:26:03.063418 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:26:03.063554 kubelet[2531]: E1009 07:26:03.063446 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:26:03.064466 kubelet[2531]: E1009 07:26:03.064442 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:26:03.064466 kubelet[2531]: W1009 07:26:03.064463 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:26:03.064617 kubelet[2531]: E1009 07:26:03.064486 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:26:03.088906 kubelet[2531]: E1009 07:26:03.088849 2531 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:26:03.088906 kubelet[2531]: W1009 07:26:03.088896 2531 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:26:03.089232 kubelet[2531]: E1009 07:26:03.088933 2531 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:26:03.190453 containerd[1477]: time="2024-10-09T07:26:03.190345364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zpldc,Uid:83717185-0866-48aa-a708-26fecd39d616,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cc4f48d4ba6b655fb4494f1f5b09c50f7555b72b377fe8a888adf51971a95d1\"" Oct 9 07:26:03.194124 kubelet[2531]: E1009 07:26:03.194057 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:03.202468 containerd[1477]: time="2024-10-09T07:26:03.202395680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:26:03.358116 containerd[1477]: time="2024-10-09T07:26:03.357884860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cb5c7d448-psp59,Uid:b2ba00c8-2e89-4c12-9c18-63e98f65dca7,Namespace:calico-system,Attempt:0,} returns sandbox id \"5353e5e81ee3abd4333090f399f2d786cbdab6a18d1a205012f27c314428d9da\"" Oct 9 07:26:03.371787 kubelet[2531]: E1009 07:26:03.371733 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 
67.207.67.3 67.207.67.2" Oct 9 07:26:03.434759 systemd[1]: run-containerd-runc-k8s.io-5353e5e81ee3abd4333090f399f2d786cbdab6a18d1a205012f27c314428d9da-runc.ebrbBx.mount: Deactivated successfully. Oct 9 07:26:04.362030 kubelet[2531]: E1009 07:26:04.360672 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8tlj" podUID="e1dc5ee5-5114-4f0a-8bec-990b3efcd704" Oct 9 07:26:04.694388 containerd[1477]: time="2024-10-09T07:26:04.693121444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:04.696027 containerd[1477]: time="2024-10-09T07:26:04.695968133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 07:26:04.697128 containerd[1477]: time="2024-10-09T07:26:04.697057554Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:04.701850 containerd[1477]: time="2024-10-09T07:26:04.701801717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:04.705133 containerd[1477]: time="2024-10-09T07:26:04.705050378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.502585285s" Oct 9 07:26:04.705133 containerd[1477]: time="2024-10-09T07:26:04.705128536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 07:26:04.708220 containerd[1477]: time="2024-10-09T07:26:04.707850509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:26:04.714134 containerd[1477]: time="2024-10-09T07:26:04.714061336Z" level=info msg="CreateContainer within sandbox \"0cc4f48d4ba6b655fb4494f1f5b09c50f7555b72b377fe8a888adf51971a95d1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:26:04.742754 containerd[1477]: time="2024-10-09T07:26:04.742587548Z" level=info msg="CreateContainer within sandbox \"0cc4f48d4ba6b655fb4494f1f5b09c50f7555b72b377fe8a888adf51971a95d1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"796c97d169816862fbc87534a8fb2d5b0bcdd65a028e79615156c6fe34ce72c1\"" Oct 9 07:26:04.743708 containerd[1477]: time="2024-10-09T07:26:04.743679577Z" level=info msg="StartContainer for \"796c97d169816862fbc87534a8fb2d5b0bcdd65a028e79615156c6fe34ce72c1\"" Oct 9 07:26:04.823563 systemd[1]: Started cri-containerd-796c97d169816862fbc87534a8fb2d5b0bcdd65a028e79615156c6fe34ce72c1.scope - libcontainer container 796c97d169816862fbc87534a8fb2d5b0bcdd65a028e79615156c6fe34ce72c1. Oct 9 07:26:04.880631 containerd[1477]: time="2024-10-09T07:26:04.880578318Z" level=info msg="StartContainer for \"796c97d169816862fbc87534a8fb2d5b0bcdd65a028e79615156c6fe34ce72c1\" returns successfully" Oct 9 07:26:04.905216 systemd[1]: cri-containerd-796c97d169816862fbc87534a8fb2d5b0bcdd65a028e79615156c6fe34ce72c1.scope: Deactivated successfully. 
Oct 9 07:26:04.968520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-796c97d169816862fbc87534a8fb2d5b0bcdd65a028e79615156c6fe34ce72c1-rootfs.mount: Deactivated successfully. Oct 9 07:26:04.989583 containerd[1477]: time="2024-10-09T07:26:04.989372808Z" level=info msg="shim disconnected" id=796c97d169816862fbc87534a8fb2d5b0bcdd65a028e79615156c6fe34ce72c1 namespace=k8s.io Oct 9 07:26:04.991242 containerd[1477]: time="2024-10-09T07:26:04.990122570Z" level=warning msg="cleaning up after shim disconnected" id=796c97d169816862fbc87534a8fb2d5b0bcdd65a028e79615156c6fe34ce72c1 namespace=k8s.io Oct 9 07:26:04.991242 containerd[1477]: time="2024-10-09T07:26:04.990164777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:26:05.515090 kubelet[2531]: E1009 07:26:05.515038 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:06.380008 kubelet[2531]: E1009 07:26:06.379219 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8tlj" podUID="e1dc5ee5-5114-4f0a-8bec-990b3efcd704" Oct 9 07:26:07.312900 containerd[1477]: time="2024-10-09T07:26:07.312845156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:07.316896 containerd[1477]: time="2024-10-09T07:26:07.315218885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 07:26:07.316896 containerd[1477]: time="2024-10-09T07:26:07.315499716Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:07.317628 containerd[1477]: time="2024-10-09T07:26:07.317591187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:07.318814 containerd[1477]: time="2024-10-09T07:26:07.318780059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.610889116s" Oct 9 07:26:07.319061 containerd[1477]: time="2024-10-09T07:26:07.319033994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:26:07.320309 containerd[1477]: time="2024-10-09T07:26:07.320273373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 07:26:07.350585 containerd[1477]: time="2024-10-09T07:26:07.350536003Z" level=info msg="CreateContainer within sandbox \"5353e5e81ee3abd4333090f399f2d786cbdab6a18d1a205012f27c314428d9da\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:26:07.378332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2410729211.mount: Deactivated successfully. 
Oct 9 07:26:07.393730 containerd[1477]: time="2024-10-09T07:26:07.393572816Z" level=info msg="CreateContainer within sandbox \"5353e5e81ee3abd4333090f399f2d786cbdab6a18d1a205012f27c314428d9da\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"45f9005bf8f3a30ad0d48015086877b7ee1a9ba0a4b07a62b6cbd96a85e1a05e\"" Oct 9 07:26:07.397800 containerd[1477]: time="2024-10-09T07:26:07.397140245Z" level=info msg="StartContainer for \"45f9005bf8f3a30ad0d48015086877b7ee1a9ba0a4b07a62b6cbd96a85e1a05e\"" Oct 9 07:26:07.453576 systemd[1]: Started cri-containerd-45f9005bf8f3a30ad0d48015086877b7ee1a9ba0a4b07a62b6cbd96a85e1a05e.scope - libcontainer container 45f9005bf8f3a30ad0d48015086877b7ee1a9ba0a4b07a62b6cbd96a85e1a05e. Oct 9 07:26:07.545749 containerd[1477]: time="2024-10-09T07:26:07.545650929Z" level=info msg="StartContainer for \"45f9005bf8f3a30ad0d48015086877b7ee1a9ba0a4b07a62b6cbd96a85e1a05e\" returns successfully" Oct 9 07:26:08.360478 kubelet[2531]: E1009 07:26:08.360405 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8tlj" podUID="e1dc5ee5-5114-4f0a-8bec-990b3efcd704" Oct 9 07:26:08.548847 kubelet[2531]: E1009 07:26:08.548482 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:08.571546 kubelet[2531]: I1009 07:26:08.571479 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7cb5c7d448-psp59" podStartSLOduration=2.625836599 podStartE2EDuration="6.569218868s" podCreationTimestamp="2024-10-09 07:26:02 +0000 UTC" firstStartedPulling="2024-10-09 07:26:03.376134056 +0000 UTC m=+22.236682843" lastFinishedPulling="2024-10-09 
07:26:07.319516337 +0000 UTC m=+26.180065112" observedRunningTime="2024-10-09 07:26:08.566341463 +0000 UTC m=+27.426890256" watchObservedRunningTime="2024-10-09 07:26:08.569218868 +0000 UTC m=+27.429767655" Oct 9 07:26:09.550186 kubelet[2531]: I1009 07:26:09.549815 2531 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:26:09.552100 kubelet[2531]: E1009 07:26:09.551037 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:10.361106 kubelet[2531]: E1009 07:26:10.360410 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8tlj" podUID="e1dc5ee5-5114-4f0a-8bec-990b3efcd704" Oct 9 07:26:12.066175 containerd[1477]: time="2024-10-09T07:26:12.066100929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:12.067733 containerd[1477]: time="2024-10-09T07:26:12.067234497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 07:26:12.084855 containerd[1477]: time="2024-10-09T07:26:12.083867452Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:12.087494 containerd[1477]: time="2024-10-09T07:26:12.087302340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:12.088525 containerd[1477]: time="2024-10-09T07:26:12.088472998Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.768032877s" Oct 9 07:26:12.088525 containerd[1477]: time="2024-10-09T07:26:12.088511356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 07:26:12.102768 containerd[1477]: time="2024-10-09T07:26:12.102692944Z" level=info msg="CreateContainer within sandbox \"0cc4f48d4ba6b655fb4494f1f5b09c50f7555b72b377fe8a888adf51971a95d1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 07:26:12.145693 containerd[1477]: time="2024-10-09T07:26:12.145642153Z" level=info msg="CreateContainer within sandbox \"0cc4f48d4ba6b655fb4494f1f5b09c50f7555b72b377fe8a888adf51971a95d1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"28e1b42807dab7183a8e68e5a09d660051ddcc85949a80663bf276c3b14a3395\"" Oct 9 07:26:12.146963 containerd[1477]: time="2024-10-09T07:26:12.146927242Z" level=info msg="StartContainer for \"28e1b42807dab7183a8e68e5a09d660051ddcc85949a80663bf276c3b14a3395\"" Oct 9 07:26:12.235036 systemd[1]: run-containerd-runc-k8s.io-28e1b42807dab7183a8e68e5a09d660051ddcc85949a80663bf276c3b14a3395-runc.Fj9EgM.mount: Deactivated successfully. Oct 9 07:26:12.242355 systemd[1]: Started cri-containerd-28e1b42807dab7183a8e68e5a09d660051ddcc85949a80663bf276c3b14a3395.scope - libcontainer container 28e1b42807dab7183a8e68e5a09d660051ddcc85949a80663bf276c3b14a3395. 
Oct 9 07:26:12.300316 containerd[1477]: time="2024-10-09T07:26:12.300232540Z" level=info msg="StartContainer for \"28e1b42807dab7183a8e68e5a09d660051ddcc85949a80663bf276c3b14a3395\" returns successfully" Oct 9 07:26:12.361338 kubelet[2531]: E1009 07:26:12.360132 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8tlj" podUID="e1dc5ee5-5114-4f0a-8bec-990b3efcd704" Oct 9 07:26:12.564022 kubelet[2531]: E1009 07:26:12.563923 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:12.935200 systemd[1]: cri-containerd-28e1b42807dab7183a8e68e5a09d660051ddcc85949a80663bf276c3b14a3395.scope: Deactivated successfully. Oct 9 07:26:12.978659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28e1b42807dab7183a8e68e5a09d660051ddcc85949a80663bf276c3b14a3395-rootfs.mount: Deactivated successfully. 
Oct 9 07:26:12.983957 containerd[1477]: time="2024-10-09T07:26:12.983833232Z" level=info msg="shim disconnected" id=28e1b42807dab7183a8e68e5a09d660051ddcc85949a80663bf276c3b14a3395 namespace=k8s.io Oct 9 07:26:12.984397 containerd[1477]: time="2024-10-09T07:26:12.984205683Z" level=warning msg="cleaning up after shim disconnected" id=28e1b42807dab7183a8e68e5a09d660051ddcc85949a80663bf276c3b14a3395 namespace=k8s.io Oct 9 07:26:12.984397 containerd[1477]: time="2024-10-09T07:26:12.984228306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:26:13.005731 containerd[1477]: time="2024-10-09T07:26:13.005664895Z" level=warning msg="cleanup warnings time=\"2024-10-09T07:26:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 07:26:13.007135 kubelet[2531]: I1009 07:26:13.006995 2531 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 07:26:13.047526 kubelet[2531]: I1009 07:26:13.047474 2531 topology_manager.go:215] "Topology Admit Handler" podUID="8c66a0a7-9b6e-49c4-8126-0cf24d852972" podNamespace="kube-system" podName="coredns-76f75df574-5blrg" Oct 9 07:26:13.053681 kubelet[2531]: I1009 07:26:13.053371 2531 topology_manager.go:215] "Topology Admit Handler" podUID="04bea8bb-885b-40da-9413-78b8300274e6" podNamespace="kube-system" podName="coredns-76f75df574-zlsrg" Oct 9 07:26:13.060111 kubelet[2531]: I1009 07:26:13.058308 2531 topology_manager.go:215] "Topology Admit Handler" podUID="a2a5b674-9417-4e0a-953e-24c56675ec4e" podNamespace="calico-system" podName="calico-kube-controllers-6dc4f97766-xj7nc" Oct 9 07:26:13.067965 systemd[1]: Created slice kubepods-burstable-pod8c66a0a7_9b6e_49c4_8126_0cf24d852972.slice - libcontainer container kubepods-burstable-pod8c66a0a7_9b6e_49c4_8126_0cf24d852972.slice. 
Oct 9 07:26:13.083762 systemd[1]: Created slice kubepods-burstable-pod04bea8bb_885b_40da_9413_78b8300274e6.slice - libcontainer container kubepods-burstable-pod04bea8bb_885b_40da_9413_78b8300274e6.slice. Oct 9 07:26:13.089707 systemd[1]: Created slice kubepods-besteffort-poda2a5b674_9417_4e0a_953e_24c56675ec4e.slice - libcontainer container kubepods-besteffort-poda2a5b674_9417_4e0a_953e_24c56675ec4e.slice. Oct 9 07:26:13.233403 kubelet[2531]: I1009 07:26:13.233319 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgnnz\" (UniqueName: \"kubernetes.io/projected/8c66a0a7-9b6e-49c4-8126-0cf24d852972-kube-api-access-xgnnz\") pod \"coredns-76f75df574-5blrg\" (UID: \"8c66a0a7-9b6e-49c4-8126-0cf24d852972\") " pod="kube-system/coredns-76f75df574-5blrg" Oct 9 07:26:13.233403 kubelet[2531]: I1009 07:26:13.233385 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2a5b674-9417-4e0a-953e-24c56675ec4e-tigera-ca-bundle\") pod \"calico-kube-controllers-6dc4f97766-xj7nc\" (UID: \"a2a5b674-9417-4e0a-953e-24c56675ec4e\") " pod="calico-system/calico-kube-controllers-6dc4f97766-xj7nc" Oct 9 07:26:13.233403 kubelet[2531]: I1009 07:26:13.233410 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwrmv\" (UniqueName: \"kubernetes.io/projected/a2a5b674-9417-4e0a-953e-24c56675ec4e-kube-api-access-qwrmv\") pod \"calico-kube-controllers-6dc4f97766-xj7nc\" (UID: \"a2a5b674-9417-4e0a-953e-24c56675ec4e\") " pod="calico-system/calico-kube-controllers-6dc4f97766-xj7nc" Oct 9 07:26:13.233659 kubelet[2531]: I1009 07:26:13.233438 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04bea8bb-885b-40da-9413-78b8300274e6-config-volume\") pod 
\"coredns-76f75df574-zlsrg\" (UID: \"04bea8bb-885b-40da-9413-78b8300274e6\") " pod="kube-system/coredns-76f75df574-zlsrg" Oct 9 07:26:13.233659 kubelet[2531]: I1009 07:26:13.233468 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c66a0a7-9b6e-49c4-8126-0cf24d852972-config-volume\") pod \"coredns-76f75df574-5blrg\" (UID: \"8c66a0a7-9b6e-49c4-8126-0cf24d852972\") " pod="kube-system/coredns-76f75df574-5blrg" Oct 9 07:26:13.233659 kubelet[2531]: I1009 07:26:13.233489 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltt55\" (UniqueName: \"kubernetes.io/projected/04bea8bb-885b-40da-9413-78b8300274e6-kube-api-access-ltt55\") pod \"coredns-76f75df574-zlsrg\" (UID: \"04bea8bb-885b-40da-9413-78b8300274e6\") " pod="kube-system/coredns-76f75df574-zlsrg" Oct 9 07:26:13.375801 kubelet[2531]: E1009 07:26:13.374355 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:13.377372 containerd[1477]: time="2024-10-09T07:26:13.377290997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5blrg,Uid:8c66a0a7-9b6e-49c4-8126-0cf24d852972,Namespace:kube-system,Attempt:0,}" Oct 9 07:26:13.388997 kubelet[2531]: E1009 07:26:13.388859 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:13.392231 containerd[1477]: time="2024-10-09T07:26:13.391617979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zlsrg,Uid:04bea8bb-885b-40da-9413-78b8300274e6,Namespace:kube-system,Attempt:0,}" Oct 9 07:26:13.402566 containerd[1477]: time="2024-10-09T07:26:13.402504802Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc4f97766-xj7nc,Uid:a2a5b674-9417-4e0a-953e-24c56675ec4e,Namespace:calico-system,Attempt:0,}" Oct 9 07:26:13.570800 kubelet[2531]: E1009 07:26:13.570660 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:13.576261 containerd[1477]: time="2024-10-09T07:26:13.575651406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 07:26:13.724527 containerd[1477]: time="2024-10-09T07:26:13.724435928Z" level=error msg="Failed to destroy network for sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:26:13.732081 containerd[1477]: time="2024-10-09T07:26:13.731886100Z" level=error msg="encountered an error cleaning up failed sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:26:13.732620 containerd[1477]: time="2024-10-09T07:26:13.732513533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zlsrg,Uid:04bea8bb-885b-40da-9413-78b8300274e6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:26:13.733522 kubelet[2531]: E1009 07:26:13.732971 2531 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:26:13.733522 kubelet[2531]: E1009 07:26:13.733047 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zlsrg" Oct 9 07:26:13.733522 kubelet[2531]: E1009 07:26:13.733079 2531 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zlsrg" Oct 9 07:26:13.733654 kubelet[2531]: E1009 07:26:13.733146 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zlsrg_kube-system(04bea8bb-885b-40da-9413-78b8300274e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zlsrg_kube-system(04bea8bb-885b-40da-9413-78b8300274e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zlsrg" podUID="04bea8bb-885b-40da-9413-78b8300274e6"
Oct 9 07:26:13.749306 containerd[1477]: time="2024-10-09T07:26:13.749238080Z" level=error msg="Failed to destroy network for sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:13.750742 containerd[1477]: time="2024-10-09T07:26:13.749825217Z" level=error msg="encountered an error cleaning up failed sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:13.750742 containerd[1477]: time="2024-10-09T07:26:13.749915231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5blrg,Uid:8c66a0a7-9b6e-49c4-8126-0cf24d852972,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:13.750966 kubelet[2531]: E1009 07:26:13.750232 2531 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:13.750966 kubelet[2531]: E1009 07:26:13.750324 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5blrg"
Oct 9 07:26:13.750966 kubelet[2531]: E1009 07:26:13.750358 2531 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5blrg"
Oct 9 07:26:13.751135 kubelet[2531]: E1009 07:26:13.750442 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-5blrg_kube-system(8c66a0a7-9b6e-49c4-8126-0cf24d852972)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-5blrg_kube-system(8c66a0a7-9b6e-49c4-8126-0cf24d852972)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5blrg" podUID="8c66a0a7-9b6e-49c4-8126-0cf24d852972"
Oct 9 07:26:13.752387 containerd[1477]: time="2024-10-09T07:26:13.752329342Z" level=error msg="Failed to destroy network for sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:13.754081 containerd[1477]: time="2024-10-09T07:26:13.754011666Z" level=error msg="encountered an error cleaning up failed sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:13.754213 containerd[1477]: time="2024-10-09T07:26:13.754126693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc4f97766-xj7nc,Uid:a2a5b674-9417-4e0a-953e-24c56675ec4e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:13.754622 kubelet[2531]: E1009 07:26:13.754417 2531 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:13.754622 kubelet[2531]: E1009 07:26:13.754477 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc4f97766-xj7nc"
Oct 9 07:26:13.754622 kubelet[2531]: E1009 07:26:13.754498 2531 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dc4f97766-xj7nc"
Oct 9 07:26:13.754886 kubelet[2531]: E1009 07:26:13.754573 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dc4f97766-xj7nc_calico-system(a2a5b674-9417-4e0a-953e-24c56675ec4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dc4f97766-xj7nc_calico-system(a2a5b674-9417-4e0a-953e-24c56675ec4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dc4f97766-xj7nc" podUID="a2a5b674-9417-4e0a-953e-24c56675ec4e"
Oct 9 07:26:14.352242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7-shm.mount: Deactivated successfully.
Oct 9 07:26:14.352350 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208-shm.mount: Deactivated successfully.
Oct 9 07:26:14.368604 systemd[1]: Created slice kubepods-besteffort-pode1dc5ee5_5114_4f0a_8bec_990b3efcd704.slice - libcontainer container kubepods-besteffort-pode1dc5ee5_5114_4f0a_8bec_990b3efcd704.slice.
Oct 9 07:26:14.372023 containerd[1477]: time="2024-10-09T07:26:14.371955978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8tlj,Uid:e1dc5ee5-5114-4f0a-8bec-990b3efcd704,Namespace:calico-system,Attempt:0,}"
Oct 9 07:26:14.503106 containerd[1477]: time="2024-10-09T07:26:14.501177407Z" level=error msg="Failed to destroy network for sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:14.507177 containerd[1477]: time="2024-10-09T07:26:14.504041664Z" level=error msg="encountered an error cleaning up failed sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:14.507177 containerd[1477]: time="2024-10-09T07:26:14.504170700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8tlj,Uid:e1dc5ee5-5114-4f0a-8bec-990b3efcd704,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:14.507577 kubelet[2531]: E1009 07:26:14.504614 2531 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:14.507577 kubelet[2531]: E1009 07:26:14.504672 2531 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8tlj"
Oct 9 07:26:14.507577 kubelet[2531]: E1009 07:26:14.504710 2531 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8tlj"
Oct 9 07:26:14.507373 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c-shm.mount: Deactivated successfully.
Oct 9 07:26:14.509251 kubelet[2531]: E1009 07:26:14.504772 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n8tlj_calico-system(e1dc5ee5-5114-4f0a-8bec-990b3efcd704)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n8tlj_calico-system(e1dc5ee5-5114-4f0a-8bec-990b3efcd704)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n8tlj" podUID="e1dc5ee5-5114-4f0a-8bec-990b3efcd704"
Oct 9 07:26:14.574386 kubelet[2531]: I1009 07:26:14.574335 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7"
Oct 9 07:26:14.581658 kubelet[2531]: I1009 07:26:14.581125 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3"
Oct 9 07:26:14.581956 containerd[1477]: time="2024-10-09T07:26:14.581867977Z" level=info msg="StopPodSandbox for \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\""
Oct 9 07:26:14.582896 containerd[1477]: time="2024-10-09T07:26:14.581883986Z" level=info msg="StopPodSandbox for \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\""
Oct 9 07:26:14.590432 containerd[1477]: time="2024-10-09T07:26:14.590366462Z" level=info msg="Ensure that sandbox 02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3 in task-service has been cleanup successfully"
Oct 9 07:26:14.591092 kubelet[2531]: I1009 07:26:14.590969 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208"
Oct 9 07:26:14.592230 containerd[1477]: time="2024-10-09T07:26:14.591511346Z" level=info msg="Ensure that sandbox 92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7 in task-service has been cleanup successfully"
Oct 9 07:26:14.594266 containerd[1477]: time="2024-10-09T07:26:14.594218827Z" level=info msg="StopPodSandbox for \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\""
Oct 9 07:26:14.595497 containerd[1477]: time="2024-10-09T07:26:14.595444436Z" level=info msg="Ensure that sandbox 5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208 in task-service has been cleanup successfully"
Oct 9 07:26:14.597269 kubelet[2531]: I1009 07:26:14.596797 2531 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c"
Oct 9 07:26:14.598191 containerd[1477]: time="2024-10-09T07:26:14.598061447Z" level=info msg="StopPodSandbox for \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\""
Oct 9 07:26:14.599033 containerd[1477]: time="2024-10-09T07:26:14.598993673Z" level=info msg="Ensure that sandbox 5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c in task-service has been cleanup successfully"
Oct 9 07:26:14.695913 containerd[1477]: time="2024-10-09T07:26:14.694553998Z" level=error msg="StopPodSandbox for \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\" failed" error="failed to destroy network for sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:14.696323 kubelet[2531]: E1009 07:26:14.695525 2531 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3"
Oct 9 07:26:14.696323 kubelet[2531]: E1009 07:26:14.695660 2531 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3"}
Oct 9 07:26:14.696323 kubelet[2531]: E1009 07:26:14.695728 2531 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2a5b674-9417-4e0a-953e-24c56675ec4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:26:14.696323 kubelet[2531]: E1009 07:26:14.695760 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2a5b674-9417-4e0a-953e-24c56675ec4e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dc4f97766-xj7nc" podUID="a2a5b674-9417-4e0a-953e-24c56675ec4e"
Oct 9 07:26:14.706684 containerd[1477]: time="2024-10-09T07:26:14.706375672Z" level=error msg="StopPodSandbox for \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\" failed" error="failed to destroy network for sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:14.707021 kubelet[2531]: E1009 07:26:14.706962 2531 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208"
Oct 9 07:26:14.708351 kubelet[2531]: E1009 07:26:14.707056 2531 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208"}
Oct 9 07:26:14.708351 kubelet[2531]: E1009 07:26:14.707231 2531 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8c66a0a7-9b6e-49c4-8126-0cf24d852972\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:26:14.708351 kubelet[2531]: E1009 07:26:14.707278 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8c66a0a7-9b6e-49c4-8126-0cf24d852972\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5blrg" podUID="8c66a0a7-9b6e-49c4-8126-0cf24d852972"
Oct 9 07:26:14.709119 containerd[1477]: time="2024-10-09T07:26:14.709009582Z" level=error msg="StopPodSandbox for \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\" failed" error="failed to destroy network for sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:14.709482 kubelet[2531]: E1009 07:26:14.709450 2531 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7"
Oct 9 07:26:14.709557 kubelet[2531]: E1009 07:26:14.709509 2531 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7"}
Oct 9 07:26:14.709557 kubelet[2531]: E1009 07:26:14.709545 2531 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04bea8bb-885b-40da-9413-78b8300274e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:26:14.709648 kubelet[2531]: E1009 07:26:14.709574 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04bea8bb-885b-40da-9413-78b8300274e6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zlsrg" podUID="04bea8bb-885b-40da-9413-78b8300274e6"
Oct 9 07:26:14.716485 containerd[1477]: time="2024-10-09T07:26:14.716378658Z" level=error msg="StopPodSandbox for \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\" failed" error="failed to destroy network for sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 9 07:26:14.716959 kubelet[2531]: E1009 07:26:14.716913 2531 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c"
Oct 9 07:26:14.717178 kubelet[2531]: E1009 07:26:14.716976 2531 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c"}
Oct 9 07:26:14.717178 kubelet[2531]: E1009 07:26:14.717016 2531 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1dc5ee5-5114-4f0a-8bec-990b3efcd704\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 9 07:26:14.717178 kubelet[2531]: E1009 07:26:14.717053 2531 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1dc5ee5-5114-4f0a-8bec-990b3efcd704\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n8tlj" podUID="e1dc5ee5-5114-4f0a-8bec-990b3efcd704"
Oct 9 07:26:20.281932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount271084255.mount: Deactivated successfully.
Oct 9 07:26:20.365172 containerd[1477]: time="2024-10-09T07:26:20.358744489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:26:20.369833 containerd[1477]: time="2024-10-09T07:26:20.359614692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564"
Oct 9 07:26:20.370877 containerd[1477]: time="2024-10-09T07:26:20.370833041Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:26:20.373382 containerd[1477]: time="2024-10-09T07:26:20.373339524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:26:20.374783 containerd[1477]: time="2024-10-09T07:26:20.374725232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 6.799023954s"
Oct 9 07:26:20.374783 containerd[1477]: time="2024-10-09T07:26:20.374782435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\""
Oct 9 07:26:20.424924 containerd[1477]: time="2024-10-09T07:26:20.424868634Z" level=info msg="CreateContainer within sandbox \"0cc4f48d4ba6b655fb4494f1f5b09c50f7555b72b377fe8a888adf51971a95d1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Oct 9 07:26:20.496707 containerd[1477]: time="2024-10-09T07:26:20.496532052Z" level=info msg="CreateContainer within sandbox \"0cc4f48d4ba6b655fb4494f1f5b09c50f7555b72b377fe8a888adf51971a95d1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"883e114a4b746e2c085b677095b0765346719b74375e5b207c133db16fb58aae\""
Oct 9 07:26:20.499168 containerd[1477]: time="2024-10-09T07:26:20.497576076Z" level=info msg="StartContainer for \"883e114a4b746e2c085b677095b0765346719b74375e5b207c133db16fb58aae\""
Oct 9 07:26:20.664408 systemd[1]: Started cri-containerd-883e114a4b746e2c085b677095b0765346719b74375e5b207c133db16fb58aae.scope - libcontainer container 883e114a4b746e2c085b677095b0765346719b74375e5b207c133db16fb58aae.
Oct 9 07:26:20.769138 containerd[1477]: time="2024-10-09T07:26:20.768191516Z" level=info msg="StartContainer for \"883e114a4b746e2c085b677095b0765346719b74375e5b207c133db16fb58aae\" returns successfully"
Oct 9 07:26:20.894621 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Oct 9 07:26:20.895992 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Oct 9 07:26:21.665517 kubelet[2531]: E1009 07:26:21.663021 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:26:21.704945 kubelet[2531]: I1009 07:26:21.704869 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-zpldc" podStartSLOduration=2.523590889 podStartE2EDuration="19.697761299s" podCreationTimestamp="2024-10-09 07:26:02 +0000 UTC" firstStartedPulling="2024-10-09 07:26:03.201127523 +0000 UTC m=+22.061676334" lastFinishedPulling="2024-10-09 07:26:20.375297969 +0000 UTC m=+39.235846744" observedRunningTime="2024-10-09 07:26:21.694314634 +0000 UTC m=+40.554863432" watchObservedRunningTime="2024-10-09 07:26:21.697761299 +0000 UTC m=+40.558310262"
Oct 9 07:26:21.797738 systemd[1]: run-containerd-runc-k8s.io-883e114a4b746e2c085b677095b0765346719b74375e5b207c133db16fb58aae-runc.XfEUuC.mount: Deactivated successfully.
Oct 9 07:26:22.048590 systemd[1]: Started sshd@7-209.38.154.162:22-147.75.109.163:57132.service - OpenSSH per-connection server daemon (147.75.109.163:57132).
Oct 9 07:26:22.174419 sshd[3548]: Accepted publickey for core from 147.75.109.163 port 57132 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:26:22.177820 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:26:22.188118 systemd-logind[1452]: New session 8 of user core.
Oct 9 07:26:22.196549 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 9 07:26:22.389135 sshd[3548]: pam_unix(sshd:session): session closed for user core
Oct 9 07:26:22.394783 systemd[1]: sshd@7-209.38.154.162:22-147.75.109.163:57132.service: Deactivated successfully.
Oct 9 07:26:22.397875 systemd[1]: session-8.scope: Deactivated successfully.
Oct 9 07:26:22.402049 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit.
Oct 9 07:26:22.405437 systemd-logind[1452]: Removed session 8.
Oct 9 07:26:22.667276 kubelet[2531]: E1009 07:26:22.666663 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:26:23.670014 kubelet[2531]: E1009 07:26:23.669477 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:26:23.698249 systemd[1]: run-containerd-runc-k8s.io-883e114a4b746e2c085b677095b0765346719b74375e5b207c133db16fb58aae-runc.Kaitme.mount: Deactivated successfully.
Oct 9 07:26:23.742345 kubelet[2531]: I1009 07:26:23.741222 2531 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 07:26:23.742345 kubelet[2531]: E1009 07:26:23.742224 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:26:24.580125 kernel: bpftool[3749]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Oct 9 07:26:24.675141 kubelet[2531]: E1009 07:26:24.675033 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:26:24.905570 systemd-networkd[1375]: vxlan.calico: Link UP
Oct 9 07:26:24.905579 systemd-networkd[1375]: vxlan.calico: Gained carrier
Oct 9 07:26:26.151355 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL
Oct 9 07:26:26.371278 containerd[1477]: time="2024-10-09T07:26:26.371208180Z" level=info msg="StopPodSandbox for \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\""
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.475 [INFO][3838] k8s.go 608: Cleaning up netns ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c"
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.477 [INFO][3838] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" iface="eth0" netns="/var/run/netns/cni-096fe5c8-52fc-a04b-652f-cd28fe5942f6"
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.478 [INFO][3838] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" iface="eth0" netns="/var/run/netns/cni-096fe5c8-52fc-a04b-652f-cd28fe5942f6"
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.478 [INFO][3838] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" iface="eth0" netns="/var/run/netns/cni-096fe5c8-52fc-a04b-652f-cd28fe5942f6"
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.478 [INFO][3838] k8s.go 615: Releasing IP address(es) ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c"
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.479 [INFO][3838] utils.go 188: Calico CNI releasing IP address ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c"
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.638 [INFO][3844] ipam_plugin.go 417: Releasing address using handleID ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" HandleID="k8s-pod-network.5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0"
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.639 [INFO][3844] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.639 [INFO][3844] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.652 [WARNING][3844] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" HandleID="k8s-pod-network.5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0"
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.652 [INFO][3844] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" HandleID="k8s-pod-network.5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0"
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.655 [INFO][3844] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:26:26.659600 containerd[1477]: 2024-10-09 07:26:26.657 [INFO][3838] k8s.go 621: Teardown processing complete. ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c"
Oct 9 07:26:26.663418 containerd[1477]: time="2024-10-09T07:26:26.660382166Z" level=info msg="TearDown network for sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\" successfully"
Oct 9 07:26:26.663418 containerd[1477]: time="2024-10-09T07:26:26.660422776Z" level=info msg="StopPodSandbox for \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\" returns successfully"
Oct 9 07:26:26.668417 systemd[1]: run-netns-cni\x2d096fe5c8\x2d52fc\x2da04b\x2d652f\x2dcd28fe5942f6.mount: Deactivated successfully.
Oct 9 07:26:26.734866 containerd[1477]: time="2024-10-09T07:26:26.733305258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8tlj,Uid:e1dc5ee5-5114-4f0a-8bec-990b3efcd704,Namespace:calico-system,Attempt:1,}"
Oct 9 07:26:27.009340 systemd-networkd[1375]: cali5c54ac42df8: Link UP
Oct 9 07:26:27.009655 systemd-networkd[1375]: cali5c54ac42df8: Gained carrier
Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.874 [INFO][3852] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0 csi-node-driver- calico-system e1dc5ee5-5114-4f0a-8bec-990b3efcd704 820 0 2024-10-09 07:26:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.2.2-3-9020298c9e csi-node-driver-n8tlj eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali5c54ac42df8 [] []}} ContainerID="8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" Namespace="calico-system" Pod="csi-node-driver-n8tlj" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-"
Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.875 [INFO][3852] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" Namespace="calico-system" Pod="csi-node-driver-n8tlj" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0"
Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.924 [INFO][3863] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" HandleID="k8s-pod-network.8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0"
Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.942 [INFO][3863] ipam_plugin.go 270: Auto assigning IP ContainerID="8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" HandleID="k8s-pod-network.8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.2-3-9020298c9e", "pod":"csi-node-driver-n8tlj", "timestamp":"2024-10-09 07:26:26.924934393 +0000 UTC"}, Hostname:"ci-3975.2.2-3-9020298c9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.942 [INFO][3863] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.942 [INFO][3863] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.942 [INFO][3863] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.2-3-9020298c9e' Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.945 [INFO][3863] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.956 [INFO][3863] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.964 [INFO][3863] ipam.go 489: Trying affinity for 192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.967 [INFO][3863] ipam.go 155: Attempting to load block cidr=192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.971 [INFO][3863] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.971 [INFO][3863] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.974 [INFO][3863] ipam.go 1685: Creating new handle: k8s-pod-network.8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.982 [INFO][3863] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.997 [INFO][3863] ipam.go 1216: Successfully claimed IPs: [192.168.60.65/26] block=192.168.60.64/26 
handle="k8s-pod-network.8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.998 [INFO][3863] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.65/26] handle="k8s-pod-network.8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.998 [INFO][3863] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:27.036742 containerd[1477]: 2024-10-09 07:26:26.998 [INFO][3863] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.65/26] IPv6=[] ContainerID="8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" HandleID="k8s-pod-network.8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" Oct 9 07:26:27.038897 containerd[1477]: 2024-10-09 07:26:27.004 [INFO][3852] k8s.go 386: Populated endpoint ContainerID="8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" Namespace="calico-system" Pod="csi-node-driver-n8tlj" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1dc5ee5-5114-4f0a-8bec-990b3efcd704", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"", Pod:"csi-node-driver-n8tlj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c54ac42df8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:27.038897 containerd[1477]: 2024-10-09 07:26:27.004 [INFO][3852] k8s.go 387: Calico CNI using IPs: [192.168.60.65/32] ContainerID="8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" Namespace="calico-system" Pod="csi-node-driver-n8tlj" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" Oct 9 07:26:27.038897 containerd[1477]: 2024-10-09 07:26:27.004 [INFO][3852] dataplane_linux.go 68: Setting the host side veth name to cali5c54ac42df8 ContainerID="8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" Namespace="calico-system" Pod="csi-node-driver-n8tlj" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" Oct 9 07:26:27.038897 containerd[1477]: 2024-10-09 07:26:27.007 [INFO][3852] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" Namespace="calico-system" Pod="csi-node-driver-n8tlj" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" Oct 9 07:26:27.038897 containerd[1477]: 2024-10-09 07:26:27.007 [INFO][3852] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" 
Namespace="calico-system" Pod="csi-node-driver-n8tlj" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1dc5ee5-5114-4f0a-8bec-990b3efcd704", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc", Pod:"csi-node-driver-n8tlj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c54ac42df8", MAC:"3a:5b:26:18:d5:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:27.038897 containerd[1477]: 2024-10-09 07:26:27.029 [INFO][3852] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc" Namespace="calico-system" Pod="csi-node-driver-n8tlj" 
WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" Oct 9 07:26:27.086111 containerd[1477]: time="2024-10-09T07:26:27.085742749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:26:27.086111 containerd[1477]: time="2024-10-09T07:26:27.085847811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:26:27.086111 containerd[1477]: time="2024-10-09T07:26:27.085872268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:26:27.086111 containerd[1477]: time="2024-10-09T07:26:27.085890164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:26:27.132382 systemd[1]: Started cri-containerd-8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc.scope - libcontainer container 8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc. Oct 9 07:26:27.222425 containerd[1477]: time="2024-10-09T07:26:27.222326375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8tlj,Uid:e1dc5ee5-5114-4f0a-8bec-990b3efcd704,Namespace:calico-system,Attempt:1,} returns sandbox id \"8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc\"" Oct 9 07:26:27.232374 containerd[1477]: time="2024-10-09T07:26:27.232315144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:26:27.362546 containerd[1477]: time="2024-10-09T07:26:27.361469665Z" level=info msg="StopPodSandbox for \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\"" Oct 9 07:26:27.413497 systemd[1]: Started sshd@8-209.38.154.162:22-147.75.109.163:58846.service - OpenSSH per-connection server daemon (147.75.109.163:58846). 
Oct 9 07:26:27.527700 sshd[3939]: Accepted publickey for core from 147.75.109.163 port 58846 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:26:27.532511 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:26:27.542750 systemd-logind[1452]: New session 9 of user core. Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.443 [INFO][3933] k8s.go 608: Cleaning up netns ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.443 [INFO][3933] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" iface="eth0" netns="/var/run/netns/cni-bd7bed9b-4729-f54e-50a5-dab952e7e51f" Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.445 [INFO][3933] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" iface="eth0" netns="/var/run/netns/cni-bd7bed9b-4729-f54e-50a5-dab952e7e51f" Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.446 [INFO][3933] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" iface="eth0" netns="/var/run/netns/cni-bd7bed9b-4729-f54e-50a5-dab952e7e51f" Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.446 [INFO][3933] k8s.go 615: Releasing IP address(es) ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.446 [INFO][3933] utils.go 188: Calico CNI releasing IP address ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.512 [INFO][3941] ipam_plugin.go 417: Releasing address using handleID ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" HandleID="k8s-pod-network.5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.513 [INFO][3941] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.513 [INFO][3941] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.525 [WARNING][3941] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" HandleID="k8s-pod-network.5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.525 [INFO][3941] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" HandleID="k8s-pod-network.5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.530 [INFO][3941] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:27.543852 containerd[1477]: 2024-10-09 07:26:27.536 [INFO][3933] k8s.go 621: Teardown processing complete. ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:27.546138 containerd[1477]: time="2024-10-09T07:26:27.545166630Z" level=info msg="TearDown network for sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\" successfully" Oct 9 07:26:27.546138 containerd[1477]: time="2024-10-09T07:26:27.545260802Z" level=info msg="StopPodSandbox for \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\" returns successfully" Oct 9 07:26:27.547635 kubelet[2531]: E1009 07:26:27.547197 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:27.550497 containerd[1477]: time="2024-10-09T07:26:27.548765161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5blrg,Uid:8c66a0a7-9b6e-49c4-8126-0cf24d852972,Namespace:kube-system,Attempt:1,}" Oct 9 07:26:27.550529 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 9 07:26:27.678622 systemd[1]: run-containerd-runc-k8s.io-8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc-runc.Ryn5MT.mount: Deactivated successfully. Oct 9 07:26:27.678780 systemd[1]: run-netns-cni\x2dbd7bed9b\x2d4729\x2df54e\x2d50a5\x2ddab952e7e51f.mount: Deactivated successfully. Oct 9 07:26:27.859366 sshd[3939]: pam_unix(sshd:session): session closed for user core Oct 9 07:26:27.869612 systemd[1]: sshd@8-209.38.154.162:22-147.75.109.163:58846.service: Deactivated successfully. Oct 9 07:26:27.876363 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 07:26:27.884893 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Oct 9 07:26:27.888547 systemd-logind[1452]: Removed session 9. Oct 9 07:26:27.897153 systemd-networkd[1375]: cali9ca5abd3e24: Link UP Oct 9 07:26:27.906332 systemd-networkd[1375]: cali9ca5abd3e24: Gained carrier Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.684 [INFO][3951] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0 coredns-76f75df574- kube-system 8c66a0a7-9b6e-49c4-8126-0cf24d852972 830 0 2024-10-09 07:25:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.2-3-9020298c9e coredns-76f75df574-5blrg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9ca5abd3e24 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" Namespace="kube-system" Pod="coredns-76f75df574-5blrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.684 [INFO][3951] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" Namespace="kube-system" Pod="coredns-76f75df574-5blrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.779 [INFO][3969] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" HandleID="k8s-pod-network.4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.800 [INFO][3969] ipam_plugin.go 270: Auto assigning IP ContainerID="4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" HandleID="k8s-pod-network.4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000316990), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.2-3-9020298c9e", "pod":"coredns-76f75df574-5blrg", "timestamp":"2024-10-09 07:26:27.779488342 +0000 UTC"}, Hostname:"ci-3975.2.2-3-9020298c9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.800 [INFO][3969] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.802 [INFO][3969] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.802 [INFO][3969] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.2-3-9020298c9e' Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.806 [INFO][3969] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.818 [INFO][3969] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.834 [INFO][3969] ipam.go 489: Trying affinity for 192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.839 [INFO][3969] ipam.go 155: Attempting to load block cidr=192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.843 [INFO][3969] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.845 [INFO][3969] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.853 [INFO][3969] ipam.go 1685: Creating new handle: k8s-pod-network.4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.863 [INFO][3969] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.879 [INFO][3969] ipam.go 1216: Successfully claimed IPs: [192.168.60.66/26] block=192.168.60.64/26 
handle="k8s-pod-network.4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.881 [INFO][3969] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.66/26] handle="k8s-pod-network.4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.881 [INFO][3969] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:27.953471 containerd[1477]: 2024-10-09 07:26:27.883 [INFO][3969] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.66/26] IPv6=[] ContainerID="4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" HandleID="k8s-pod-network.4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:27.955349 containerd[1477]: 2024-10-09 07:26:27.891 [INFO][3951] k8s.go 386: Populated endpoint ContainerID="4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" Namespace="kube-system" Pod="coredns-76f75df574-5blrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8c66a0a7-9b6e-49c4-8126-0cf24d852972", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 25, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"", Pod:"coredns-76f75df574-5blrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ca5abd3e24", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:27.955349 containerd[1477]: 2024-10-09 07:26:27.892 [INFO][3951] k8s.go 387: Calico CNI using IPs: [192.168.60.66/32] ContainerID="4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" Namespace="kube-system" Pod="coredns-76f75df574-5blrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:27.955349 containerd[1477]: 2024-10-09 07:26:27.892 [INFO][3951] dataplane_linux.go 68: Setting the host side veth name to cali9ca5abd3e24 ContainerID="4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" Namespace="kube-system" Pod="coredns-76f75df574-5blrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:27.955349 containerd[1477]: 2024-10-09 07:26:27.897 [INFO][3951] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" Namespace="kube-system" Pod="coredns-76f75df574-5blrg" 
WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:27.955349 containerd[1477]: 2024-10-09 07:26:27.904 [INFO][3951] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" Namespace="kube-system" Pod="coredns-76f75df574-5blrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8c66a0a7-9b6e-49c4-8126-0cf24d852972", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 25, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd", Pod:"coredns-76f75df574-5blrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ca5abd3e24", MAC:"6a:5e:73:18:f8:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:27.955349 containerd[1477]: 2024-10-09 07:26:27.947 [INFO][3951] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd" Namespace="kube-system" Pod="coredns-76f75df574-5blrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:28.016314 containerd[1477]: time="2024-10-09T07:26:28.015556539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:26:28.016314 containerd[1477]: time="2024-10-09T07:26:28.015691748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:26:28.016314 containerd[1477]: time="2024-10-09T07:26:28.015782298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:26:28.016314 containerd[1477]: time="2024-10-09T07:26:28.015809834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:26:28.054411 systemd[1]: Started cri-containerd-4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd.scope - libcontainer container 4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd. 
Oct 9 07:26:28.072298 systemd-networkd[1375]: cali5c54ac42df8: Gained IPv6LL Oct 9 07:26:28.125639 containerd[1477]: time="2024-10-09T07:26:28.125573591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5blrg,Uid:8c66a0a7-9b6e-49c4-8126-0cf24d852972,Namespace:kube-system,Attempt:1,} returns sandbox id \"4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd\"" Oct 9 07:26:28.127312 kubelet[2531]: E1009 07:26:28.127277 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:28.149966 containerd[1477]: time="2024-10-09T07:26:28.149288076Z" level=info msg="CreateContainer within sandbox \"4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:26:28.191147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736482831.mount: Deactivated successfully. Oct 9 07:26:28.191879 containerd[1477]: time="2024-10-09T07:26:28.191825850Z" level=info msg="CreateContainer within sandbox \"4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69646ecf2a821f6b296f46d9d9e8960d461da2b4c4123fa9f8337d49a0aeffed\"" Oct 9 07:26:28.194557 containerd[1477]: time="2024-10-09T07:26:28.192906738Z" level=info msg="StartContainer for \"69646ecf2a821f6b296f46d9d9e8960d461da2b4c4123fa9f8337d49a0aeffed\"" Oct 9 07:26:28.229402 systemd[1]: Started cri-containerd-69646ecf2a821f6b296f46d9d9e8960d461da2b4c4123fa9f8337d49a0aeffed.scope - libcontainer container 69646ecf2a821f6b296f46d9d9e8960d461da2b4c4123fa9f8337d49a0aeffed. 
Oct 9 07:26:28.286811 containerd[1477]: time="2024-10-09T07:26:28.286636161Z" level=info msg="StartContainer for \"69646ecf2a821f6b296f46d9d9e8960d461da2b4c4123fa9f8337d49a0aeffed\" returns successfully" Oct 9 07:26:28.364642 containerd[1477]: time="2024-10-09T07:26:28.363895120Z" level=info msg="StopPodSandbox for \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\"" Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.470 [INFO][4079] k8s.go 608: Cleaning up netns ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.473 [INFO][4079] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" iface="eth0" netns="/var/run/netns/cni-03d4b03a-cdb6-5680-8411-7f5c7b4eaee6" Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.474 [INFO][4079] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" iface="eth0" netns="/var/run/netns/cni-03d4b03a-cdb6-5680-8411-7f5c7b4eaee6" Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.475 [INFO][4079] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" iface="eth0" netns="/var/run/netns/cni-03d4b03a-cdb6-5680-8411-7f5c7b4eaee6" Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.475 [INFO][4079] k8s.go 615: Releasing IP address(es) ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.476 [INFO][4079] utils.go 188: Calico CNI releasing IP address ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.525 [INFO][4086] ipam_plugin.go 417: Releasing address using handleID ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" HandleID="k8s-pod-network.92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.525 [INFO][4086] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.525 [INFO][4086] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.542 [WARNING][4086] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" HandleID="k8s-pod-network.92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.542 [INFO][4086] ipam_plugin.go 445: Releasing address using workloadID ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" HandleID="k8s-pod-network.92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.545 [INFO][4086] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:28.550519 containerd[1477]: 2024-10-09 07:26:28.547 [INFO][4079] k8s.go 621: Teardown processing complete. ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:28.552560 containerd[1477]: time="2024-10-09T07:26:28.550680039Z" level=info msg="TearDown network for sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\" successfully" Oct 9 07:26:28.552560 containerd[1477]: time="2024-10-09T07:26:28.550714707Z" level=info msg="StopPodSandbox for \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\" returns successfully" Oct 9 07:26:28.552626 kubelet[2531]: E1009 07:26:28.551286 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:28.555615 containerd[1477]: time="2024-10-09T07:26:28.554979156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zlsrg,Uid:04bea8bb-885b-40da-9413-78b8300274e6,Namespace:kube-system,Attempt:1,}" Oct 9 07:26:28.670286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418260062.mount: Deactivated successfully. 
Oct 9 07:26:28.670427 systemd[1]: run-netns-cni\x2d03d4b03a\x2dcdb6\x2d5680\x2d8411\x2d7f5c7b4eaee6.mount: Deactivated successfully. Oct 9 07:26:28.748509 kubelet[2531]: E1009 07:26:28.748058 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:28.794435 systemd-networkd[1375]: cali9a574c73fd5: Link UP Oct 9 07:26:28.807822 systemd-networkd[1375]: cali9a574c73fd5: Gained carrier Oct 9 07:26:28.827762 kubelet[2531]: I1009 07:26:28.823353 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5blrg" podStartSLOduration=34.82329747 podStartE2EDuration="34.82329747s" podCreationTimestamp="2024-10-09 07:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:26:28.789656442 +0000 UTC m=+47.650205231" watchObservedRunningTime="2024-10-09 07:26:28.82329747 +0000 UTC m=+47.683846265" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.623 [INFO][4093] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0 coredns-76f75df574- kube-system 04bea8bb-885b-40da-9413-78b8300274e6 848 0 2024-10-09 07:25:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.2-3-9020298c9e coredns-76f75df574-zlsrg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9a574c73fd5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" Namespace="kube-system" Pod="coredns-76f75df574-zlsrg" 
WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.623 [INFO][4093] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" Namespace="kube-system" Pod="coredns-76f75df574-zlsrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.683 [INFO][4104] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" HandleID="k8s-pod-network.cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.696 [INFO][4104] ipam_plugin.go 270: Auto assigning IP ContainerID="cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" HandleID="k8s-pod-network.cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310200), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.2-3-9020298c9e", "pod":"coredns-76f75df574-zlsrg", "timestamp":"2024-10-09 07:26:28.683556665 +0000 UTC"}, Hostname:"ci-3975.2.2-3-9020298c9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.696 [INFO][4104] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.696 [INFO][4104] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.696 [INFO][4104] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.2-3-9020298c9e' Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.700 [INFO][4104] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.709 [INFO][4104] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.721 [INFO][4104] ipam.go 489: Trying affinity for 192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.724 [INFO][4104] ipam.go 155: Attempting to load block cidr=192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.729 [INFO][4104] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.729 [INFO][4104] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.734 [INFO][4104] ipam.go 1685: Creating new handle: k8s-pod-network.cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1 Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.745 [INFO][4104] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.762 [INFO][4104] ipam.go 1216: Successfully claimed IPs: [192.168.60.67/26] block=192.168.60.64/26 
handle="k8s-pod-network.cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.763 [INFO][4104] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.67/26] handle="k8s-pod-network.cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.763 [INFO][4104] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:28.853346 containerd[1477]: 2024-10-09 07:26:28.763 [INFO][4104] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.67/26] IPv6=[] ContainerID="cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" HandleID="k8s-pod-network.cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:28.855538 containerd[1477]: 2024-10-09 07:26:28.781 [INFO][4093] k8s.go 386: Populated endpoint ContainerID="cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" Namespace="kube-system" Pod="coredns-76f75df574-zlsrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"04bea8bb-885b-40da-9413-78b8300274e6", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 25, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"", Pod:"coredns-76f75df574-zlsrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a574c73fd5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:28.855538 containerd[1477]: 2024-10-09 07:26:28.782 [INFO][4093] k8s.go 387: Calico CNI using IPs: [192.168.60.67/32] ContainerID="cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" Namespace="kube-system" Pod="coredns-76f75df574-zlsrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:28.855538 containerd[1477]: 2024-10-09 07:26:28.782 [INFO][4093] dataplane_linux.go 68: Setting the host side veth name to cali9a574c73fd5 ContainerID="cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" Namespace="kube-system" Pod="coredns-76f75df574-zlsrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:28.855538 containerd[1477]: 2024-10-09 07:26:28.797 [INFO][4093] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" Namespace="kube-system" Pod="coredns-76f75df574-zlsrg" 
WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:28.855538 containerd[1477]: 2024-10-09 07:26:28.801 [INFO][4093] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" Namespace="kube-system" Pod="coredns-76f75df574-zlsrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"04bea8bb-885b-40da-9413-78b8300274e6", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 25, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1", Pod:"coredns-76f75df574-zlsrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a574c73fd5", MAC:"8e:4b:1c:44:05:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:28.855538 containerd[1477]: 2024-10-09 07:26:28.844 [INFO][4093] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1" Namespace="kube-system" Pod="coredns-76f75df574-zlsrg" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:28.967579 systemd-networkd[1375]: cali9ca5abd3e24: Gained IPv6LL Oct 9 07:26:28.980654 containerd[1477]: time="2024-10-09T07:26:28.980454829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:26:28.980654 containerd[1477]: time="2024-10-09T07:26:28.980565691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:26:28.980654 containerd[1477]: time="2024-10-09T07:26:28.980590452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:26:28.980654 containerd[1477]: time="2024-10-09T07:26:28.980606190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:26:29.044388 systemd[1]: Started cri-containerd-cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1.scope - libcontainer container cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1. 
Oct 9 07:26:29.111429 containerd[1477]: time="2024-10-09T07:26:29.111340752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:29.113864 containerd[1477]: time="2024-10-09T07:26:29.113724194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:26:29.114816 containerd[1477]: time="2024-10-09T07:26:29.114686474Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:29.122352 containerd[1477]: time="2024-10-09T07:26:29.122139086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:29.123532 containerd[1477]: time="2024-10-09T07:26:29.123258803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.890657876s" Oct 9 07:26:29.123532 containerd[1477]: time="2024-10-09T07:26:29.123315656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 07:26:29.127636 containerd[1477]: time="2024-10-09T07:26:29.127519048Z" level=info msg="CreateContainer within sandbox \"8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 07:26:29.139593 containerd[1477]: time="2024-10-09T07:26:29.139513105Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-zlsrg,Uid:04bea8bb-885b-40da-9413-78b8300274e6,Namespace:kube-system,Attempt:1,} returns sandbox id \"cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1\"" Oct 9 07:26:29.142175 kubelet[2531]: E1009 07:26:29.142045 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:29.168177 containerd[1477]: time="2024-10-09T07:26:29.167916867Z" level=info msg="CreateContainer within sandbox \"cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:26:29.175351 containerd[1477]: time="2024-10-09T07:26:29.173846401Z" level=info msg="CreateContainer within sandbox \"8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1bc5dc29c4434ee73e1fee33eb0bea4a27cfaa383bbda2c868992b431d03eea4\"" Oct 9 07:26:29.177773 containerd[1477]: time="2024-10-09T07:26:29.177207392Z" level=info msg="StartContainer for \"1bc5dc29c4434ee73e1fee33eb0bea4a27cfaa383bbda2c868992b431d03eea4\"" Oct 9 07:26:29.192665 containerd[1477]: time="2024-10-09T07:26:29.192556610Z" level=info msg="CreateContainer within sandbox \"cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d78dc9153ba9daf3c5404ce575f410209a2eae69c6c55dc83003d0dfaedd63bf\"" Oct 9 07:26:29.195399 containerd[1477]: time="2024-10-09T07:26:29.194685695Z" level=info msg="StartContainer for \"d78dc9153ba9daf3c5404ce575f410209a2eae69c6c55dc83003d0dfaedd63bf\"" Oct 9 07:26:29.234324 systemd[1]: Started cri-containerd-1bc5dc29c4434ee73e1fee33eb0bea4a27cfaa383bbda2c868992b431d03eea4.scope - libcontainer container 1bc5dc29c4434ee73e1fee33eb0bea4a27cfaa383bbda2c868992b431d03eea4. 
Oct 9 07:26:29.252328 systemd[1]: Started cri-containerd-d78dc9153ba9daf3c5404ce575f410209a2eae69c6c55dc83003d0dfaedd63bf.scope - libcontainer container d78dc9153ba9daf3c5404ce575f410209a2eae69c6c55dc83003d0dfaedd63bf. Oct 9 07:26:29.312870 containerd[1477]: time="2024-10-09T07:26:29.312816681Z" level=info msg="StartContainer for \"1bc5dc29c4434ee73e1fee33eb0bea4a27cfaa383bbda2c868992b431d03eea4\" returns successfully" Oct 9 07:26:29.312870 containerd[1477]: time="2024-10-09T07:26:29.312816814Z" level=info msg="StartContainer for \"d78dc9153ba9daf3c5404ce575f410209a2eae69c6c55dc83003d0dfaedd63bf\" returns successfully" Oct 9 07:26:29.316114 containerd[1477]: time="2024-10-09T07:26:29.315499708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 07:26:29.371359 containerd[1477]: time="2024-10-09T07:26:29.371196488Z" level=info msg="StopPodSandbox for \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\"" Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.452 [INFO][4264] k8s.go 608: Cleaning up netns ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.452 [INFO][4264] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" iface="eth0" netns="/var/run/netns/cni-1d5f27b5-d02f-9a1f-daef-f02eb7ab4259" Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.452 [INFO][4264] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" iface="eth0" netns="/var/run/netns/cni-1d5f27b5-d02f-9a1f-daef-f02eb7ab4259" Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.453 [INFO][4264] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" iface="eth0" netns="/var/run/netns/cni-1d5f27b5-d02f-9a1f-daef-f02eb7ab4259" Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.453 [INFO][4264] k8s.go 615: Releasing IP address(es) ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.453 [INFO][4264] utils.go 188: Calico CNI releasing IP address ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.487 [INFO][4270] ipam_plugin.go 417: Releasing address using handleID ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" HandleID="k8s-pod-network.02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.487 [INFO][4270] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.487 [INFO][4270] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.497 [WARNING][4270] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" HandleID="k8s-pod-network.02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.497 [INFO][4270] ipam_plugin.go 445: Releasing address using workloadID ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" HandleID="k8s-pod-network.02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.499 [INFO][4270] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:29.505004 containerd[1477]: 2024-10-09 07:26:29.502 [INFO][4264] k8s.go 621: Teardown processing complete. ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:29.505876 containerd[1477]: time="2024-10-09T07:26:29.505291289Z" level=info msg="TearDown network for sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\" successfully" Oct 9 07:26:29.505876 containerd[1477]: time="2024-10-09T07:26:29.505328617Z" level=info msg="StopPodSandbox for \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\" returns successfully" Oct 9 07:26:29.506181 containerd[1477]: time="2024-10-09T07:26:29.506143794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc4f97766-xj7nc,Uid:a2a5b674-9417-4e0a-953e-24c56675ec4e,Namespace:calico-system,Attempt:1,}" Oct 9 07:26:29.675620 systemd[1]: run-netns-cni\x2d1d5f27b5\x2dd02f\x2d9a1f\x2ddaef\x2df02eb7ab4259.mount: Deactivated successfully. 
Oct 9 07:26:29.706292 systemd-networkd[1375]: cali207784f68ee: Link UP Oct 9 07:26:29.706841 systemd-networkd[1375]: cali207784f68ee: Gained carrier Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.575 [INFO][4277] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0 calico-kube-controllers-6dc4f97766- calico-system a2a5b674-9417-4e0a-953e-24c56675ec4e 876 0 2024-10-09 07:26:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dc4f97766 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.2.2-3-9020298c9e calico-kube-controllers-6dc4f97766-xj7nc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali207784f68ee [] []}} ContainerID="8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" Namespace="calico-system" Pod="calico-kube-controllers-6dc4f97766-xj7nc" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.575 [INFO][4277] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" Namespace="calico-system" Pod="calico-kube-controllers-6dc4f97766-xj7nc" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.618 [INFO][4287] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" HandleID="k8s-pod-network.8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" 
Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.631 [INFO][4287] ipam_plugin.go 270: Auto assigning IP ContainerID="8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" HandleID="k8s-pod-network.8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265d30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.2-3-9020298c9e", "pod":"calico-kube-controllers-6dc4f97766-xj7nc", "timestamp":"2024-10-09 07:26:29.618748144 +0000 UTC"}, Hostname:"ci-3975.2.2-3-9020298c9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.631 [INFO][4287] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.631 [INFO][4287] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.631 [INFO][4287] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.2-3-9020298c9e' Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.635 [INFO][4287] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.645 [INFO][4287] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.654 [INFO][4287] ipam.go 489: Trying affinity for 192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.657 [INFO][4287] ipam.go 155: Attempting to load block cidr=192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.664 [INFO][4287] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.664 [INFO][4287] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.668 [INFO][4287] ipam.go 1685: Creating new handle: k8s-pod-network.8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2 Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.682 [INFO][4287] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.697 [INFO][4287] ipam.go 1216: Successfully claimed IPs: [192.168.60.68/26] block=192.168.60.64/26 
handle="k8s-pod-network.8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.697 [INFO][4287] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.68/26] handle="k8s-pod-network.8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" host="ci-3975.2.2-3-9020298c9e" Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.697 [INFO][4287] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:29.745747 containerd[1477]: 2024-10-09 07:26:29.697 [INFO][4287] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.68/26] IPv6=[] ContainerID="8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" HandleID="k8s-pod-network.8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:29.749517 containerd[1477]: 2024-10-09 07:26:29.701 [INFO][4277] k8s.go 386: Populated endpoint ContainerID="8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" Namespace="calico-system" Pod="calico-kube-controllers-6dc4f97766-xj7nc" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0", GenerateName:"calico-kube-controllers-6dc4f97766-", Namespace:"calico-system", SelfLink:"", UID:"a2a5b674-9417-4e0a-953e-24c56675ec4e", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc4f97766", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"", Pod:"calico-kube-controllers-6dc4f97766-xj7nc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali207784f68ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:29.749517 containerd[1477]: 2024-10-09 07:26:29.702 [INFO][4277] k8s.go 387: Calico CNI using IPs: [192.168.60.68/32] ContainerID="8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" Namespace="calico-system" Pod="calico-kube-controllers-6dc4f97766-xj7nc" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:29.749517 containerd[1477]: 2024-10-09 07:26:29.702 [INFO][4277] dataplane_linux.go 68: Setting the host side veth name to cali207784f68ee ContainerID="8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" Namespace="calico-system" Pod="calico-kube-controllers-6dc4f97766-xj7nc" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:29.749517 containerd[1477]: 2024-10-09 07:26:29.705 [INFO][4277] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" Namespace="calico-system" Pod="calico-kube-controllers-6dc4f97766-xj7nc" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 
07:26:29.749517 containerd[1477]: 2024-10-09 07:26:29.708 [INFO][4277] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" Namespace="calico-system" Pod="calico-kube-controllers-6dc4f97766-xj7nc" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0", GenerateName:"calico-kube-controllers-6dc4f97766-", Namespace:"calico-system", SelfLink:"", UID:"a2a5b674-9417-4e0a-953e-24c56675ec4e", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc4f97766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2", Pod:"calico-kube-controllers-6dc4f97766-xj7nc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali207784f68ee", MAC:"46:8d:b0:62:24:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 
07:26:29.749517 containerd[1477]: 2024-10-09 07:26:29.738 [INFO][4277] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2" Namespace="calico-system" Pod="calico-kube-controllers-6dc4f97766-xj7nc" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:29.760038 kubelet[2531]: E1009 07:26:29.759923 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:29.771400 kubelet[2531]: E1009 07:26:29.771306 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:29.815415 containerd[1477]: time="2024-10-09T07:26:29.815001772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:26:29.816314 containerd[1477]: time="2024-10-09T07:26:29.816008582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:26:29.816314 containerd[1477]: time="2024-10-09T07:26:29.816114911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:26:29.816314 containerd[1477]: time="2024-10-09T07:26:29.816158786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:26:29.837752 kubelet[2531]: I1009 07:26:29.837681 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zlsrg" podStartSLOduration=35.837634275 podStartE2EDuration="35.837634275s" podCreationTimestamp="2024-10-09 07:25:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:26:29.789563978 +0000 UTC m=+48.650112775" watchObservedRunningTime="2024-10-09 07:26:29.837634275 +0000 UTC m=+48.698183070" Oct 9 07:26:29.855738 systemd[1]: run-containerd-runc-k8s.io-8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2-runc.kFAl0I.mount: Deactivated successfully. Oct 9 07:26:29.871638 systemd[1]: Started cri-containerd-8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2.scope - libcontainer container 8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2. 
Oct 9 07:26:29.994135 containerd[1477]: time="2024-10-09T07:26:29.993020896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dc4f97766-xj7nc,Uid:a2a5b674-9417-4e0a-953e-24c56675ec4e,Namespace:calico-system,Attempt:1,} returns sandbox id \"8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2\"" Oct 9 07:26:30.696162 systemd-networkd[1375]: cali9a574c73fd5: Gained IPv6LL Oct 9 07:26:30.788740 kubelet[2531]: E1009 07:26:30.785916 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:30.791521 kubelet[2531]: E1009 07:26:30.789662 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:30.995228 containerd[1477]: time="2024-10-09T07:26:30.994107154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:30.995228 containerd[1477]: time="2024-10-09T07:26:30.995029265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 07:26:30.996487 containerd[1477]: time="2024-10-09T07:26:30.996438475Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:31.000184 containerd[1477]: time="2024-10-09T07:26:31.000117086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:31.001762 containerd[1477]: time="2024-10-09T07:26:31.001651052Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.68609219s" Oct 9 07:26:31.002110 containerd[1477]: time="2024-10-09T07:26:31.002050366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 07:26:31.003929 containerd[1477]: time="2024-10-09T07:26:31.003877740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 07:26:31.008712 containerd[1477]: time="2024-10-09T07:26:31.008598208Z" level=info msg="CreateContainer within sandbox \"8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 07:26:31.037489 containerd[1477]: time="2024-10-09T07:26:31.037427470Z" level=info msg="CreateContainer within sandbox \"8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ed2ab309c9a518641eadd671053e923cee81269c20ec4131a8af9a812d9b9b89\"" Oct 9 07:26:31.038786 containerd[1477]: time="2024-10-09T07:26:31.038393177Z" level=info msg="StartContainer for \"ed2ab309c9a518641eadd671053e923cee81269c20ec4131a8af9a812d9b9b89\"" Oct 9 07:26:31.098404 systemd[1]: Started cri-containerd-ed2ab309c9a518641eadd671053e923cee81269c20ec4131a8af9a812d9b9b89.scope - libcontainer container ed2ab309c9a518641eadd671053e923cee81269c20ec4131a8af9a812d9b9b89. 
Oct 9 07:26:31.140786 containerd[1477]: time="2024-10-09T07:26:31.140730918Z" level=info msg="StartContainer for \"ed2ab309c9a518641eadd671053e923cee81269c20ec4131a8af9a812d9b9b89\" returns successfully" Oct 9 07:26:31.144615 systemd-networkd[1375]: cali207784f68ee: Gained IPv6LL Oct 9 07:26:31.674730 kubelet[2531]: I1009 07:26:31.674614 2531 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 07:26:31.684773 kubelet[2531]: I1009 07:26:31.684390 2531 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 07:26:31.790554 kubelet[2531]: E1009 07:26:31.790520 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:32.886085 systemd[1]: Started sshd@9-209.38.154.162:22-147.75.109.163:58858.service - OpenSSH per-connection server daemon (147.75.109.163:58858). Oct 9 07:26:33.021497 sshd[4405]: Accepted publickey for core from 147.75.109.163 port 58858 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:26:33.025401 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:26:33.037968 systemd-logind[1452]: New session 10 of user core. Oct 9 07:26:33.047636 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 9 07:26:33.473235 containerd[1477]: time="2024-10-09T07:26:33.473028185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:33.476465 containerd[1477]: time="2024-10-09T07:26:33.475906911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:26:33.479969 containerd[1477]: time="2024-10-09T07:26:33.479390007Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:33.493840 containerd[1477]: time="2024-10-09T07:26:33.493751949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:26:33.495761 containerd[1477]: time="2024-10-09T07:26:33.494936314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.490696861s" Oct 9 07:26:33.495761 containerd[1477]: time="2024-10-09T07:26:33.495006126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 07:26:33.535625 containerd[1477]: time="2024-10-09T07:26:33.534457060Z" level=info msg="CreateContainer within sandbox \"8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 
07:26:33.559565 sshd[4405]: pam_unix(sshd:session): session closed for user core Oct 9 07:26:33.568712 containerd[1477]: time="2024-10-09T07:26:33.568652414Z" level=info msg="CreateContainer within sandbox \"8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"98fab0cb6c20a82f8cc2ccadb9d09ef02f494e4506fc473fb1c43dba758fbe28\"" Oct 9 07:26:33.573992 containerd[1477]: time="2024-10-09T07:26:33.572091697Z" level=info msg="StartContainer for \"98fab0cb6c20a82f8cc2ccadb9d09ef02f494e4506fc473fb1c43dba758fbe28\"" Oct 9 07:26:33.580812 systemd[1]: sshd@9-209.38.154.162:22-147.75.109.163:58858.service: Deactivated successfully. Oct 9 07:26:33.588733 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:26:33.594338 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:26:33.607469 systemd[1]: Started sshd@10-209.38.154.162:22-147.75.109.163:58862.service - OpenSSH per-connection server daemon (147.75.109.163:58862). Oct 9 07:26:33.617151 systemd-logind[1452]: Removed session 10. Oct 9 07:26:33.714956 sshd[4423]: Accepted publickey for core from 147.75.109.163 port 58862 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:26:33.720823 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:26:33.741581 systemd-logind[1452]: New session 11 of user core. Oct 9 07:26:33.746719 systemd[1]: Started cri-containerd-98fab0cb6c20a82f8cc2ccadb9d09ef02f494e4506fc473fb1c43dba758fbe28.scope - libcontainer container 98fab0cb6c20a82f8cc2ccadb9d09ef02f494e4506fc473fb1c43dba758fbe28. Oct 9 07:26:33.749309 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 9 07:26:33.881359 containerd[1477]: time="2024-10-09T07:26:33.881304497Z" level=info msg="StartContainer for \"98fab0cb6c20a82f8cc2ccadb9d09ef02f494e4506fc473fb1c43dba758fbe28\" returns successfully" Oct 9 07:26:34.068554 sshd[4423]: pam_unix(sshd:session): session closed for user core Oct 9 07:26:34.085371 systemd[1]: sshd@10-209.38.154.162:22-147.75.109.163:58862.service: Deactivated successfully. Oct 9 07:26:34.094983 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 07:26:34.102359 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Oct 9 07:26:34.111576 systemd[1]: Started sshd@11-209.38.154.162:22-147.75.109.163:58872.service - OpenSSH per-connection server daemon (147.75.109.163:58872). Oct 9 07:26:34.116673 systemd-logind[1452]: Removed session 11. Oct 9 07:26:34.251473 sshd[4468]: Accepted publickey for core from 147.75.109.163 port 58872 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:26:34.259930 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:26:34.267308 systemd-logind[1452]: New session 12 of user core. Oct 9 07:26:34.271259 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 07:26:34.503349 sshd[4468]: pam_unix(sshd:session): session closed for user core Oct 9 07:26:34.507980 systemd[1]: sshd@11-209.38.154.162:22-147.75.109.163:58872.service: Deactivated successfully. Oct 9 07:26:34.520320 systemd[1]: run-containerd-runc-k8s.io-98fab0cb6c20a82f8cc2ccadb9d09ef02f494e4506fc473fb1c43dba758fbe28-runc.0a3NKT.mount: Deactivated successfully. Oct 9 07:26:34.524496 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:26:34.528287 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Oct 9 07:26:34.531953 systemd-logind[1452]: Removed session 12. 
Oct 9 07:26:34.615582 kubelet[2531]: E1009 07:26:34.615009 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:26:34.639640 kubelet[2531]: I1009 07:26:34.639579 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-n8tlj" podStartSLOduration=28.861027821 podStartE2EDuration="32.63936411s" podCreationTimestamp="2024-10-09 07:26:02 +0000 UTC" firstStartedPulling="2024-10-09 07:26:27.224833846 +0000 UTC m=+46.085382637" lastFinishedPulling="2024-10-09 07:26:31.003170134 +0000 UTC m=+49.863718926" observedRunningTime="2024-10-09 07:26:31.818771192 +0000 UTC m=+50.679319987" watchObservedRunningTime="2024-10-09 07:26:34.63936411 +0000 UTC m=+53.499912905" Oct 9 07:26:34.887888 kubelet[2531]: I1009 07:26:34.887099 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dc4f97766-xj7nc" podStartSLOduration=28.401284008 podStartE2EDuration="31.877890721s" podCreationTimestamp="2024-10-09 07:26:03 +0000 UTC" firstStartedPulling="2024-10-09 07:26:30.021254103 +0000 UTC m=+48.881802884" lastFinishedPulling="2024-10-09 07:26:33.497860811 +0000 UTC m=+52.358409597" observedRunningTime="2024-10-09 07:26:34.850899347 +0000 UTC m=+53.711448148" watchObservedRunningTime="2024-10-09 07:26:34.877890721 +0000 UTC m=+53.738439518" Oct 9 07:26:39.526728 systemd[1]: Started sshd@12-209.38.154.162:22-147.75.109.163:55480.service - OpenSSH per-connection server daemon (147.75.109.163:55480). Oct 9 07:26:39.603319 sshd[4543]: Accepted publickey for core from 147.75.109.163 port 55480 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w Oct 9 07:26:39.606428 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:26:39.616815 systemd-logind[1452]: New session 13 of user core. 
Oct 9 07:26:39.622370 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 07:26:39.886525 sshd[4543]: pam_unix(sshd:session): session closed for user core Oct 9 07:26:39.891847 systemd[1]: sshd@12-209.38.154.162:22-147.75.109.163:55480.service: Deactivated successfully. Oct 9 07:26:39.896382 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 07:26:39.899786 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:26:39.901765 systemd-logind[1452]: Removed session 13. Oct 9 07:26:41.390568 containerd[1477]: time="2024-10-09T07:26:41.390474995Z" level=info msg="StopPodSandbox for \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\"" Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.442 [WARNING][4571] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0", GenerateName:"calico-kube-controllers-6dc4f97766-", Namespace:"calico-system", SelfLink:"", UID:"a2a5b674-9417-4e0a-953e-24c56675ec4e", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc4f97766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2", Pod:"calico-kube-controllers-6dc4f97766-xj7nc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali207784f68ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.442 [INFO][4571] k8s.go 608: Cleaning up netns ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.442 [INFO][4571] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" iface="eth0" netns="" Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.442 [INFO][4571] k8s.go 615: Releasing IP address(es) ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.442 [INFO][4571] utils.go 188: Calico CNI releasing IP address ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.469 [INFO][4577] ipam_plugin.go 417: Releasing address using handleID ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" HandleID="k8s-pod-network.02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.469 [INFO][4577] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.469 [INFO][4577] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.477 [WARNING][4577] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" HandleID="k8s-pod-network.02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.478 [INFO][4577] ipam_plugin.go 445: Releasing address using workloadID ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" HandleID="k8s-pod-network.02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.480 [INFO][4577] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:41.488870 containerd[1477]: 2024-10-09 07:26:41.482 [INFO][4571] k8s.go 621: Teardown processing complete. 
ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:41.488870 containerd[1477]: time="2024-10-09T07:26:41.488643731Z" level=info msg="TearDown network for sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\" successfully" Oct 9 07:26:41.488870 containerd[1477]: time="2024-10-09T07:26:41.488670225Z" level=info msg="StopPodSandbox for \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\" returns successfully" Oct 9 07:26:41.493160 containerd[1477]: time="2024-10-09T07:26:41.493048330Z" level=info msg="RemovePodSandbox for \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\"" Oct 9 07:26:41.495649 containerd[1477]: time="2024-10-09T07:26:41.495544948Z" level=info msg="Forcibly stopping sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\"" Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.568 [WARNING][4595] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0", GenerateName:"calico-kube-controllers-6dc4f97766-", Namespace:"calico-system", SelfLink:"", UID:"a2a5b674-9417-4e0a-953e-24c56675ec4e", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 26, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dc4f97766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"8cf5e3c3600f62e2abb61ddc165dddc70ced3f408178ebd5bf7950348db998e2", Pod:"calico-kube-controllers-6dc4f97766-xj7nc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.60.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali207784f68ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.569 [INFO][4595] k8s.go 608: Cleaning up netns ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.569 [INFO][4595] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" iface="eth0" netns="" Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.569 [INFO][4595] k8s.go 615: Releasing IP address(es) ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.569 [INFO][4595] utils.go 188: Calico CNI releasing IP address ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.599 [INFO][4601] ipam_plugin.go 417: Releasing address using handleID ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" HandleID="k8s-pod-network.02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.599 [INFO][4601] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.599 [INFO][4601] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.606 [WARNING][4601] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" HandleID="k8s-pod-network.02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.606 [INFO][4601] ipam_plugin.go 445: Releasing address using workloadID ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" HandleID="k8s-pod-network.02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--kube--controllers--6dc4f97766--xj7nc-eth0" Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.609 [INFO][4601] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:41.614226 containerd[1477]: 2024-10-09 07:26:41.611 [INFO][4595] k8s.go 621: Teardown processing complete. ContainerID="02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3" Oct 9 07:26:41.615316 containerd[1477]: time="2024-10-09T07:26:41.614259264Z" level=info msg="TearDown network for sandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\" successfully" Oct 9 07:26:41.627910 containerd[1477]: time="2024-10-09T07:26:41.627803403Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:26:41.628444 containerd[1477]: time="2024-10-09T07:26:41.627969763Z" level=info msg="RemovePodSandbox \"02d9b8103affdcfbd007b7ca00ebaceb6c5958ac5b5b4d1a2ef6a030d2d9faa3\" returns successfully" Oct 9 07:26:41.629144 containerd[1477]: time="2024-10-09T07:26:41.628812943Z" level=info msg="StopPodSandbox for \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\"" Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.686 [WARNING][4619] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1dc5ee5-5114-4f0a-8bec-990b3efcd704", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc", Pod:"csi-node-driver-n8tlj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c54ac42df8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.686 [INFO][4619] k8s.go 608: Cleaning up netns ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.686 [INFO][4619] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" iface="eth0" netns="" Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.686 [INFO][4619] k8s.go 615: Releasing IP address(es) ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.686 [INFO][4619] utils.go 188: Calico CNI releasing IP address ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.719 [INFO][4625] ipam_plugin.go 417: Releasing address using handleID ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" HandleID="k8s-pod-network.5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.719 [INFO][4625] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.719 [INFO][4625] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.725 [WARNING][4625] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" HandleID="k8s-pod-network.5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.726 [INFO][4625] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" HandleID="k8s-pod-network.5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.729 [INFO][4625] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:41.732983 containerd[1477]: 2024-10-09 07:26:41.730 [INFO][4619] k8s.go 621: Teardown processing complete. ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Oct 9 07:26:41.732983 containerd[1477]: time="2024-10-09T07:26:41.732841863Z" level=info msg="TearDown network for sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\" successfully" Oct 9 07:26:41.732983 containerd[1477]: time="2024-10-09T07:26:41.732868587Z" level=info msg="StopPodSandbox for \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\" returns successfully" Oct 9 07:26:41.733882 containerd[1477]: time="2024-10-09T07:26:41.733520709Z" level=info msg="RemovePodSandbox for \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\"" Oct 9 07:26:41.733882 containerd[1477]: time="2024-10-09T07:26:41.733566874Z" level=info msg="Forcibly stopping sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\"" Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.782 [WARNING][4644] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e1dc5ee5-5114-4f0a-8bec-990b3efcd704", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 26, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"8ef77a3a9d7097a60c519c709f7ac026090aceef160f6bae003168c86fe37edc", Pod:"csi-node-driver-n8tlj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.60.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali5c54ac42df8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.782 [INFO][4644] k8s.go 608: Cleaning up netns ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.782 [INFO][4644] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" iface="eth0" netns="" Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.782 [INFO][4644] k8s.go 615: Releasing IP address(es) ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.782 [INFO][4644] utils.go 188: Calico CNI releasing IP address ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.808 [INFO][4651] ipam_plugin.go 417: Releasing address using handleID ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" HandleID="k8s-pod-network.5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.808 [INFO][4651] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.808 [INFO][4651] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.815 [WARNING][4651] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" HandleID="k8s-pod-network.5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.815 [INFO][4651] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" HandleID="k8s-pod-network.5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Workload="ci--3975.2.2--3--9020298c9e-k8s-csi--node--driver--n8tlj-eth0" Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.817 [INFO][4651] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:41.823368 containerd[1477]: 2024-10-09 07:26:41.820 [INFO][4644] k8s.go 621: Teardown processing complete. ContainerID="5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c" Oct 9 07:26:41.823368 containerd[1477]: time="2024-10-09T07:26:41.822055218Z" level=info msg="TearDown network for sandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\" successfully" Oct 9 07:26:41.826161 containerd[1477]: time="2024-10-09T07:26:41.826119573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:26:41.826535 containerd[1477]: time="2024-10-09T07:26:41.826359845Z" level=info msg="RemovePodSandbox \"5955f1c0bb3d86b6078beb12751627af327b1c80041d9541ec80681a1ef4641c\" returns successfully" Oct 9 07:26:41.827562 containerd[1477]: time="2024-10-09T07:26:41.827049022Z" level=info msg="StopPodSandbox for \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\"" Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.879 [WARNING][4670] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"04bea8bb-885b-40da-9413-78b8300274e6", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 25, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1", Pod:"coredns-76f75df574-zlsrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a574c73fd5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.880 [INFO][4670] k8s.go 608: Cleaning up netns ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.880 [INFO][4670] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" iface="eth0" netns="" Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.880 [INFO][4670] k8s.go 615: Releasing IP address(es) ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.880 [INFO][4670] utils.go 188: Calico CNI releasing IP address ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.933 [INFO][4676] ipam_plugin.go 417: Releasing address using handleID ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" HandleID="k8s-pod-network.92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.934 [INFO][4676] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.934 [INFO][4676] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.941 [WARNING][4676] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" HandleID="k8s-pod-network.92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.941 [INFO][4676] ipam_plugin.go 445: Releasing address using workloadID ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" HandleID="k8s-pod-network.92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.944 [INFO][4676] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:41.947643 containerd[1477]: 2024-10-09 07:26:41.945 [INFO][4670] k8s.go 621: Teardown processing complete. 
ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:41.948392 containerd[1477]: time="2024-10-09T07:26:41.947700561Z" level=info msg="TearDown network for sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\" successfully" Oct 9 07:26:41.948392 containerd[1477]: time="2024-10-09T07:26:41.947727652Z" level=info msg="StopPodSandbox for \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\" returns successfully" Oct 9 07:26:41.949278 containerd[1477]: time="2024-10-09T07:26:41.948784554Z" level=info msg="RemovePodSandbox for \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\"" Oct 9 07:26:41.949278 containerd[1477]: time="2024-10-09T07:26:41.948825744Z" level=info msg="Forcibly stopping sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\"" Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.006 [WARNING][4694] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"04bea8bb-885b-40da-9413-78b8300274e6", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 25, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"cce52c225c706d5d990cc7dc2756a0360236d3c42568f58b2bbc5f9c23b06ad1", Pod:"coredns-76f75df574-zlsrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9a574c73fd5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.006 [INFO][4694] k8s.go 608: 
Cleaning up netns ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.006 [INFO][4694] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" iface="eth0" netns="" Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.006 [INFO][4694] k8s.go 615: Releasing IP address(es) ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.007 [INFO][4694] utils.go 188: Calico CNI releasing IP address ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.034 [INFO][4700] ipam_plugin.go 417: Releasing address using handleID ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" HandleID="k8s-pod-network.92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.034 [INFO][4700] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.034 [INFO][4700] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.042 [WARNING][4700] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" HandleID="k8s-pod-network.92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.042 [INFO][4700] ipam_plugin.go 445: Releasing address using workloadID ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" HandleID="k8s-pod-network.92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--zlsrg-eth0" Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.044 [INFO][4700] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:42.049307 containerd[1477]: 2024-10-09 07:26:42.047 [INFO][4694] k8s.go 621: Teardown processing complete. ContainerID="92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7" Oct 9 07:26:42.050717 containerd[1477]: time="2024-10-09T07:26:42.050269292Z" level=info msg="TearDown network for sandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\" successfully" Oct 9 07:26:42.054200 containerd[1477]: time="2024-10-09T07:26:42.054143625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:26:42.054565 containerd[1477]: time="2024-10-09T07:26:42.054534929Z" level=info msg="RemovePodSandbox \"92c536ce5882e13b5277f02b95eb90d38cffbc60a3dfc8ac44a75cb793b633c7\" returns successfully" Oct 9 07:26:42.055632 containerd[1477]: time="2024-10-09T07:26:42.055596313Z" level=info msg="StopPodSandbox for \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\"" Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.105 [WARNING][4718] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8c66a0a7-9b6e-49c4-8126-0cf24d852972", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 25, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd", Pod:"coredns-76f75df574-5blrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ca5abd3e24", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.105 [INFO][4718] k8s.go 608: Cleaning up netns ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.105 [INFO][4718] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" iface="eth0" netns="" Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.105 [INFO][4718] k8s.go 615: Releasing IP address(es) ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.105 [INFO][4718] utils.go 188: Calico CNI releasing IP address ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.135 [INFO][4724] ipam_plugin.go 417: Releasing address using handleID ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" HandleID="k8s-pod-network.5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.135 [INFO][4724] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.135 [INFO][4724] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.144 [WARNING][4724] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" HandleID="k8s-pod-network.5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.144 [INFO][4724] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" HandleID="k8s-pod-network.5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.146 [INFO][4724] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:42.152122 containerd[1477]: 2024-10-09 07:26:42.149 [INFO][4718] k8s.go 621: Teardown processing complete. 
ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:42.152122 containerd[1477]: time="2024-10-09T07:26:42.151964851Z" level=info msg="TearDown network for sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\" successfully" Oct 9 07:26:42.152122 containerd[1477]: time="2024-10-09T07:26:42.151999661Z" level=info msg="StopPodSandbox for \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\" returns successfully" Oct 9 07:26:42.153913 containerd[1477]: time="2024-10-09T07:26:42.153266839Z" level=info msg="RemovePodSandbox for \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\"" Oct 9 07:26:42.153913 containerd[1477]: time="2024-10-09T07:26:42.153317659Z" level=info msg="Forcibly stopping sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\"" Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.201 [WARNING][4742] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8c66a0a7-9b6e-49c4-8126-0cf24d852972", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 25, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"4791429a4240d091e922067f0b1b13b76480d0d9592a19fea9e47421796ef6bd", Pod:"coredns-76f75df574-5blrg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.60.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ca5abd3e24", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.201 [INFO][4742] k8s.go 608: 
Cleaning up netns ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.201 [INFO][4742] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" iface="eth0" netns="" Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.201 [INFO][4742] k8s.go 615: Releasing IP address(es) ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.201 [INFO][4742] utils.go 188: Calico CNI releasing IP address ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.229 [INFO][4748] ipam_plugin.go 417: Releasing address using handleID ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" HandleID="k8s-pod-network.5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.229 [INFO][4748] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.229 [INFO][4748] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.237 [WARNING][4748] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" HandleID="k8s-pod-network.5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.237 [INFO][4748] ipam_plugin.go 445: Releasing address using workloadID ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" HandleID="k8s-pod-network.5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Workload="ci--3975.2.2--3--9020298c9e-k8s-coredns--76f75df574--5blrg-eth0" Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.240 [INFO][4748] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:26:42.245102 containerd[1477]: 2024-10-09 07:26:42.242 [INFO][4742] k8s.go 621: Teardown processing complete. ContainerID="5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208" Oct 9 07:26:42.245102 containerd[1477]: time="2024-10-09T07:26:42.245010596Z" level=info msg="TearDown network for sandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\" successfully" Oct 9 07:26:42.249307 containerd[1477]: time="2024-10-09T07:26:42.249217945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:26:42.249307 containerd[1477]: time="2024-10-09T07:26:42.249316002Z" level=info msg="RemovePodSandbox \"5776fd017ed180e4b233e871e22d828bcbc2f282e5c9de506e0b47fafdf7a208\" returns successfully" Oct 9 07:26:44.906617 systemd[1]: Started sshd@13-209.38.154.162:22-147.75.109.163:55492.service - OpenSSH per-connection server daemon (147.75.109.163:55492). 
Oct 9 07:26:45.042813 sshd[4774]: Accepted publickey for core from 147.75.109.163 port 55492 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:26:45.045953 sshd[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:26:45.056367 systemd-logind[1452]: New session 14 of user core.
Oct 9 07:26:45.062664 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 9 07:26:45.347827 sshd[4774]: pam_unix(sshd:session): session closed for user core
Oct 9 07:26:45.358736 systemd[1]: sshd@13-209.38.154.162:22-147.75.109.163:55492.service: Deactivated successfully.
Oct 9 07:26:45.366136 systemd[1]: session-14.scope: Deactivated successfully.
Oct 9 07:26:45.374458 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
Oct 9 07:26:45.378375 systemd-logind[1452]: Removed session 14.
Oct 9 07:26:46.261100 kubelet[2531]: I1009 07:26:46.261031 2531 topology_manager.go:215] "Topology Admit Handler" podUID="ef749375-4810-46c3-86b1-6b05a35d89f5" podNamespace="calico-apiserver" podName="calico-apiserver-87cb8f7c8-khw5c"
Oct 9 07:26:46.309435 systemd[1]: Created slice kubepods-besteffort-podef749375_4810_46c3_86b1_6b05a35d89f5.slice - libcontainer container kubepods-besteffort-podef749375_4810_46c3_86b1_6b05a35d89f5.slice.
Oct 9 07:26:46.365786 kubelet[2531]: I1009 07:26:46.365613 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef749375-4810-46c3-86b1-6b05a35d89f5-calico-apiserver-certs\") pod \"calico-apiserver-87cb8f7c8-khw5c\" (UID: \"ef749375-4810-46c3-86b1-6b05a35d89f5\") " pod="calico-apiserver/calico-apiserver-87cb8f7c8-khw5c"
Oct 9 07:26:46.367673 kubelet[2531]: I1009 07:26:46.367633 2531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znxn5\" (UniqueName: \"kubernetes.io/projected/ef749375-4810-46c3-86b1-6b05a35d89f5-kube-api-access-znxn5\") pod \"calico-apiserver-87cb8f7c8-khw5c\" (UID: \"ef749375-4810-46c3-86b1-6b05a35d89f5\") " pod="calico-apiserver/calico-apiserver-87cb8f7c8-khw5c"
Oct 9 07:26:46.472544 kubelet[2531]: E1009 07:26:46.471958 2531 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Oct 9 07:26:46.493330 kubelet[2531]: E1009 07:26:46.493273 2531 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef749375-4810-46c3-86b1-6b05a35d89f5-calico-apiserver-certs podName:ef749375-4810-46c3-86b1-6b05a35d89f5 nodeName:}" failed. No retries permitted until 2024-10-09 07:26:46.972101487 +0000 UTC m=+65.832650279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ef749375-4810-46c3-86b1-6b05a35d89f5-calico-apiserver-certs") pod "calico-apiserver-87cb8f7c8-khw5c" (UID: "ef749375-4810-46c3-86b1-6b05a35d89f5") : secret "calico-apiserver-certs" not found
Oct 9 07:26:47.217085 containerd[1477]: time="2024-10-09T07:26:47.216210185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cb8f7c8-khw5c,Uid:ef749375-4810-46c3-86b1-6b05a35d89f5,Namespace:calico-apiserver,Attempt:0,}"
Oct 9 07:26:47.532639 systemd-networkd[1375]: calib4389ba5249: Link UP
Oct 9 07:26:47.532990 systemd-networkd[1375]: calib4389ba5249: Gained carrier
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.321 [INFO][4808] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0 calico-apiserver-87cb8f7c8- calico-apiserver ef749375-4810-46c3-86b1-6b05a35d89f5 1044 0 2024-10-09 07:26:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:87cb8f7c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.2-3-9020298c9e calico-apiserver-87cb8f7c8-khw5c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib4389ba5249 [] []}} ContainerID="65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" Namespace="calico-apiserver" Pod="calico-apiserver-87cb8f7c8-khw5c" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.321 [INFO][4808] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" Namespace="calico-apiserver" Pod="calico-apiserver-87cb8f7c8-khw5c" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.410 [INFO][4815] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" HandleID="k8s-pod-network.65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.444 [INFO][4815] ipam_plugin.go 270: Auto assigning IP ContainerID="65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" HandleID="k8s-pod-network.65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366110), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.2-3-9020298c9e", "pod":"calico-apiserver-87cb8f7c8-khw5c", "timestamp":"2024-10-09 07:26:47.410545481 +0000 UTC"}, Hostname:"ci-3975.2.2-3-9020298c9e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.444 [INFO][4815] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.445 [INFO][4815] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.445 [INFO][4815] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.2-3-9020298c9e'
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.449 [INFO][4815] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" host="ci-3975.2.2-3-9020298c9e"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.464 [INFO][4815] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.2-3-9020298c9e"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.474 [INFO][4815] ipam.go 489: Trying affinity for 192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.480 [INFO][4815] ipam.go 155: Attempting to load block cidr=192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.486 [INFO][4815] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.60.64/26 host="ci-3975.2.2-3-9020298c9e"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.486 [INFO][4815] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.60.64/26 handle="k8s-pod-network.65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" host="ci-3975.2.2-3-9020298c9e"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.493 [INFO][4815] ipam.go 1685: Creating new handle: k8s-pod-network.65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.508 [INFO][4815] ipam.go 1203: Writing block in order to claim IPs block=192.168.60.64/26 handle="k8s-pod-network.65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" host="ci-3975.2.2-3-9020298c9e"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.521 [INFO][4815] ipam.go 1216: Successfully claimed IPs: [192.168.60.69/26] block=192.168.60.64/26 handle="k8s-pod-network.65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" host="ci-3975.2.2-3-9020298c9e"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.521 [INFO][4815] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.60.69/26] handle="k8s-pod-network.65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" host="ci-3975.2.2-3-9020298c9e"
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.521 [INFO][4815] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 07:26:47.565532 containerd[1477]: 2024-10-09 07:26:47.521 [INFO][4815] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.60.69/26] IPv6=[] ContainerID="65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" HandleID="k8s-pod-network.65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" Workload="ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0"
Oct 9 07:26:47.570936 containerd[1477]: 2024-10-09 07:26:47.525 [INFO][4808] k8s.go 386: Populated endpoint ContainerID="65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" Namespace="calico-apiserver" Pod="calico-apiserver-87cb8f7c8-khw5c" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0", GenerateName:"calico-apiserver-87cb8f7c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef749375-4810-46c3-86b1-6b05a35d89f5", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87cb8f7c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"", Pod:"calico-apiserver-87cb8f7c8-khw5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4389ba5249", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:26:47.570936 containerd[1477]: 2024-10-09 07:26:47.525 [INFO][4808] k8s.go 387: Calico CNI using IPs: [192.168.60.69/32] ContainerID="65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" Namespace="calico-apiserver" Pod="calico-apiserver-87cb8f7c8-khw5c" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0"
Oct 9 07:26:47.570936 containerd[1477]: 2024-10-09 07:26:47.525 [INFO][4808] dataplane_linux.go 68: Setting the host side veth name to calib4389ba5249 ContainerID="65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" Namespace="calico-apiserver" Pod="calico-apiserver-87cb8f7c8-khw5c" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0"
Oct 9 07:26:47.570936 containerd[1477]: 2024-10-09 07:26:47.532 [INFO][4808] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" Namespace="calico-apiserver" Pod="calico-apiserver-87cb8f7c8-khw5c" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0"
Oct 9 07:26:47.570936 containerd[1477]: 2024-10-09 07:26:47.533 [INFO][4808] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" Namespace="calico-apiserver" Pod="calico-apiserver-87cb8f7c8-khw5c" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0", GenerateName:"calico-apiserver-87cb8f7c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef749375-4810-46c3-86b1-6b05a35d89f5", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 26, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"87cb8f7c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.2-3-9020298c9e", ContainerID:"65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55", Pod:"calico-apiserver-87cb8f7c8-khw5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.60.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4389ba5249", MAC:"f2:b6:57:08:dc:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 07:26:47.570936 containerd[1477]: 2024-10-09 07:26:47.556 [INFO][4808] k8s.go 500: Wrote updated endpoint to datastore ContainerID="65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55" Namespace="calico-apiserver" Pod="calico-apiserver-87cb8f7c8-khw5c" WorkloadEndpoint="ci--3975.2.2--3--9020298c9e-k8s-calico--apiserver--87cb8f7c8--khw5c-eth0"
Oct 9 07:26:47.647618 containerd[1477]: time="2024-10-09T07:26:47.647237621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 07:26:47.647618 containerd[1477]: time="2024-10-09T07:26:47.647338250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:26:47.647618 containerd[1477]: time="2024-10-09T07:26:47.647360035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 07:26:47.647618 containerd[1477]: time="2024-10-09T07:26:47.647379492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 07:26:47.687376 systemd[1]: Started cri-containerd-65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55.scope - libcontainer container 65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55.
Oct 9 07:26:47.767854 containerd[1477]: time="2024-10-09T07:26:47.767796493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-87cb8f7c8-khw5c,Uid:ef749375-4810-46c3-86b1-6b05a35d89f5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55\""
Oct 9 07:26:47.769949 containerd[1477]: time="2024-10-09T07:26:47.769900950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 9 07:26:48.551310 systemd-networkd[1375]: calib4389ba5249: Gained IPv6LL
Oct 9 07:26:50.077339 containerd[1477]: time="2024-10-09T07:26:50.077244734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:26:50.079149 containerd[1477]: time="2024-10-09T07:26:50.079038447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 9 07:26:50.081904 containerd[1477]: time="2024-10-09T07:26:50.081843889Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:26:50.091213 containerd[1477]: time="2024-10-09T07:26:50.091112076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 07:26:50.093812 containerd[1477]: time="2024-10-09T07:26:50.093733349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.323766271s"
Oct 9 07:26:50.093812 containerd[1477]: time="2024-10-09T07:26:50.093792457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 9 07:26:50.097415 containerd[1477]: time="2024-10-09T07:26:50.097304200Z" level=info msg="CreateContainer within sandbox \"65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 9 07:26:50.121639 containerd[1477]: time="2024-10-09T07:26:50.121557121Z" level=info msg="CreateContainer within sandbox \"65f9951aec90cf42fc5d863169beb19e0e98c9235fcdbd1dbd3e03100ff0de55\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3bcfbd3e7d9cf70020cd7c7b256cbdfb4bca69eb7c35f3a14953dbbce9ba21bb\""
Oct 9 07:26:50.123781 containerd[1477]: time="2024-10-09T07:26:50.123316538Z" level=info msg="StartContainer for \"3bcfbd3e7d9cf70020cd7c7b256cbdfb4bca69eb7c35f3a14953dbbce9ba21bb\""
Oct 9 07:26:50.188410 systemd[1]: Started cri-containerd-3bcfbd3e7d9cf70020cd7c7b256cbdfb4bca69eb7c35f3a14953dbbce9ba21bb.scope - libcontainer container 3bcfbd3e7d9cf70020cd7c7b256cbdfb4bca69eb7c35f3a14953dbbce9ba21bb.
Oct 9 07:26:50.270320 containerd[1477]: time="2024-10-09T07:26:50.269507369Z" level=info msg="StartContainer for \"3bcfbd3e7d9cf70020cd7c7b256cbdfb4bca69eb7c35f3a14953dbbce9ba21bb\" returns successfully"
Oct 9 07:26:50.368645 systemd[1]: Started sshd@14-209.38.154.162:22-147.75.109.163:51290.service - OpenSSH per-connection server daemon (147.75.109.163:51290).
Oct 9 07:26:50.495850 sshd[4925]: Accepted publickey for core from 147.75.109.163 port 51290 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:26:50.500797 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:26:50.512411 systemd-logind[1452]: New session 15 of user core.
Oct 9 07:26:50.518349 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 9 07:26:50.992550 sshd[4925]: pam_unix(sshd:session): session closed for user core
Oct 9 07:26:51.001589 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit.
Oct 9 07:26:51.004431 systemd[1]: sshd@14-209.38.154.162:22-147.75.109.163:51290.service: Deactivated successfully.
Oct 9 07:26:51.008848 systemd[1]: session-15.scope: Deactivated successfully.
Oct 9 07:26:51.011594 systemd-logind[1452]: Removed session 15.
Oct 9 07:26:51.111546 systemd[1]: run-containerd-runc-k8s.io-98fab0cb6c20a82f8cc2ccadb9d09ef02f494e4506fc473fb1c43dba758fbe28-runc.b3ucfF.mount: Deactivated successfully.
Oct 9 07:26:51.347157 kubelet[2531]: I1009 07:26:51.347011 2531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-87cb8f7c8-khw5c" podStartSLOduration=3.022449483 podStartE2EDuration="5.346962837s" podCreationTimestamp="2024-10-09 07:26:46 +0000 UTC" firstStartedPulling="2024-10-09 07:26:47.769600546 +0000 UTC m=+66.630149319" lastFinishedPulling="2024-10-09 07:26:50.094113891 +0000 UTC m=+68.954662673" observedRunningTime="2024-10-09 07:26:50.928519582 +0000 UTC m=+69.789068381" watchObservedRunningTime="2024-10-09 07:26:51.346962837 +0000 UTC m=+70.207511631"
Oct 9 07:26:56.012569 systemd[1]: Started sshd@15-209.38.154.162:22-147.75.109.163:51294.service - OpenSSH per-connection server daemon (147.75.109.163:51294).
Oct 9 07:26:56.097118 sshd[4970]: Accepted publickey for core from 147.75.109.163 port 51294 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:26:56.098389 sshd[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:26:56.105509 systemd-logind[1452]: New session 16 of user core.
Oct 9 07:26:56.113354 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 9 07:26:56.327734 sshd[4970]: pam_unix(sshd:session): session closed for user core
Oct 9 07:26:56.338957 systemd[1]: sshd@15-209.38.154.162:22-147.75.109.163:51294.service: Deactivated successfully.
Oct 9 07:26:56.342404 systemd[1]: session-16.scope: Deactivated successfully.
Oct 9 07:26:56.347191 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit.
Oct 9 07:26:56.353294 systemd[1]: Started sshd@16-209.38.154.162:22-147.75.109.163:51298.service - OpenSSH per-connection server daemon (147.75.109.163:51298).
Oct 9 07:26:56.355853 systemd-logind[1452]: Removed session 16.
Oct 9 07:26:56.360941 kubelet[2531]: E1009 07:26:56.360904 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:26:56.437532 sshd[4983]: Accepted publickey for core from 147.75.109.163 port 51298 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:26:56.440023 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:26:56.447057 systemd-logind[1452]: New session 17 of user core.
Oct 9 07:26:56.454371 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 9 07:26:56.805858 sshd[4983]: pam_unix(sshd:session): session closed for user core
Oct 9 07:26:56.817857 systemd[1]: sshd@16-209.38.154.162:22-147.75.109.163:51298.service: Deactivated successfully.
Oct 9 07:26:56.820743 systemd[1]: session-17.scope: Deactivated successfully.
Oct 9 07:26:56.824554 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit.
Oct 9 07:26:56.837462 systemd[1]: Started sshd@17-209.38.154.162:22-147.75.109.163:51306.service - OpenSSH per-connection server daemon (147.75.109.163:51306).
Oct 9 07:26:56.841316 systemd-logind[1452]: Removed session 17.
Oct 9 07:26:56.898977 sshd[4999]: Accepted publickey for core from 147.75.109.163 port 51306 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:26:56.902090 sshd[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:26:56.908968 systemd-logind[1452]: New session 18 of user core.
Oct 9 07:26:56.914511 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 9 07:26:59.059263 sshd[4999]: pam_unix(sshd:session): session closed for user core
Oct 9 07:26:59.075229 systemd[1]: sshd@17-209.38.154.162:22-147.75.109.163:51306.service: Deactivated successfully.
Oct 9 07:26:59.079186 systemd[1]: session-18.scope: Deactivated successfully.
Oct 9 07:26:59.081381 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Oct 9 07:26:59.091888 systemd[1]: Started sshd@18-209.38.154.162:22-147.75.109.163:37406.service - OpenSSH per-connection server daemon (147.75.109.163:37406).
Oct 9 07:26:59.098239 systemd-logind[1452]: Removed session 18.
Oct 9 07:26:59.186618 sshd[5019]: Accepted publickey for core from 147.75.109.163 port 37406 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:26:59.190204 sshd[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:26:59.197035 systemd-logind[1452]: New session 19 of user core.
Oct 9 07:26:59.203409 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 9 07:26:59.373945 kubelet[2531]: E1009 07:26:59.373810 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:26:59.882518 sshd[5019]: pam_unix(sshd:session): session closed for user core
Oct 9 07:26:59.896626 systemd[1]: sshd@18-209.38.154.162:22-147.75.109.163:37406.service: Deactivated successfully.
Oct 9 07:26:59.901501 systemd[1]: session-19.scope: Deactivated successfully.
Oct 9 07:26:59.905242 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit.
Oct 9 07:26:59.918987 systemd[1]: Started sshd@19-209.38.154.162:22-147.75.109.163:37418.service - OpenSSH per-connection server daemon (147.75.109.163:37418).
Oct 9 07:26:59.919824 systemd-logind[1452]: Removed session 19.
Oct 9 07:26:59.969772 sshd[5031]: Accepted publickey for core from 147.75.109.163 port 37418 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:26:59.972157 sshd[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:26:59.977734 systemd-logind[1452]: New session 20 of user core.
Oct 9 07:26:59.984328 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 9 07:27:00.158805 sshd[5031]: pam_unix(sshd:session): session closed for user core
Oct 9 07:27:00.163665 systemd[1]: sshd@19-209.38.154.162:22-147.75.109.163:37418.service: Deactivated successfully.
Oct 9 07:27:00.167616 systemd[1]: session-20.scope: Deactivated successfully.
Oct 9 07:27:00.171299 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit.
Oct 9 07:27:00.174207 systemd-logind[1452]: Removed session 20.
Oct 9 07:27:05.183545 systemd[1]: Started sshd@20-209.38.154.162:22-147.75.109.163:37426.service - OpenSSH per-connection server daemon (147.75.109.163:37426).
Oct 9 07:27:05.243233 sshd[5067]: Accepted publickey for core from 147.75.109.163 port 37426 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:27:05.245715 sshd[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:27:05.257677 systemd-logind[1452]: New session 21 of user core.
Oct 9 07:27:05.263583 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 9 07:27:05.604163 sshd[5067]: pam_unix(sshd:session): session closed for user core
Oct 9 07:27:05.610571 systemd[1]: sshd@20-209.38.154.162:22-147.75.109.163:37426.service: Deactivated successfully.
Oct 9 07:27:05.616685 systemd[1]: session-21.scope: Deactivated successfully.
Oct 9 07:27:05.623667 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
Oct 9 07:27:05.626565 systemd-logind[1452]: Removed session 21.
Oct 9 07:27:10.626641 systemd[1]: Started sshd@21-209.38.154.162:22-147.75.109.163:60744.service - OpenSSH per-connection server daemon (147.75.109.163:60744).
Oct 9 07:27:10.746966 sshd[5097]: Accepted publickey for core from 147.75.109.163 port 60744 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:27:10.748092 sshd[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:27:10.761478 systemd-logind[1452]: New session 22 of user core.
Oct 9 07:27:10.767361 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 9 07:27:11.017665 sshd[5097]: pam_unix(sshd:session): session closed for user core
Oct 9 07:27:11.026675 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
Oct 9 07:27:11.026766 systemd[1]: sshd@21-209.38.154.162:22-147.75.109.163:60744.service: Deactivated successfully.
Oct 9 07:27:11.030270 systemd[1]: session-22.scope: Deactivated successfully.
Oct 9 07:27:11.033956 systemd-logind[1452]: Removed session 22.
Oct 9 07:27:14.364386 kubelet[2531]: E1009 07:27:14.364339 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:27:16.037658 systemd[1]: Started sshd@22-209.38.154.162:22-147.75.109.163:60746.service - OpenSSH per-connection server daemon (147.75.109.163:60746).
Oct 9 07:27:16.135110 sshd[5129]: Accepted publickey for core from 147.75.109.163 port 60746 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:27:16.136707 sshd[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:27:16.143432 systemd-logind[1452]: New session 23 of user core.
Oct 9 07:27:16.155434 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 9 07:27:16.361359 kubelet[2531]: E1009 07:27:16.360765 2531 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 07:27:16.488580 sshd[5129]: pam_unix(sshd:session): session closed for user core
Oct 9 07:27:16.494563 systemd[1]: sshd@22-209.38.154.162:22-147.75.109.163:60746.service: Deactivated successfully.
Oct 9 07:27:16.497965 systemd[1]: session-23.scope: Deactivated successfully.
Oct 9 07:27:16.499733 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
Oct 9 07:27:16.500771 systemd-logind[1452]: Removed session 23.
Oct 9 07:27:21.508494 systemd[1]: Started sshd@23-209.38.154.162:22-147.75.109.163:41772.service - OpenSSH per-connection server daemon (147.75.109.163:41772).
Oct 9 07:27:21.567266 sshd[5147]: Accepted publickey for core from 147.75.109.163 port 41772 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:27:21.569500 sshd[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:27:21.575989 systemd-logind[1452]: New session 24 of user core.
Oct 9 07:27:21.581286 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 9 07:27:21.746820 sshd[5147]: pam_unix(sshd:session): session closed for user core
Oct 9 07:27:21.753482 systemd[1]: sshd@23-209.38.154.162:22-147.75.109.163:41772.service: Deactivated successfully.
Oct 9 07:27:21.757682 systemd[1]: session-24.scope: Deactivated successfully.
Oct 9 07:27:21.759027 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit.
Oct 9 07:27:21.760799 systemd-logind[1452]: Removed session 24.
Oct 9 07:27:26.767472 systemd[1]: Started sshd@24-209.38.154.162:22-147.75.109.163:41782.service - OpenSSH per-connection server daemon (147.75.109.163:41782).
Oct 9 07:27:26.822739 sshd[5162]: Accepted publickey for core from 147.75.109.163 port 41782 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:27:26.823569 sshd[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:27:26.828796 systemd-logind[1452]: New session 25 of user core.
Oct 9 07:27:26.832258 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 9 07:27:27.034079 sshd[5162]: pam_unix(sshd:session): session closed for user core
Oct 9 07:27:27.045704 systemd[1]: sshd@24-209.38.154.162:22-147.75.109.163:41782.service: Deactivated successfully.
Oct 9 07:27:27.050023 systemd[1]: session-25.scope: Deactivated successfully.
Oct 9 07:27:27.051034 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit.
Oct 9 07:27:27.052220 systemd-logind[1452]: Removed session 25.
Oct 9 07:27:32.056554 systemd[1]: Started sshd@25-209.38.154.162:22-147.75.109.163:50960.service - OpenSSH per-connection server daemon (147.75.109.163:50960).
Oct 9 07:27:32.135052 sshd[5184]: Accepted publickey for core from 147.75.109.163 port 50960 ssh2: RSA SHA256:OOTuok04LPMhCB4st0aqyl5Dfz9DReS3qIQDSGH1S/w
Oct 9 07:27:32.139960 sshd[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 9 07:27:32.152175 systemd-logind[1452]: New session 26 of user core.
Oct 9 07:27:32.157278 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 9 07:27:32.313329 sshd[5184]: pam_unix(sshd:session): session closed for user core
Oct 9 07:27:32.318576 systemd[1]: sshd@25-209.38.154.162:22-147.75.109.163:50960.service: Deactivated successfully.
Oct 9 07:27:32.321897 systemd[1]: session-26.scope: Deactivated successfully.
Oct 9 07:27:32.322936 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit.
Oct 9 07:27:32.324307 systemd-logind[1452]: Removed session 26.
Oct 9 07:27:34.135668 systemd[1]: run-containerd-runc-k8s.io-883e114a4b746e2c085b677095b0765346719b74375e5b207c133db16fb58aae-runc.zhc4iA.mount: Deactivated successfully.