Oct 9 07:52:06.043066 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024
Oct 9 07:52:06.043118 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 9 07:52:06.043144 kernel: BIOS-provided physical RAM map:
Oct 9 07:52:06.043161 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 07:52:06.043175 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 07:52:06.043193 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 07:52:06.043213 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Oct 9 07:52:06.043229 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Oct 9 07:52:06.043245 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 07:52:06.043349 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 07:52:06.043367 kernel: NX (Execute Disable) protection: active
Oct 9 07:52:06.043383 kernel: APIC: Static calls initialized
Oct 9 07:52:06.043399 kernel: SMBIOS 2.8 present.
Oct 9 07:52:06.043416 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Oct 9 07:52:06.043437 kernel: Hypervisor detected: KVM
Oct 9 07:52:06.043459 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 07:52:06.043478 kernel: kvm-clock: using sched offset of 3787325902 cycles
Oct 9 07:52:06.043502 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 07:52:06.043521 kernel: tsc: Detected 2294.608 MHz processor
Oct 9 07:52:06.043540 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 07:52:06.043558 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 07:52:06.043577 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Oct 9 07:52:06.043595 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 07:52:06.043614 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 07:52:06.043636 kernel: ACPI: Early table checksum verification disabled
Oct 9 07:52:06.043657 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Oct 9 07:52:06.043671 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:06.043689 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:06.043708 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:06.043726 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 9 07:52:06.043769 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:06.043782 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:06.043795 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:06.043813 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 07:52:06.043829 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Oct 9 07:52:06.043841 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Oct 9 07:52:06.043852 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 9 07:52:06.043864 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Oct 9 07:52:06.043875 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Oct 9 07:52:06.043888 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Oct 9 07:52:06.043913 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Oct 9 07:52:06.043931 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 9 07:52:06.043945 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 9 07:52:06.043962 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 9 07:52:06.043987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 9 07:52:06.044010 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Oct 9 07:52:06.044030 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Oct 9 07:52:06.044059 kernel: Zone ranges:
Oct 9 07:52:06.044086 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 07:52:06.044106 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Oct 9 07:52:06.044126 kernel: Normal empty
Oct 9 07:52:06.044154 kernel: Movable zone start for each node
Oct 9 07:52:06.044174 kernel: Early memory node ranges
Oct 9 07:52:06.044193 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 07:52:06.044213 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Oct 9 07:52:06.044232 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Oct 9 07:52:06.044257 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 07:52:06.044277 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 07:52:06.044297 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Oct 9 07:52:06.044317 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 07:52:06.044336 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 07:52:06.044356 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 07:52:06.044376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 07:52:06.044400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 07:52:06.044425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 07:52:06.044450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 07:52:06.044469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 07:52:06.044489 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 07:52:06.044513 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 07:52:06.044533 kernel: TSC deadline timer available
Oct 9 07:52:06.044552 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 9 07:52:06.044572 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 07:52:06.044592 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 9 07:52:06.044605 kernel: Booting paravirtualized kernel on KVM
Oct 9 07:52:06.044634 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 07:52:06.044654 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 9 07:52:06.044674 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 9 07:52:06.044694 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 9 07:52:06.044708 kernel: pcpu-alloc: [0] 0 1
Oct 9 07:52:06.044731 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 9 07:52:06.045864 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 9 07:52:06.045889 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 07:52:06.045918 kernel: random: crng init done
Oct 9 07:52:06.045938 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 07:52:06.045957 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 9 07:52:06.045977 kernel: Fallback order for Node 0: 0
Oct 9 07:52:06.045997 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Oct 9 07:52:06.046017 kernel: Policy zone: DMA32
Oct 9 07:52:06.046037 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 07:52:06.046057 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 125148K reserved, 0K cma-reserved)
Oct 9 07:52:06.046077 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 9 07:52:06.046101 kernel: Kernel/User page tables isolation: enabled
Oct 9 07:52:06.046121 kernel: ftrace: allocating 37784 entries in 148 pages
Oct 9 07:52:06.046141 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 07:52:06.046161 kernel: Dynamic Preempt: voluntary
Oct 9 07:52:06.046181 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 07:52:06.046210 kernel: rcu: RCU event tracing is enabled.
Oct 9 07:52:06.046230 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 9 07:52:06.046250 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 07:52:06.046270 kernel: Rude variant of Tasks RCU enabled.
Oct 9 07:52:06.046295 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 07:52:06.046315 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 07:52:06.046335 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 9 07:52:06.046355 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 9 07:52:06.046375 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 07:52:06.046395 kernel: Console: colour VGA+ 80x25
Oct 9 07:52:06.046415 kernel: printk: console [tty0] enabled
Oct 9 07:52:06.046435 kernel: printk: console [ttyS0] enabled
Oct 9 07:52:06.046455 kernel: ACPI: Core revision 20230628
Oct 9 07:52:06.046479 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 07:52:06.046504 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 07:52:06.046524 kernel: x2apic enabled
Oct 9 07:52:06.046544 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 07:52:06.046563 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 07:52:06.046583 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Oct 9 07:52:06.046627 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Oct 9 07:52:06.046647 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 9 07:52:06.046668 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 9 07:52:06.046709 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 07:52:06.046730 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 07:52:06.046768 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 07:52:06.046794 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 07:52:06.046815 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 9 07:52:06.046840 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 07:52:06.046877 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 07:52:06.046899 kernel: MDS: Mitigation: Clear CPU buffers
Oct 9 07:52:06.046921 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 9 07:52:06.046977 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 07:52:06.047011 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 07:52:06.047032 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 07:52:06.047054 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 07:52:06.047075 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 9 07:52:06.047096 kernel: Freeing SMP alternatives memory: 32K
Oct 9 07:52:06.047118 kernel: pid_max: default: 32768 minimum: 301
Oct 9 07:52:06.047139 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 07:52:06.047165 kernel: landlock: Up and running.
Oct 9 07:52:06.047200 kernel: SELinux: Initializing.
Oct 9 07:52:06.047222 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:52:06.047243 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 07:52:06.047264 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Oct 9 07:52:06.047285 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:52:06.047422 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:52:06.047444 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 07:52:06.047471 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Oct 9 07:52:06.047492 kernel: signal: max sigframe size: 1776
Oct 9 07:52:06.047513 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 07:52:06.047535 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 07:52:06.047560 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 9 07:52:06.047581 kernel: smp: Bringing up secondary CPUs ...
Oct 9 07:52:06.047607 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 07:52:06.047635 kernel: .... node #0, CPUs: #1
Oct 9 07:52:06.047662 kernel: smp: Brought up 1 node, 2 CPUs
Oct 9 07:52:06.047689 kernel: smpboot: Max logical packages: 1
Oct 9 07:52:06.047721 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Oct 9 07:52:06.047761 kernel: devtmpfs: initialized
Oct 9 07:52:06.047787 kernel: x86/mm: Memory block size: 128MB
Oct 9 07:52:06.047812 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 07:52:06.047838 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 9 07:52:06.047864 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 07:52:06.047890 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 07:52:06.047914 kernel: audit: initializing netlink subsys (disabled)
Oct 9 07:52:06.047936 kernel: audit: type=2000 audit(1728460325.002:1): state=initialized audit_enabled=0 res=1
Oct 9 07:52:06.047964 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 07:52:06.047985 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 07:52:06.048007 kernel: cpuidle: using governor menu
Oct 9 07:52:06.048028 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 07:52:06.048053 kernel: dca service started, version 1.12.1
Oct 9 07:52:06.048078 kernel: PCI: Using configuration type 1 for base access
Oct 9 07:52:06.048106 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 07:52:06.048128 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 07:52:06.048152 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 07:52:06.048179 kernel: ACPI: Added _OSI(Module Device)
Oct 9 07:52:06.048204 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 07:52:06.048232 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 07:52:06.048256 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 07:52:06.048281 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 07:52:06.048306 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 07:52:06.048332 kernel: ACPI: Interpreter enabled
Oct 9 07:52:06.048353 kernel: ACPI: PM: (supports S0 S5)
Oct 9 07:52:06.048374 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 07:52:06.048401 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 07:52:06.048422 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 07:52:06.048443 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 9 07:52:06.048464 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 07:52:06.048814 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 07:52:06.049014 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 9 07:52:06.049165 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 9 07:52:06.049197 kernel: acpiphp: Slot [3] registered
Oct 9 07:52:06.049219 kernel: acpiphp: Slot [4] registered
Oct 9 07:52:06.049241 kernel: acpiphp: Slot [5] registered
Oct 9 07:52:06.049262 kernel: acpiphp: Slot [6] registered
Oct 9 07:52:06.049284 kernel: acpiphp: Slot [7] registered
Oct 9 07:52:06.049305 kernel: acpiphp: Slot [8] registered
Oct 9 07:52:06.049326 kernel: acpiphp: Slot [9] registered
Oct 9 07:52:06.049347 kernel: acpiphp: Slot [10] registered
Oct 9 07:52:06.049371 kernel: acpiphp: Slot [11] registered
Oct 9 07:52:06.049404 kernel: acpiphp: Slot [12] registered
Oct 9 07:52:06.049430 kernel: acpiphp: Slot [13] registered
Oct 9 07:52:06.049457 kernel: acpiphp: Slot [14] registered
Oct 9 07:52:06.049483 kernel: acpiphp: Slot [15] registered
Oct 9 07:52:06.049511 kernel: acpiphp: Slot [16] registered
Oct 9 07:52:06.049540 kernel: acpiphp: Slot [17] registered
Oct 9 07:52:06.049566 kernel: acpiphp: Slot [18] registered
Oct 9 07:52:06.049593 kernel: acpiphp: Slot [19] registered
Oct 9 07:52:06.049620 kernel: acpiphp: Slot [20] registered
Oct 9 07:52:06.049645 kernel: acpiphp: Slot [21] registered
Oct 9 07:52:06.049674 kernel: acpiphp: Slot [22] registered
Oct 9 07:52:06.049695 kernel: acpiphp: Slot [23] registered
Oct 9 07:52:06.049716 kernel: acpiphp: Slot [24] registered
Oct 9 07:52:06.049760 kernel: acpiphp: Slot [25] registered
Oct 9 07:52:06.049779 kernel: acpiphp: Slot [26] registered
Oct 9 07:52:06.049802 kernel: acpiphp: Slot [27] registered
Oct 9 07:52:06.049823 kernel: acpiphp: Slot [28] registered
Oct 9 07:52:06.049844 kernel: acpiphp: Slot [29] registered
Oct 9 07:52:06.049867 kernel: acpiphp: Slot [30] registered
Oct 9 07:52:06.049896 kernel: acpiphp: Slot [31] registered
Oct 9 07:52:06.049918 kernel: PCI host bridge to bus 0000:00
Oct 9 07:52:06.050156 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 07:52:06.050299 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 07:52:06.050453 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 07:52:06.050586 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 9 07:52:06.050717 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 9 07:52:06.050894 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 07:52:06.051132 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 9 07:52:06.051300 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 9 07:52:06.051499 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 9 07:52:06.051731 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Oct 9 07:52:06.051919 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 9 07:52:06.052115 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 9 07:52:06.052295 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 9 07:52:06.052466 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 9 07:52:06.052688 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Oct 9 07:52:06.052975 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Oct 9 07:52:06.053167 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 9 07:52:06.053361 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 9 07:52:06.053528 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 9 07:52:06.053694 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 9 07:52:06.054698 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 9 07:52:06.054992 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 9 07:52:06.055142 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Oct 9 07:52:06.055309 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Oct 9 07:52:06.055471 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 07:52:06.055753 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:52:06.055943 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Oct 9 07:52:06.056182 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Oct 9 07:52:06.056359 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 9 07:52:06.056574 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 07:52:06.056856 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Oct 9 07:52:06.057028 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Oct 9 07:52:06.057185 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 9 07:52:06.057371 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Oct 9 07:52:06.057616 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Oct 9 07:52:06.057804 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Oct 9 07:52:06.057967 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 9 07:52:06.058163 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:52:06.058500 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Oct 9 07:52:06.058660 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Oct 9 07:52:06.058948 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 9 07:52:06.059170 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Oct 9 07:52:06.059358 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Oct 9 07:52:06.059556 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Oct 9 07:52:06.059715 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 9 07:52:06.059925 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Oct 9 07:52:06.060116 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Oct 9 07:52:06.060323 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Oct 9 07:52:06.060353 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 07:52:06.060374 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 07:52:06.060396 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 07:52:06.060417 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 07:52:06.060451 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 9 07:52:06.060472 kernel: iommu: Default domain type: Translated
Oct 9 07:52:06.060487 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 07:52:06.060510 kernel: PCI: Using ACPI for IRQ routing
Oct 9 07:52:06.060532 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 07:52:06.060553 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 07:52:06.060574 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Oct 9 07:52:06.060759 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 9 07:52:06.060968 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 9 07:52:06.061138 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 07:52:06.061164 kernel: vgaarb: loaded
Oct 9 07:52:06.061186 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 07:52:06.061208 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 07:52:06.061229 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 07:52:06.061251 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 07:52:06.061275 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 07:52:06.061298 kernel: pnp: PnP ACPI init
Oct 9 07:52:06.061326 kernel: pnp: PnP ACPI: found 4 devices
Oct 9 07:52:06.061351 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 07:52:06.061367 kernel: NET: Registered PF_INET protocol family
Oct 9 07:52:06.061390 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 07:52:06.061412 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 9 07:52:06.061441 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 07:52:06.061459 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 9 07:52:06.061473 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 9 07:52:06.061486 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 9 07:52:06.061506 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:52:06.061536 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 07:52:06.061558 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 07:52:06.061589 kernel: NET: Registered PF_XDP protocol family
Oct 9 07:52:06.061807 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 07:52:06.061979 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 07:52:06.062124 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 07:52:06.062259 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 9 07:52:06.062391 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 9 07:52:06.062601 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 9 07:52:06.062805 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 9 07:52:06.062835 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 9 07:52:06.063093 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 34688 usecs
Oct 9 07:52:06.063124 kernel: PCI: CLS 0 bytes, default 64
Oct 9 07:52:06.063155 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 9 07:52:06.063172 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Oct 9 07:52:06.063191 kernel: Initialise system trusted keyrings
Oct 9 07:52:06.063221 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 9 07:52:06.063243 kernel: Key type asymmetric registered
Oct 9 07:52:06.063264 kernel: Asymmetric key parser 'x509' registered
Oct 9 07:52:06.063285 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 07:52:06.063306 kernel: io scheduler mq-deadline registered
Oct 9 07:52:06.063328 kernel: io scheduler kyber registered
Oct 9 07:52:06.063350 kernel: io scheduler bfq registered
Oct 9 07:52:06.063371 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 07:52:06.063393 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 9 07:52:06.063418 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 9 07:52:06.063440 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 9 07:52:06.063461 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 07:52:06.063482 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 07:52:06.063506 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 07:52:06.063522 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 07:52:06.063538 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 07:52:06.063850 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 9 07:52:06.063884 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 07:52:06.064101 kernel: rtc_cmos 00:03: registered as rtc0
Oct 9 07:52:06.064252 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T07:52:05 UTC (1728460325)
Oct 9 07:52:06.064390 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 9 07:52:06.064416 kernel: intel_pstate: CPU model not supported
Oct 9 07:52:06.064437 kernel: NET: Registered PF_INET6 protocol family
Oct 9 07:52:06.064458 kernel: Segment Routing with IPv6
Oct 9 07:52:06.064480 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 07:52:06.064501 kernel: NET: Registered PF_PACKET protocol family
Oct 9 07:52:06.064529 kernel: Key type dns_resolver registered
Oct 9 07:52:06.064551 kernel: IPI shorthand broadcast: enabled
Oct 9 07:52:06.064572 kernel: sched_clock: Marking stable (1140003613, 176878115)->(1366306219, -49424491)
Oct 9 07:52:06.064593 kernel: registered taskstats version 1
Oct 9 07:52:06.064621 kernel: Loading compiled-in X.509 certificates
Oct 9 07:52:06.064639 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f'
Oct 9 07:52:06.064655 kernel: Key type .fscrypt registered
Oct 9 07:52:06.064677 kernel: Key type fscrypt-provisioning registered
Oct 9 07:52:06.064699 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 07:52:06.064726 kernel: ima: Allocated hash algorithm: sha1
Oct 9 07:52:06.064768 kernel: ima: No architecture policies found
Oct 9 07:52:06.064790 kernel: clk: Disabling unused clocks
Oct 9 07:52:06.064812 kernel: Freeing unused kernel image (initmem) memory: 42828K
Oct 9 07:52:06.064834 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 07:52:06.064889 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K
Oct 9 07:52:06.064916 kernel: Run /init as init process
Oct 9 07:52:06.064942 kernel: with arguments:
Oct 9 07:52:06.064970 kernel: /init
Oct 9 07:52:06.064996 kernel: with environment:
Oct 9 07:52:06.065018 kernel: HOME=/
Oct 9 07:52:06.065040 kernel: TERM=linux
Oct 9 07:52:06.065063 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 07:52:06.065090 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:52:06.065117 systemd[1]: Detected virtualization kvm.
Oct 9 07:52:06.065140 systemd[1]: Detected architecture x86-64.
Oct 9 07:52:06.065175 systemd[1]: Running in initrd.
Oct 9 07:52:06.065194 systemd[1]: No hostname configured, using default hostname.
Oct 9 07:52:06.065208 systemd[1]: Hostname set to .
Oct 9 07:52:06.065225 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:52:06.065251 systemd[1]: Queued start job for default target initrd.target.
Oct 9 07:52:06.065274 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:52:06.065297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:52:06.065330 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 07:52:06.065349 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:52:06.065380 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 07:52:06.065403 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 07:52:06.065430 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 07:52:06.065453 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 07:52:06.065479 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:52:06.065503 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:52:06.065531 systemd[1]: Reached target paths.target - Path Units.
Oct 9 07:52:06.065554 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:52:06.065578 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:52:06.065606 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 07:52:06.065631 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:52:06.065655 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:52:06.065686 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 07:52:06.065712 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 07:52:06.065731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:52:06.065956 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:52:06.065985 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:52:06.066002 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 07:52:06.066020 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 07:52:06.066040 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:52:06.066066 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 07:52:06.066085 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 07:52:06.066113 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:52:06.066143 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:52:06.066744 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:52:06.066814 systemd-journald[183]: Collecting audit messages is disabled.
Oct 9 07:52:06.066938 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 07:52:06.066960 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:52:06.066979 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 07:52:06.067013 systemd-journald[183]: Journal started
Oct 9 07:52:06.067072 systemd-journald[183]: Runtime Journal (/run/log/journal/0b13e6cd7680485c87be1b4b3aed077e) is 4.9M, max 39.3M, 34.4M free.
Oct 9 07:52:06.061086 systemd-modules-load[184]: Inserted module 'overlay'
Oct 9 07:52:06.078779 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:52:06.082785 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:52:06.114785 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 07:52:06.114083 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 07:52:06.174298 kernel: Bridge firewalling registered
Oct 9 07:52:06.115842 systemd-modules-load[184]: Inserted module 'br_netfilter'
Oct 9 07:52:06.173404 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:52:06.178757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:52:06.185503 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:52:06.197165 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:52:06.206111 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:52:06.209986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:52:06.214542 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 07:52:06.232185 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:52:06.243015 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 07:52:06.250394 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:52:06.259328 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:52:06.271176 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 07:52:06.296805 dracut-cmdline[219]: dracut-dracut-053
Oct 9 07:52:06.303777 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5
Oct 9 07:52:06.312953 systemd-resolved[212]: Positive Trust Anchors:
Oct 9 07:52:06.314116 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 07:52:06.314220 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 07:52:06.324475 systemd-resolved[212]: Defaulting to hostname 'linux'.
Oct 9 07:52:06.327880 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 07:52:06.329884 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:52:06.423816 kernel: SCSI subsystem initialized
Oct 9 07:52:06.436798 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 07:52:06.450804 kernel: iscsi: registered transport (tcp)
Oct 9 07:52:06.481108 kernel: iscsi: registered transport (qla4xxx)
Oct 9 07:52:06.481252 kernel: QLogic iSCSI HBA Driver
Oct 9 07:52:06.553065 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:52:06.561034 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 07:52:06.608395 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 07:52:06.608488 kernel: device-mapper: uevent: version 1.0.3
Oct 9 07:52:06.608511 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 07:52:06.677829 kernel: raid6: avx2x4 gen() 22340 MB/s
Oct 9 07:52:06.677914 kernel: raid6: avx2x2 gen() 24203 MB/s
Oct 9 07:52:06.698023 kernel: raid6: avx2x1 gen() 20649 MB/s
Oct 9 07:52:06.698108 kernel: raid6: using algorithm avx2x2 gen() 24203 MB/s
Oct 9 07:52:06.717143 kernel: raid6: .... xor() 18474 MB/s, rmw enabled
Oct 9 07:52:06.717241 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 07:52:06.742790 kernel: xor: automatically using best checksumming function avx
Oct 9 07:52:06.922785 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 07:52:06.940664 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:52:06.954043 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:52:06.969616 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Oct 9 07:52:06.976203 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:52:06.984954 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 07:52:07.009411 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Oct 9 07:52:07.052688 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:52:07.059997 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:52:07.120717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:52:07.130045 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 07:52:07.160090 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:52:07.162318 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:52:07.163844 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:52:07.165010 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:52:07.175015 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 07:52:07.202235 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:52:07.220784 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Oct 9 07:52:07.233679 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Oct 9 07:52:07.233969 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 07:52:07.255265 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 07:52:07.255336 kernel: GPT:9289727 != 125829119
Oct 9 07:52:07.255350 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 07:52:07.255363 kernel: scsi host0: Virtio SCSI HBA
Oct 9 07:52:07.255588 kernel: GPT:9289727 != 125829119
Oct 9 07:52:07.255612 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 07:52:07.255624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:52:07.269874 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 07:52:07.269954 kernel: AES CTR mode by8 optimization enabled
Oct 9 07:52:07.290314 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:52:07.291451 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:52:07.295701 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:52:07.298959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:52:07.299277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:52:07.303453 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:52:07.315768 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Oct 9 07:52:07.325045 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Oct 9 07:52:07.323680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 07:52:07.338782 kernel: libata version 3.00 loaded.
Oct 9 07:52:07.356821 kernel: ata_piix 0000:00:01.1: version 2.13
Oct 9 07:52:07.357865 kernel: scsi host1: ata_piix
Oct 9 07:52:07.358233 kernel: ACPI: bus type USB registered
Oct 9 07:52:07.358813 kernel: scsi host2: ata_piix
Oct 9 07:52:07.359608 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Oct 9 07:52:07.359641 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Oct 9 07:52:07.359658 kernel: usbcore: registered new interface driver usbfs
Oct 9 07:52:07.359688 kernel: usbcore: registered new interface driver hub
Oct 9 07:52:07.359704 kernel: usbcore: registered new device driver usb
Oct 9 07:52:07.396818 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (455)
Oct 9 07:52:07.398792 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (448)
Oct 9 07:52:07.423140 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 07:52:07.476253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:52:07.484962 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 07:52:07.490148 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 07:52:07.490946 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 07:52:07.497980 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 07:52:07.505009 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 07:52:07.508977 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 07:52:07.517512 disk-uuid[542]: Primary Header is updated.
Oct 9 07:52:07.517512 disk-uuid[542]: Secondary Entries is updated.
Oct 9 07:52:07.517512 disk-uuid[542]: Secondary Header is updated.
Oct 9 07:52:07.528803 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:52:07.542775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:52:07.545512 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:52:07.587775 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 9 07:52:07.596491 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 9 07:52:07.619801 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 9 07:52:07.624897 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Oct 9 07:52:07.625179 kernel: hub 1-0:1.0: USB hub found
Oct 9 07:52:07.627786 kernel: hub 1-0:1.0: 2 ports detected
Oct 9 07:52:08.539959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 07:52:08.540955 disk-uuid[543]: The operation has completed successfully.
Oct 9 07:52:08.589332 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 07:52:08.589467 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 07:52:08.600016 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 07:52:08.606966 sh[562]: Success
Oct 9 07:52:08.624956 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Oct 9 07:52:08.694605 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 07:52:08.703156 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 07:52:08.721710 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 07:52:08.741088 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec
Oct 9 07:52:08.741183 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:52:08.743314 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 07:52:08.745142 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 07:52:08.746977 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 07:52:08.758936 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 07:52:08.760947 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 07:52:08.767134 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 07:52:08.769727 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 07:52:08.803783 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 9 07:52:08.803869 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:52:08.807718 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:52:08.813769 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:52:08.826577 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 07:52:08.830789 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 9 07:52:08.837360 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 07:52:08.846058 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 07:52:08.968915 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:52:08.978115 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 07:52:09.008026 ignition[666]: Ignition 2.19.0
Oct 9 07:52:09.008041 ignition[666]: Stage: fetch-offline
Oct 9 07:52:09.008096 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:52:09.008111 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:52:09.008279 ignition[666]: parsed url from cmdline: ""
Oct 9 07:52:09.012048 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:52:09.008285 ignition[666]: no config URL provided
Oct 9 07:52:09.012103 systemd-networkd[750]: lo: Link UP
Oct 9 07:52:09.008294 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 07:52:09.012107 systemd-networkd[750]: lo: Gained carrier
Oct 9 07:52:09.008307 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Oct 9 07:52:09.015442 systemd-networkd[750]: Enumeration completed
Oct 9 07:52:09.008315 ignition[666]: failed to fetch config: resource requires networking
Oct 9 07:52:09.015902 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 9 07:52:09.008621 ignition[666]: Ignition finished successfully
Oct 9 07:52:09.015906 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Oct 9 07:52:09.017633 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:52:09.017637 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 07:52:09.018455 systemd-networkd[750]: eth0: Link UP
Oct 9 07:52:09.018462 systemd-networkd[750]: eth0: Gained carrier
Oct 9 07:52:09.018478 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 9 07:52:09.018951 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 07:52:09.019997 systemd[1]: Reached target network.target - Network.
Oct 9 07:52:09.020337 systemd-networkd[750]: eth1: Link UP
Oct 9 07:52:09.020345 systemd-networkd[750]: eth1: Gained carrier
Oct 9 07:52:09.020362 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 07:52:09.029129 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 9 07:52:09.034208 systemd-networkd[750]: eth0: DHCPv4 address 209.38.129.97/19, gateway 209.38.128.1 acquired from 169.254.169.253
Oct 9 07:52:09.038050 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.12/20 acquired from 169.254.169.253
Oct 9 07:52:09.067455 ignition[755]: Ignition 2.19.0
Oct 9 07:52:09.067477 ignition[755]: Stage: fetch
Oct 9 07:52:09.067919 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:52:09.067947 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:52:09.068218 ignition[755]: parsed url from cmdline: ""
Oct 9 07:52:09.068227 ignition[755]: no config URL provided
Oct 9 07:52:09.068241 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 07:52:09.068263 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Oct 9 07:52:09.068312 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Oct 9 07:52:09.087783 ignition[755]: GET result: OK
Oct 9 07:52:09.088054 ignition[755]: parsing config with SHA512: 2886f16d3070f6c618788a7123eb4fcda6b331ddae969cf19f2b92df3d23670d94313f4ec118c53829c092baef8d117be3cd50aa19afa211d7b04b66ae90f75b
Oct 9 07:52:09.096943 unknown[755]: fetched base config from "system"
Oct 9 07:52:09.096964 unknown[755]: fetched base config from "system"
Oct 9 07:52:09.096973 unknown[755]: fetched user config from "digitalocean"
Oct 9 07:52:09.097912 ignition[755]: fetch: fetch complete
Oct 9 07:52:09.097920 ignition[755]: fetch: fetch passed
Oct 9 07:52:09.098001 ignition[755]: Ignition finished successfully
Oct 9 07:52:09.100524 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 9 07:52:09.105001 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 07:52:09.141289 ignition[763]: Ignition 2.19.0
Oct 9 07:52:09.142233 ignition[763]: Stage: kargs
Oct 9 07:52:09.142540 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:52:09.142555 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:52:09.146777 ignition[763]: kargs: kargs passed
Oct 9 07:52:09.146910 ignition[763]: Ignition finished successfully
Oct 9 07:52:09.148491 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 07:52:09.155118 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 07:52:09.194571 ignition[769]: Ignition 2.19.0
Oct 9 07:52:09.194592 ignition[769]: Stage: disks
Oct 9 07:52:09.194965 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Oct 9 07:52:09.194982 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:52:09.196691 ignition[769]: disks: disks passed
Oct 9 07:52:09.198443 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 07:52:09.197010 ignition[769]: Ignition finished successfully
Oct 9 07:52:09.205062 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 07:52:09.206372 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 07:52:09.207839 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:52:09.209146 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 07:52:09.210554 systemd[1]: Reached target basic.target - Basic System.
Oct 9 07:52:09.220986 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 07:52:09.239589 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 07:52:09.243579 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 07:52:09.248899 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 07:52:09.385766 kernel: EXT4-fs (vda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none.
Oct 9 07:52:09.387724 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 07:52:09.389261 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:52:09.395946 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:52:09.412081 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 07:52:09.415833 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Oct 9 07:52:09.425011 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 9 07:52:09.438392 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785)
Oct 9 07:52:09.438423 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 9 07:52:09.438437 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:52:09.438450 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:52:09.430474 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 07:52:09.430527 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:52:09.449332 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 07:52:09.451808 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:52:09.455097 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:52:09.472233 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 07:52:09.536769 coreos-metadata[787]: Oct 09 07:52:09.535 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:52:09.543138 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 07:52:09.545498 coreos-metadata[788]: Oct 09 07:52:09.543 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 07:52:09.547657 coreos-metadata[787]: Oct 09 07:52:09.547 INFO Fetch successful
Oct 9 07:52:09.550159 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory
Oct 9 07:52:09.558716 coreos-metadata[788]: Oct 09 07:52:09.555 INFO Fetch successful
Oct 9 07:52:09.559998 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Oct 9 07:52:09.560481 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Oct 9 07:52:09.565597 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 07:52:09.570064 coreos-metadata[788]: Oct 09 07:52:09.569 INFO wrote hostname ci-4081.1.0-0-871bb8dd75 to /sysroot/etc/hostname
Oct 9 07:52:09.571241 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 07:52:09.571733 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 07:52:09.686344 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 07:52:09.691011 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 07:52:09.693977 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 07:52:09.707273 kernel: BTRFS info (device vda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 9 07:52:09.738244 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 07:52:09.740839 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 07:52:09.746343 ignition[905]: INFO : Ignition 2.19.0
Oct 9 07:52:09.747383 ignition[905]: INFO : Stage: mount
Oct 9 07:52:09.747383 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:52:09.747383 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:52:09.749935 ignition[905]: INFO : mount: mount passed
Oct 9 07:52:09.749935 ignition[905]: INFO : Ignition finished successfully
Oct 9 07:52:09.750226 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 07:52:09.756994 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 07:52:09.783194 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 07:52:09.808787 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (918)
Oct 9 07:52:09.812837 kernel: BTRFS info (device vda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6
Oct 9 07:52:09.812918 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 07:52:09.814900 kernel: BTRFS info (device vda6): using free space tree
Oct 9 07:52:09.820956 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 07:52:09.823582 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 07:52:09.859021 ignition[935]: INFO : Ignition 2.19.0
Oct 9 07:52:09.859021 ignition[935]: INFO : Stage: files
Oct 9 07:52:09.860582 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:52:09.860582 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:52:09.860582 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 07:52:09.863402 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 07:52:09.863402 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 07:52:09.866458 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 07:52:09.867489 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 07:52:09.869303 unknown[935]: wrote ssh authorized keys file for user: core
Oct 9 07:52:09.870375 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 07:52:09.871321 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 07:52:09.872418 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 07:52:09.905433 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 07:52:10.039190 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 07:52:10.039190 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 07:52:10.041697 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Oct 9 07:52:10.105045 systemd-networkd[750]: eth0: Gained IPv6LL
Oct 9 07:52:10.495120 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 9 07:52:10.553324 systemd-networkd[750]: eth1: Gained IPv6LL
Oct 9 07:52:10.753332 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 07:52:10.753332 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 9 07:52:10.756569 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:52:10.756569 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 07:52:10.756569 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 9 07:52:10.756569 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 07:52:10.756569 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 07:52:10.756569 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:52:10.756569 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 07:52:10.756569 ignition[935]: INFO : files: files passed
Oct 9 07:52:10.756569 ignition[935]: INFO : Ignition finished successfully
Oct 9 07:52:10.758177 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 07:52:10.768127 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 07:52:10.772186 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 07:52:10.773614 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 07:52:10.773825 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 07:52:10.796401 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:52:10.796401 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:52:10.798701 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 07:52:10.800401 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:52:10.802021 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 07:52:10.807990 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 07:52:10.850581 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 07:52:10.850777 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 07:52:10.852398 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 07:52:10.853820 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 07:52:10.855451 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 07:52:10.862028 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 07:52:10.884097 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:52:10.889023 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 07:52:10.917144 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 07:52:10.918751 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:52:10.920211 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 07:52:10.921488 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 07:52:10.921625 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 07:52:10.923669 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 07:52:10.924415 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 07:52:10.925466 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 07:52:10.926717 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 07:52:10.928230 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 07:52:10.929518 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 07:52:10.930776 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 07:52:10.932329 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 07:52:10.933853 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 07:52:10.935448 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 07:52:10.936554 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 07:52:10.936762 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 07:52:10.938155 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 07:52:10.939202 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:52:10.940406 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 07:52:10.940511 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:52:10.941771 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 07:52:10.942025 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 07:52:10.943643 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 07:52:10.943860 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 07:52:10.945904 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 07:52:10.946032 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 07:52:10.947241 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 9 07:52:10.947382 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 07:52:10.954231 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 07:52:10.959101 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 07:52:10.959721 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 07:52:10.960028 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:52:10.963037 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 07:52:10.963202 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 07:52:10.973945 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 07:52:10.974045 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 07:52:10.983686 ignition[987]: INFO : Ignition 2.19.0
Oct 9 07:52:10.983686 ignition[987]: INFO : Stage: umount
Oct 9 07:52:10.989063 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 07:52:10.989063 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 07:52:10.994168 ignition[987]: INFO : umount: umount passed
Oct 9 07:52:10.996020 ignition[987]: INFO : Ignition finished successfully
Oct 9 07:52:10.995484 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 07:52:10.996081 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 07:52:10.998713 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 07:52:11.009820 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 07:52:11.009962 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 07:52:11.029733 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 07:52:11.029824 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 07:52:11.035849 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 9 07:52:11.035937 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 9 07:52:11.036979 systemd[1]: Stopped target network.target - Network.
Oct 9 07:52:11.038065 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 07:52:11.038142 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 07:52:11.039458 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 07:52:11.040605 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 07:52:11.043826 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:52:11.045368 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 07:52:11.047052 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 07:52:11.048297 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 07:52:11.048368 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 07:52:11.082962 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 07:52:11.083049 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 07:52:11.085214 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 07:52:11.085308 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 07:52:11.085967 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 07:52:11.086015 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 07:52:11.087694 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 07:52:11.089078 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 07:52:11.091832 systemd-networkd[750]: eth1: DHCPv6 lease lost
Oct 9 07:52:11.096810 systemd-networkd[750]: eth0: DHCPv6 lease lost
Oct 9 07:52:11.097431 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 07:52:11.097665 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 07:52:11.102089 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 07:52:11.102231 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 07:52:11.114752 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 07:52:11.114917 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:52:11.121951 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 07:52:11.122987 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 07:52:11.123105 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 07:52:11.124519 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 07:52:11.124612 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:52:11.126331 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 07:52:11.126421 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:52:11.128046 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 07:52:11.128130 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 07:52:11.132658 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 07:52:11.137030 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 07:52:11.137178 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 07:52:11.144034 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 07:52:11.144119 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 07:52:11.146557 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 07:52:11.147681 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 07:52:11.153728 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 07:52:11.153843 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:52:11.156878 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 07:52:11.156950 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:52:11.158455 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 07:52:11.158534 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 07:52:11.160665 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 07:52:11.160800 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 07:52:11.161990 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 07:52:11.162068 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 07:52:11.172057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 07:52:11.175195 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 07:52:11.175294 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:52:11.176836 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 9 07:52:11.176911 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:52:11.178061 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 07:52:11.178122 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:52:11.180237 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 07:52:11.180302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 07:52:11.183229 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 07:52:11.183346 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 07:52:11.185833 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 07:52:11.185936 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 07:52:11.187603 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 07:52:11.195768 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 07:52:11.205276 systemd[1]: Switching root.
Oct 9 07:52:11.272930 systemd-journald[183]: Journal stopped
Oct 9 07:52:12.680989 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Oct 9 07:52:12.681065 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 07:52:12.681081 kernel: SELinux: policy capability open_perms=1
Oct 9 07:52:12.681094 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 07:52:12.681107 kernel: SELinux: policy capability always_check_network=0
Oct 9 07:52:12.681118 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 07:52:12.681140 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 07:52:12.681156 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 07:52:12.681169 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 07:52:12.681181 kernel: audit: type=1403 audit(1728460331.467:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 07:52:12.681199 systemd[1]: Successfully loaded SELinux policy in 45.756ms.
Oct 9 07:52:12.681221 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.089ms.
Oct 9 07:52:12.681236 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 07:52:12.681250 systemd[1]: Detected virtualization kvm.
Oct 9 07:52:12.681267 systemd[1]: Detected architecture x86-64.
Oct 9 07:52:12.681281 systemd[1]: Detected first boot.
Oct 9 07:52:12.681294 systemd[1]: Hostname set to .
Oct 9 07:52:12.681307 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 07:52:12.681321 zram_generator::config[1030]: No configuration found.
Oct 9 07:52:12.681340 systemd[1]: Populated /etc with preset unit settings.
Oct 9 07:52:12.681358 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 07:52:12.681374 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 07:52:12.681391 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 07:52:12.681405 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 07:52:12.681418 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 07:52:12.681432 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 07:52:12.681446 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 07:52:12.681459 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 07:52:12.681473 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 07:52:12.681486 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 07:52:12.681503 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 07:52:12.681516 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 07:52:12.681529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 07:52:12.681541 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 07:52:12.681555 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 07:52:12.681569 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 07:52:12.681582 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 07:52:12.681595 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 07:52:12.681609 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 07:52:12.681624 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 07:52:12.681639 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 07:52:12.681653 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 07:52:12.681666 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 07:52:12.681680 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 07:52:12.681694 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 07:52:12.681707 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 07:52:12.681723 systemd[1]: Reached target swap.target - Swaps.
Oct 9 07:52:12.681746 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 07:52:12.681760 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 07:52:12.681773 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 07:52:12.681787 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 07:52:12.681800 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 07:52:12.681814 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 07:52:12.681827 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 07:52:12.681840 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 07:52:12.681857 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 07:52:12.681870 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:12.681884 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 07:52:12.681898 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 07:52:12.681911 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 07:52:12.681926 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 07:52:12.681939 systemd[1]: Reached target machines.target - Containers.
Oct 9 07:52:12.681952 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 07:52:12.681968 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 07:52:12.681982 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 07:52:12.681995 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 07:52:12.682009 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 07:52:12.682022 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 07:52:12.682035 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 07:52:12.682048 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 07:52:12.682061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 07:52:12.682074 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 07:52:12.682092 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 07:52:12.682104 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 07:52:12.682117 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 07:52:12.682131 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 07:52:12.682144 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 07:52:12.682157 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 07:52:12.682170 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 07:52:12.682183 kernel: loop: module loaded
Oct 9 07:52:12.682197 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 07:52:12.682213 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 07:52:12.682227 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 07:52:12.682241 systemd[1]: Stopped verity-setup.service.
Oct 9 07:52:12.682254 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 07:52:12.682268 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 07:52:12.682281 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 07:52:12.682294 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 07:52:12.682307 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 07:52:12.682324 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 07:52:12.682337 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 07:52:12.682351 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 07:52:12.682364 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 07:52:12.682381 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 07:52:12.682394 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 07:52:12.682408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 07:52:12.682422 kernel: fuse: init (API version 7.39)
Oct 9 07:52:12.682434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 07:52:12.682447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 07:52:12.682464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 07:52:12.682477 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 07:52:12.682490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 07:52:12.682504 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 07:52:12.682517 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 07:52:12.682530 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 07:52:12.682544 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 07:52:12.682558 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 07:52:12.682604 systemd-journald[1110]: Collecting audit messages is disabled.
Oct 9 07:52:12.682633 systemd-journald[1110]: Journal started
Oct 9 07:52:12.682664 systemd-journald[1110]: Runtime Journal (/run/log/journal/0b13e6cd7680485c87be1b4b3aed077e) is 4.9M, max 39.3M, 34.4M free.
Oct 9 07:52:12.215656 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 07:52:12.234652 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 07:52:12.235279 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 07:52:12.692283 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 07:52:12.701887 kernel: ACPI: bus type drm_connector registered
Oct 9 07:52:12.706463 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 07:52:12.706683 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 07:52:12.712319 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 07:52:12.721904 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 07:52:12.731899 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 07:52:12.732691 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 07:52:12.734065 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 07:52:12.735925 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 07:52:12.750025 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 07:52:12.753202 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 07:52:12.754280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 07:52:12.769415 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 07:52:12.777966 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 07:52:12.779055 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 07:52:12.785008 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 07:52:12.785810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 07:52:12.797009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 07:52:12.806027 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 07:52:12.816000 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 07:52:12.820861 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 07:52:12.822434 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 07:52:12.824656 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 07:52:12.826881 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 07:52:12.848768 kernel: loop0: detected capacity change from 0 to 140768
Oct 9 07:52:12.849004 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 07:52:12.852821 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 07:52:12.854364 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 07:52:12.866036 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 07:52:12.878543 systemd-journald[1110]: Time spent on flushing to /var/log/journal/0b13e6cd7680485c87be1b4b3aed077e is 65.918ms for 994 entries.
Oct 9 07:52:12.878543 systemd-journald[1110]: System Journal (/var/log/journal/0b13e6cd7680485c87be1b4b3aed077e) is 8.0M, max 195.6M, 187.6M free.
Oct 9 07:52:12.970354 systemd-journald[1110]: Received client request to flush runtime journal.
Oct 9 07:52:12.970427 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 07:52:12.970458 kernel: loop1: detected capacity change from 0 to 8
Oct 9 07:52:12.942401 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 07:52:12.948749 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 07:52:12.950501 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 07:52:12.958034 udevadm[1156]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 9 07:52:12.979589 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 07:52:12.985663 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Oct 9 07:52:12.986243 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Oct 9 07:52:12.991790 kernel: loop2: detected capacity change from 0 to 210664
Oct 9 07:52:12.993452 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 07:52:13.005084 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 07:52:13.039779 kernel: loop3: detected capacity change from 0 to 142488
Oct 9 07:52:13.097994 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 07:52:13.109998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 07:52:13.130063 kernel: loop4: detected capacity change from 0 to 140768
Oct 9 07:52:13.158161 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Oct 9 07:52:13.158182 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Oct 9 07:52:13.172475 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 07:52:13.186486 kernel: loop5: detected capacity change from 0 to 8
Oct 9 07:52:13.189363 kernel: loop6: detected capacity change from 0 to 210664
Oct 9 07:52:13.213094 kernel: loop7: detected capacity change from 0 to 142488
Oct 9 07:52:13.236288 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Oct 9 07:52:13.238111 (sd-merge)[1178]: Merged extensions into '/usr'.
Oct 9 07:52:13.246867 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 07:52:13.248352 systemd[1]: Reloading...
Oct 9 07:52:13.549120 zram_generator::config[1206]: No configuration found.
Oct 9 07:52:13.632329 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 07:52:13.829596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 07:52:13.926987 systemd[1]: Reloading finished in 677 ms.
Oct 9 07:52:13.977589 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 07:52:13.982347 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 07:52:13.992937 systemd[1]: Starting ensure-sysext.service...
Oct 9 07:52:14.000907 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 07:52:14.015876 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
Oct 9 07:52:14.016287 systemd[1]: Reloading...
Oct 9 07:52:14.080811 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 07:52:14.081592 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 07:52:14.085380 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 07:52:14.089311 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Oct 9 07:52:14.090056 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Oct 9 07:52:14.102138 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:52:14.104967 systemd-tmpfiles[1250]: Skipping /boot
Oct 9 07:52:14.145791 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 07:52:14.147404 systemd-tmpfiles[1250]: Skipping /boot
Oct 9 07:52:14.205761 zram_generator::config[1276]: No configuration found.
Oct 9 07:52:14.393582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:52:14.454346 systemd[1]: Reloading finished in 437 ms. Oct 9 07:52:14.470883 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 07:52:14.478388 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 07:52:14.491085 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:52:14.496126 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 07:52:14.505936 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 07:52:14.512017 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 07:52:14.524985 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 07:52:14.527419 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 07:52:14.533142 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:52:14.533578 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:52:14.544151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:52:14.549933 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:52:14.556163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:52:14.557008 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 9 07:52:14.557156 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:52:14.562525 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:52:14.562967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:52:14.563233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:52:14.572052 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 07:52:14.573806 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:52:14.578086 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:52:14.578331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:52:14.594656 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 07:52:14.595463 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:52:14.595628 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:52:14.596809 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 07:52:14.598918 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 07:52:14.605037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 9 07:52:14.605210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 07:52:14.606489 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:52:14.607797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:52:14.609411 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 07:52:14.609793 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 07:52:14.621440 systemd[1]: Finished ensure-sysext.service. Oct 9 07:52:14.623684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:52:14.624426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:52:14.634448 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:52:14.634757 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:52:14.643963 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 07:52:14.653953 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 07:52:14.654590 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 07:52:14.655095 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 07:52:14.659170 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Oct 9 07:52:14.679081 augenrules[1357]: No rules Oct 9 07:52:14.680991 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 9 07:52:14.690125 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 07:52:14.691424 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Oct 9 07:52:14.708055 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 07:52:14.719955 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 07:52:14.781342 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 07:52:14.782183 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 07:52:14.838114 systemd-networkd[1371]: lo: Link UP Oct 9 07:52:14.838126 systemd-networkd[1371]: lo: Gained carrier Oct 9 07:52:14.839048 systemd-networkd[1371]: Enumeration completed Oct 9 07:52:14.839169 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 07:52:14.843670 systemd-resolved[1328]: Positive Trust Anchors: Oct 9 07:52:14.844053 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 07:52:14.844160 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 07:52:14.851468 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 07:52:14.852628 systemd-resolved[1328]: Using system hostname 'ci-4081.1.0-0-871bb8dd75'. Oct 9 07:52:14.856368 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 07:52:14.858971 systemd[1]: Reached target network.target - Network. Oct 9 07:52:14.860918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 9 07:52:14.875797 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1372) Oct 9 07:52:14.880882 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 9 07:52:14.905796 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1372) Oct 9 07:52:14.905773 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Oct 9 07:52:14.906411 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:52:14.906678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 07:52:14.908077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 07:52:14.915086 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 07:52:14.918855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 07:52:14.920973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 07:52:14.921043 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 07:52:14.921077 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 07:52:14.926209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 07:52:14.926643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Oct 9 07:52:14.945868 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1375) Oct 9 07:52:14.947995 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 07:52:14.948536 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 07:52:14.950801 kernel: ISO 9660 Extensions: RRIP_1991A Oct 9 07:52:14.960930 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Oct 9 07:52:14.974669 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 07:52:14.975888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 07:52:14.981364 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 07:52:14.981485 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 07:52:14.993454 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 07:52:15.003018 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 07:52:15.008770 systemd-networkd[1371]: eth0: Configuring with /run/systemd/network/10-6a:40:5d:cf:8a:ac.network. Oct 9 07:52:15.014519 systemd-networkd[1371]: eth0: Link UP Oct 9 07:52:15.014533 systemd-networkd[1371]: eth0: Gained carrier Oct 9 07:52:15.023378 systemd-networkd[1371]: eth1: Configuring with /run/systemd/network/10-b2:7a:9e:1d:1d:b8.network. Oct 9 07:52:15.027146 systemd-networkd[1371]: eth1: Link UP Oct 9 07:52:15.027158 systemd-networkd[1371]: eth1: Gained carrier Oct 9 07:52:15.032702 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. Oct 9 07:52:15.036521 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. 
Oct 9 07:52:15.055997 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 07:52:15.072769 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 9 07:52:15.094777 kernel: ACPI: button: Power Button [PWRF] Oct 9 07:52:15.108323 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 9 07:52:15.113070 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 9 07:52:15.163773 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 07:52:15.183824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:52:15.188985 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Oct 9 07:52:15.190776 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Oct 9 07:52:15.204073 kernel: Console: switching to colour dummy device 80x25 Oct 9 07:52:15.204159 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 9 07:52:15.204206 kernel: [drm] features: -context_init Oct 9 07:52:15.204222 kernel: [drm] number of scanouts: 1 Oct 9 07:52:15.204251 kernel: [drm] number of cap sets: 0 Oct 9 07:52:15.206797 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Oct 9 07:52:15.225568 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 9 07:52:15.225690 kernel: Console: switching to colour frame buffer device 128x48 Oct 9 07:52:15.242771 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 9 07:52:15.269232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 07:52:15.269621 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:52:15.285931 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:52:15.299539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 9 07:52:15.299823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:52:15.313876 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 07:52:15.340771 kernel: EDAC MC: Ver: 3.0.0 Oct 9 07:52:15.372479 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 07:52:15.380645 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 07:52:15.398117 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:52:15.425428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 07:52:15.428405 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 07:52:15.429377 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 07:52:15.429522 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 07:52:15.429699 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 07:52:15.429993 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 07:52:15.432611 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 07:52:15.433353 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 07:52:15.433601 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 07:52:15.433722 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 07:52:15.433794 systemd[1]: Reached target paths.target - Path Units. Oct 9 07:52:15.433890 systemd[1]: Reached target timers.target - Timer Units. Oct 9 07:52:15.435361 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Oct 9 07:52:15.438131 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 07:52:15.445564 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 07:52:15.449446 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 07:52:15.450649 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 07:52:15.452986 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 07:52:15.453540 systemd[1]: Reached target basic.target - Basic System. Oct 9 07:52:15.455630 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:52:15.455661 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 07:52:15.461917 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 07:52:15.465080 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 9 07:52:15.473075 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 07:52:15.478989 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 07:52:15.478936 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 07:52:15.485973 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 07:52:15.487996 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 07:52:15.499027 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 07:52:15.510872 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 07:52:15.522374 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 07:52:15.529997 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Oct 9 07:52:15.546036 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 07:52:15.547212 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 07:52:15.550283 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 07:52:15.563905 jq[1437]: false Oct 9 07:52:15.553970 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 07:52:15.566257 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 07:52:15.568510 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 07:52:15.582529 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 07:52:15.584059 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 07:52:15.593125 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 07:52:15.594852 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 07:52:15.605206 jq[1449]: true Oct 9 07:52:15.636005 update_engine[1448]: I20241009 07:52:15.634511 1448 main.cc:92] Flatcar Update Engine starting Oct 9 07:52:15.654295 dbus-daemon[1436]: [system] SELinux support is enabled Oct 9 07:52:15.659214 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 07:52:15.660652 systemd[1]: motdgen.service: Deactivated successfully. 
Oct 9 07:52:15.676871 extend-filesystems[1438]: Found loop4 Oct 9 07:52:15.676871 extend-filesystems[1438]: Found loop5 Oct 9 07:52:15.676871 extend-filesystems[1438]: Found loop6 Oct 9 07:52:15.676871 extend-filesystems[1438]: Found loop7 Oct 9 07:52:15.676871 extend-filesystems[1438]: Found vda Oct 9 07:52:15.676871 extend-filesystems[1438]: Found vda1 Oct 9 07:52:15.676871 extend-filesystems[1438]: Found vda2 Oct 9 07:52:15.676871 extend-filesystems[1438]: Found vda3 Oct 9 07:52:15.676871 extend-filesystems[1438]: Found usr Oct 9 07:52:15.676871 extend-filesystems[1438]: Found vda4 Oct 9 07:52:15.676871 extend-filesystems[1438]: Found vda6 Oct 9 07:52:15.676871 extend-filesystems[1438]: Found vda7 Oct 9 07:52:15.676871 extend-filesystems[1438]: Found vda9 Oct 9 07:52:15.676871 extend-filesystems[1438]: Checking size of /dev/vda9 Oct 9 07:52:15.660865 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 07:52:15.748476 tar[1453]: linux-amd64/helm Oct 9 07:52:15.750814 coreos-metadata[1435]: Oct 09 07:52:15.707 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:52:15.750814 coreos-metadata[1435]: Oct 09 07:52:15.738 INFO Fetch successful Oct 9 07:52:15.751183 jq[1463]: true Oct 9 07:52:15.751259 update_engine[1448]: I20241009 07:52:15.676122 1448 update_check_scheduler.cc:74] Next update check in 9m37s Oct 9 07:52:15.664185 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 07:52:15.668373 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 07:52:15.668407 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Oct 9 07:52:15.670696 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 07:52:15.671704 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Oct 9 07:52:15.671787 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 07:52:15.681590 systemd[1]: Started update-engine.service - Update Engine. Oct 9 07:52:15.704947 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 07:52:15.769326 extend-filesystems[1438]: Resized partition /dev/vda9 Oct 9 07:52:15.791914 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024) Oct 9 07:52:15.808854 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Oct 9 07:52:15.880313 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1385) Oct 9 07:52:15.874982 systemd-logind[1446]: New seat seat0. Oct 9 07:52:15.881833 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Oct 9 07:52:15.881869 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 9 07:52:15.882204 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 07:52:15.904087 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 9 07:52:15.908546 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 07:52:16.006197 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:52:16.007783 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 07:52:16.030223 systemd[1]: Starting sshkeys.service... 
Oct 9 07:52:16.053823 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Oct 9 07:52:16.072550 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 9 07:52:16.086422 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 9 07:52:16.107120 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 07:52:16.107120 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 8 Oct 9 07:52:16.107120 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Oct 9 07:52:16.124501 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Oct 9 07:52:16.124501 extend-filesystems[1438]: Found vdb Oct 9 07:52:16.109297 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 07:52:16.110105 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 07:52:16.123177 systemd-networkd[1371]: eth0: Gained IPv6LL Oct 9 07:52:16.123655 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. Oct 9 07:52:16.129720 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 07:52:16.141828 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 07:52:16.159302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:52:16.167465 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Oct 9 07:52:16.188385 locksmithd[1473]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 07:52:16.276509 containerd[1464]: time="2024-10-09T07:52:16.274591924Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 9 07:52:16.277760 coreos-metadata[1509]: Oct 09 07:52:16.277 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Oct 9 07:52:16.293826 coreos-metadata[1509]: Oct 09 07:52:16.292 INFO Fetch successful Oct 9 07:52:16.301301 unknown[1509]: wrote ssh authorized keys file for user: core Oct 9 07:52:16.303295 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 07:52:16.351171 update-ssh-keys[1528]: Updated "/home/core/.ssh/authorized_keys" Oct 9 07:52:16.353782 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 9 07:52:16.362645 systemd[1]: Finished sshkeys.service. Oct 9 07:52:16.432762 containerd[1464]: time="2024-10-09T07:52:16.431795968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:52:16.437131 containerd[1464]: time="2024-10-09T07:52:16.436841008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:52:16.437131 containerd[1464]: time="2024-10-09T07:52:16.436896450Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 07:52:16.437131 containerd[1464]: time="2024-10-09T07:52:16.436921117Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Oct 9 07:52:16.437131 containerd[1464]: time="2024-10-09T07:52:16.437109363Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 07:52:16.437391 containerd[1464]: time="2024-10-09T07:52:16.437145173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 07:52:16.437391 containerd[1464]: time="2024-10-09T07:52:16.437216015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:52:16.437391 containerd[1464]: time="2024-10-09T07:52:16.437233434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:52:16.439831 containerd[1464]: time="2024-10-09T07:52:16.438390372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:52:16.439831 containerd[1464]: time="2024-10-09T07:52:16.438433654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 07:52:16.439831 containerd[1464]: time="2024-10-09T07:52:16.438456523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:52:16.439831 containerd[1464]: time="2024-10-09T07:52:16.438471824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 07:52:16.439831 containerd[1464]: time="2024-10-09T07:52:16.438635071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Oct 9 07:52:16.439831 containerd[1464]: time="2024-10-09T07:52:16.439344151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 07:52:16.440102 containerd[1464]: time="2024-10-09T07:52:16.439981953Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 07:52:16.440102 containerd[1464]: time="2024-10-09T07:52:16.440010325Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 07:52:16.441571 containerd[1464]: time="2024-10-09T07:52:16.440224546Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 07:52:16.441571 containerd[1464]: time="2024-10-09T07:52:16.440333980Z" level=info msg="metadata content store policy set" policy=shared Oct 9 07:52:16.448286 containerd[1464]: time="2024-10-09T07:52:16.448226159Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 07:52:16.448442 containerd[1464]: time="2024-10-09T07:52:16.448399884Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 07:52:16.448442 containerd[1464]: time="2024-10-09T07:52:16.448433710Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 07:52:16.448522 containerd[1464]: time="2024-10-09T07:52:16.448456523Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 07:52:16.448522 containerd[1464]: time="2024-10-09T07:52:16.448476744Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Oct 9 07:52:16.449780 containerd[1464]: time="2024-10-09T07:52:16.448697654Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 07:52:16.449780 containerd[1464]: time="2024-10-09T07:52:16.449667766Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 9 07:52:16.449874 containerd[1464]: time="2024-10-09T07:52:16.449846428Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 07:52:16.449915 containerd[1464]: time="2024-10-09T07:52:16.449868270Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 07:52:16.449915 containerd[1464]: time="2024-10-09T07:52:16.449887496Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 07:52:16.449915 containerd[1464]: time="2024-10-09T07:52:16.449906126Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 07:52:16.450041 containerd[1464]: time="2024-10-09T07:52:16.449924471Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 07:52:16.450041 containerd[1464]: time="2024-10-09T07:52:16.449944115Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 07:52:16.450041 containerd[1464]: time="2024-10-09T07:52:16.449965789Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 07:52:16.450041 containerd[1464]: time="2024-10-09T07:52:16.449986630Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Oct 9 07:52:16.450041 containerd[1464]: time="2024-10-09T07:52:16.450010495Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 07:52:16.450041 containerd[1464]: time="2024-10-09T07:52:16.450036455Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 07:52:16.450261 containerd[1464]: time="2024-10-09T07:52:16.450057976Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 07:52:16.450261 containerd[1464]: time="2024-10-09T07:52:16.450090273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450261 containerd[1464]: time="2024-10-09T07:52:16.450109039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450261 containerd[1464]: time="2024-10-09T07:52:16.450126088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450261 containerd[1464]: time="2024-10-09T07:52:16.450144365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450261 containerd[1464]: time="2024-10-09T07:52:16.450161105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450261 containerd[1464]: time="2024-10-09T07:52:16.450179111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450261 containerd[1464]: time="2024-10-09T07:52:16.450195313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450261 containerd[1464]: time="2024-10-09T07:52:16.450214429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Oct 9 07:52:16.450261 containerd[1464]: time="2024-10-09T07:52:16.450231947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450261 containerd[1464]: time="2024-10-09T07:52:16.450251476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450268146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450284581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450302847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450335099Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450367535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450387100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450401641Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450462721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450487834Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450504669Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450521609Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450536019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450551865Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 07:52:16.450909 containerd[1464]: time="2024-10-09T07:52:16.450588696Z" level=info msg="NRI interface is disabled by configuration." Oct 9 07:52:16.451444 containerd[1464]: time="2024-10-09T07:52:16.450626692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 9 07:52:16.453782 containerd[1464]: time="2024-10-09T07:52:16.452977546Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 07:52:16.454104 containerd[1464]: time="2024-10-09T07:52:16.453800130Z" level=info msg="Connect containerd service" Oct 9 07:52:16.454104 containerd[1464]: time="2024-10-09T07:52:16.453883617Z" level=info msg="using legacy CRI server" Oct 9 07:52:16.454104 containerd[1464]: time="2024-10-09T07:52:16.453901927Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 07:52:16.460753 containerd[1464]: time="2024-10-09T07:52:16.458427731Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 07:52:16.471070 containerd[1464]: time="2024-10-09T07:52:16.471017383Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 07:52:16.473008 containerd[1464]: time="2024-10-09T07:52:16.472966870Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 07:52:16.473103 containerd[1464]: time="2024-10-09T07:52:16.473085136Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 9 07:52:16.477031 containerd[1464]: time="2024-10-09T07:52:16.476958290Z" level=info msg="Start subscribing containerd event" Oct 9 07:52:16.478851 containerd[1464]: time="2024-10-09T07:52:16.478819383Z" level=info msg="Start recovering state" Oct 9 07:52:16.485322 containerd[1464]: time="2024-10-09T07:52:16.485280144Z" level=info msg="Start event monitor" Oct 9 07:52:16.485767 containerd[1464]: time="2024-10-09T07:52:16.485329856Z" level=info msg="Start snapshots syncer" Oct 9 07:52:16.485818 containerd[1464]: time="2024-10-09T07:52:16.485772555Z" level=info msg="Start cni network conf syncer for default" Oct 9 07:52:16.485818 containerd[1464]: time="2024-10-09T07:52:16.485785998Z" level=info msg="Start streaming server" Oct 9 07:52:16.486042 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 07:52:16.487941 containerd[1464]: time="2024-10-09T07:52:16.486323748Z" level=info msg="containerd successfully booted in 0.216042s" Oct 9 07:52:16.653034 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 07:52:16.734259 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 07:52:16.747938 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 07:52:16.760986 systemd-networkd[1371]: eth1: Gained IPv6LL Oct 9 07:52:16.764089 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. Oct 9 07:52:16.776380 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 07:52:16.776658 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 07:52:16.790924 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 07:52:16.817868 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 07:52:16.830300 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 07:52:16.841420 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
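[Editor's note] The `failed to load cni during init` error recorded above is containerd's CRI plugin reporting that `/etc/cni/net.d` (the `NetworkPluginConfDir` shown in the config dump) contains no network configuration yet — expected on a node where no CNI provider has been installed. For reference, a minimal bridge conflist that would satisfy this loader might look like the following; this is an illustrative sketch, not taken from this host (the network name, bridge name, and subnet are assumptions):

```json
{
  "cniVersion": "0.4.0",
  "name": "example-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

The plugin binaries referenced by `type` would need to exist in the configured `NetworkPluginBinDir` (`/opt/cni/bin` above); in a kubeadm-based cluster this file is normally created by the network add-on rather than by hand.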
Oct 9 07:52:16.846560 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 07:52:16.976986 tar[1453]: linux-amd64/LICENSE Oct 9 07:52:16.977413 tar[1453]: linux-amd64/README.md Oct 9 07:52:16.994013 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 07:52:17.624342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:52:17.625622 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 07:52:17.629220 systemd[1]: Startup finished in 1.290s (kernel) + 5.743s (initrd) + 6.206s (userspace) = 13.240s. Oct 9 07:52:17.637239 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:52:18.412818 kubelet[1559]: E1009 07:52:18.412664 1559 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:52:18.415251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:52:18.415464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:52:18.415863 systemd[1]: kubelet.service: Consumed 1.332s CPU time. Oct 9 07:52:25.408270 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 07:52:25.417237 systemd[1]: Started sshd@0-209.38.129.97:22-139.178.89.65:57398.service - OpenSSH per-connection server daemon (139.178.89.65:57398). Oct 9 07:52:25.494764 sshd[1571]: Accepted publickey for core from 139.178.89.65 port 57398 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:52:25.497385 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:52:25.515571 systemd-logind[1446]: New session 1 of user core. 
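[Editor's note] The kubelet crash loop beginning above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the normal state of a node before `kubeadm init` or `kubeadm join` has run: kubeadm writes that file during bootstrap, so the failures stop once the node is joined. For orientation only, a `KubeletConfiguration` of the kind kubeadm generates might begin like this (a hedged sketch; field values are illustrative and not recovered from this host):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
cgroupDriver: systemd   # consistent with SystemdCgroup:true in the containerd runc options above
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
```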
Oct 9 07:52:25.517588 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 07:52:25.525256 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 07:52:25.541892 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 07:52:25.548190 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 07:52:25.562394 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 07:52:25.677323 systemd[1575]: Queued start job for default target default.target. Oct 9 07:52:25.684457 systemd[1575]: Created slice app.slice - User Application Slice. Oct 9 07:52:25.684508 systemd[1575]: Reached target paths.target - Paths. Oct 9 07:52:25.684529 systemd[1575]: Reached target timers.target - Timers. Oct 9 07:52:25.686089 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 07:52:25.703149 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 07:52:25.703286 systemd[1575]: Reached target sockets.target - Sockets. Oct 9 07:52:25.703304 systemd[1575]: Reached target basic.target - Basic System. Oct 9 07:52:25.703361 systemd[1575]: Reached target default.target - Main User Target. Oct 9 07:52:25.703396 systemd[1575]: Startup finished in 133ms. Oct 9 07:52:25.703625 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 07:52:25.711094 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 07:52:25.787064 systemd[1]: Started sshd@1-209.38.129.97:22-139.178.89.65:57404.service - OpenSSH per-connection server daemon (139.178.89.65:57404). 
Oct 9 07:52:25.828758 sshd[1586]: Accepted publickey for core from 139.178.89.65 port 57404 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:52:25.830978 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:52:25.838496 systemd-logind[1446]: New session 2 of user core. Oct 9 07:52:25.845018 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 07:52:25.908537 sshd[1586]: pam_unix(sshd:session): session closed for user core Oct 9 07:52:25.911982 systemd[1]: sshd@1-209.38.129.97:22-139.178.89.65:57404.service: Deactivated successfully. Oct 9 07:52:25.914905 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 07:52:25.926194 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Oct 9 07:52:25.931415 systemd[1]: Started sshd@2-209.38.129.97:22-139.178.89.65:57420.service - OpenSSH per-connection server daemon (139.178.89.65:57420). Oct 9 07:52:25.935623 systemd-logind[1446]: Removed session 2. Oct 9 07:52:25.986836 sshd[1593]: Accepted publickey for core from 139.178.89.65 port 57420 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:52:25.989194 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:52:25.998208 systemd-logind[1446]: New session 3 of user core. Oct 9 07:52:26.004165 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 07:52:26.068477 sshd[1593]: pam_unix(sshd:session): session closed for user core Oct 9 07:52:26.079437 systemd[1]: sshd@2-209.38.129.97:22-139.178.89.65:57420.service: Deactivated successfully. Oct 9 07:52:26.083224 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 07:52:26.086079 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Oct 9 07:52:26.099278 systemd[1]: Started sshd@3-209.38.129.97:22-139.178.89.65:57422.service - OpenSSH per-connection server daemon (139.178.89.65:57422). 
Oct 9 07:52:26.102321 systemd-logind[1446]: Removed session 3. Oct 9 07:52:26.146989 sshd[1600]: Accepted publickey for core from 139.178.89.65 port 57422 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:52:26.149114 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:52:26.156822 systemd-logind[1446]: New session 4 of user core. Oct 9 07:52:26.162020 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 07:52:26.226045 sshd[1600]: pam_unix(sshd:session): session closed for user core Oct 9 07:52:26.241317 systemd[1]: sshd@3-209.38.129.97:22-139.178.89.65:57422.service: Deactivated successfully. Oct 9 07:52:26.244006 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 07:52:26.245963 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Oct 9 07:52:26.250196 systemd[1]: Started sshd@4-209.38.129.97:22-139.178.89.65:57432.service - OpenSSH per-connection server daemon (139.178.89.65:57432). Oct 9 07:52:26.252514 systemd-logind[1446]: Removed session 4. Oct 9 07:52:26.308028 sshd[1607]: Accepted publickey for core from 139.178.89.65 port 57432 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:52:26.310132 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:52:26.318375 systemd-logind[1446]: New session 5 of user core. Oct 9 07:52:26.325098 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 07:52:26.397302 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 07:52:26.398174 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:52:26.412382 sudo[1610]: pam_unix(sudo:session): session closed for user root Oct 9 07:52:26.416566 sshd[1607]: pam_unix(sshd:session): session closed for user core Oct 9 07:52:26.430363 systemd[1]: sshd@4-209.38.129.97:22-139.178.89.65:57432.service: Deactivated successfully. 
Oct 9 07:52:26.433156 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 07:52:26.435539 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Oct 9 07:52:26.443184 systemd[1]: Started sshd@5-209.38.129.97:22-139.178.89.65:57448.service - OpenSSH per-connection server daemon (139.178.89.65:57448). Oct 9 07:52:26.445360 systemd-logind[1446]: Removed session 5. Oct 9 07:52:26.491000 sshd[1615]: Accepted publickey for core from 139.178.89.65 port 57448 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:52:26.492691 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:52:26.499396 systemd-logind[1446]: New session 6 of user core. Oct 9 07:52:26.505984 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 07:52:26.567494 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 07:52:26.567910 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:52:26.573987 sudo[1619]: pam_unix(sudo:session): session closed for user root Oct 9 07:52:26.582379 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 9 07:52:26.582940 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:52:26.603319 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 9 07:52:26.608093 auditctl[1622]: No rules Oct 9 07:52:26.608571 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 07:52:26.608848 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 9 07:52:26.617453 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 9 07:52:26.655950 augenrules[1640]: No rules Oct 9 07:52:26.657604 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Oct 9 07:52:26.659407 sudo[1618]: pam_unix(sudo:session): session closed for user root Oct 9 07:52:26.663416 sshd[1615]: pam_unix(sshd:session): session closed for user core Oct 9 07:52:26.677062 systemd[1]: sshd@5-209.38.129.97:22-139.178.89.65:57448.service: Deactivated successfully. Oct 9 07:52:26.680200 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 07:52:26.682006 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Oct 9 07:52:26.691138 systemd[1]: Started sshd@6-209.38.129.97:22-139.178.89.65:57460.service - OpenSSH per-connection server daemon (139.178.89.65:57460). Oct 9 07:52:26.693320 systemd-logind[1446]: Removed session 6. Oct 9 07:52:26.737974 sshd[1648]: Accepted publickey for core from 139.178.89.65 port 57460 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:52:26.740012 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:52:26.746282 systemd-logind[1446]: New session 7 of user core. Oct 9 07:52:26.752025 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 07:52:26.813553 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 07:52:26.814466 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 07:52:27.309643 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 07:52:27.322431 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 07:52:27.800487 dockerd[1667]: time="2024-10-09T07:52:27.799808140Z" level=info msg="Starting up" Oct 9 07:52:27.958702 systemd[1]: var-lib-docker-metacopy\x2dcheck544150355-merged.mount: Deactivated successfully. Oct 9 07:52:27.982915 dockerd[1667]: time="2024-10-09T07:52:27.982820623Z" level=info msg="Loading containers: start." 
Oct 9 07:52:28.143776 kernel: Initializing XFRM netlink socket Oct 9 07:52:28.185977 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. Oct 9 07:52:28.912740 systemd-timesyncd[1353]: Contacted time server 104.194.8.227:123 (2.flatcar.pool.ntp.org). Oct 9 07:52:28.912815 systemd-timesyncd[1353]: Initial clock synchronization to Wed 2024-10-09 07:52:28.912409 UTC. Oct 9 07:52:28.913161 systemd-resolved[1328]: Clock change detected. Flushing caches. Oct 9 07:52:29.011295 systemd-networkd[1371]: docker0: Link UP Oct 9 07:52:29.037812 dockerd[1667]: time="2024-10-09T07:52:29.037642154Z" level=info msg="Loading containers: done." Oct 9 07:52:29.066632 dockerd[1667]: time="2024-10-09T07:52:29.066545444Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 07:52:29.066848 dockerd[1667]: time="2024-10-09T07:52:29.066717127Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 9 07:52:29.066926 dockerd[1667]: time="2024-10-09T07:52:29.066888930Z" level=info msg="Daemon has completed initialization" Oct 9 07:52:29.119003 dockerd[1667]: time="2024-10-09T07:52:29.117979515Z" level=info msg="API listen on /run/docker.sock" Oct 9 07:52:29.118823 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 07:52:29.120405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 07:52:29.127470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:52:29.314295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 07:52:29.328568 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:52:29.427582 kubelet[1816]: E1009 07:52:29.427533 1816 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:52:29.433227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:52:29.433422 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:52:30.327385 containerd[1464]: time="2024-10-09T07:52:30.326963353Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\"" Oct 9 07:52:30.970944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302655592.mount: Deactivated successfully. 
Oct 9 07:52:32.486253 containerd[1464]: time="2024-10-09T07:52:32.486181055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:32.487922 containerd[1464]: time="2024-10-09T07:52:32.487761557Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32754097" Oct 9 07:52:32.489361 containerd[1464]: time="2024-10-09T07:52:32.489270871Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:32.495977 containerd[1464]: time="2024-10-09T07:52:32.495230837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:32.497978 containerd[1464]: time="2024-10-09T07:52:32.497918562Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 2.170888202s" Oct 9 07:52:32.498400 containerd[1464]: time="2024-10-09T07:52:32.498104662Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\"" Oct 9 07:52:32.536486 containerd[1464]: time="2024-10-09T07:52:32.536419761Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\"" Oct 9 07:52:34.298141 containerd[1464]: time="2024-10-09T07:52:34.298060828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:34.300142 containerd[1464]: time="2024-10-09T07:52:34.300056632Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29591652" Oct 9 07:52:34.302086 containerd[1464]: time="2024-10-09T07:52:34.301944736Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:34.306665 containerd[1464]: time="2024-10-09T07:52:34.306558806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:34.309101 containerd[1464]: time="2024-10-09T07:52:34.308709862Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 1.772238655s" Oct 9 07:52:34.309101 containerd[1464]: time="2024-10-09T07:52:34.308797639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\"" Oct 9 07:52:34.342135 containerd[1464]: time="2024-10-09T07:52:34.342071315Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 9 07:52:35.633605 containerd[1464]: time="2024-10-09T07:52:35.633425810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:35.636284 containerd[1464]: time="2024-10-09T07:52:35.636126812Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17779987" Oct 9 07:52:35.637149 containerd[1464]: time="2024-10-09T07:52:35.637069147Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:35.643827 containerd[1464]: time="2024-10-09T07:52:35.643652534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:35.646730 containerd[1464]: time="2024-10-09T07:52:35.646439182Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 1.304311075s" Oct 9 07:52:35.646730 containerd[1464]: time="2024-10-09T07:52:35.646506452Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\"" Oct 9 07:52:35.682360 containerd[1464]: time="2024-10-09T07:52:35.682307398Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\"" Oct 9 07:52:35.685196 systemd-resolved[1328]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Oct 9 07:52:36.936845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1842462878.mount: Deactivated successfully. 
Oct 9 07:52:37.533810 containerd[1464]: time="2024-10-09T07:52:37.533731244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:37.535446 containerd[1464]: time="2024-10-09T07:52:37.535363932Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29039362" Oct 9 07:52:37.537143 containerd[1464]: time="2024-10-09T07:52:37.537066571Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:37.541837 containerd[1464]: time="2024-10-09T07:52:37.541732940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:37.543597 containerd[1464]: time="2024-10-09T07:52:37.542952585Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 1.860587268s" Oct 9 07:52:37.543597 containerd[1464]: time="2024-10-09T07:52:37.543013683Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\"" Oct 9 07:52:37.577292 containerd[1464]: time="2024-10-09T07:52:37.577182575Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 07:52:38.177598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264375747.mount: Deactivated successfully. 
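[Editor's note] Each `Pulled image ...` summary above reports the image size in bytes and the wall-clock duration, so effective pull throughput can be computed directly from the log. A small sketch, assuming the message format shown above stays stable (the function name is the editor's, not containerd's):

```python
import re

def pull_rate_mib_s(msg: str) -> float:
    """Parse a containerd pull-summary message and return throughput in MiB/s.

    Handles both raw quotes and the backslash-escaped quotes seen in the
    journal rendering above (size \"NNN\").
    """
    size = int(re.search(r'size \\?"(\d+)\\?"', msg).group(1))
    secs = float(re.search(r'in ([\d.]+)s', msg).group(1))
    return size / secs / (1024 * 1024)

# Example using the kube-proxy pull recorded above:
msg = 'Pulled image ... size \"29038381\" in 1.860587268s'
print(f"{pull_rate_mib_s(msg):.1f} MiB/s")  # ≈ 14.9 MiB/s
```

By the same arithmetic, the kube-apiserver pull above (32,750,897 bytes in 2.17s) ran at roughly 14.4 MiB/s, so the pulls on this droplet were network-bound at a fairly consistent rate.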
Oct 9 07:52:38.774422 systemd-resolved[1328]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Oct 9 07:52:39.316420 containerd[1464]: time="2024-10-09T07:52:39.316351897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:39.319178 containerd[1464]: time="2024-10-09T07:52:39.319094074Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Oct 9 07:52:39.321449 containerd[1464]: time="2024-10-09T07:52:39.321369687Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:39.331895 containerd[1464]: time="2024-10-09T07:52:39.331776353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:39.334523 containerd[1464]: time="2024-10-09T07:52:39.333889533Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.756651678s" Oct 9 07:52:39.334523 containerd[1464]: time="2024-10-09T07:52:39.333961004Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 9 07:52:39.369542 containerd[1464]: time="2024-10-09T07:52:39.369276769Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 07:52:39.603096 systemd[1]: kubelet.service: Scheduled restart job, restart 
counter is at 2. Oct 9 07:52:39.617950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:52:39.860428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:52:39.861441 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 07:52:39.937970 kubelet[1978]: E1009 07:52:39.937903 1978 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 07:52:39.941056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 07:52:39.941267 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 07:52:40.068928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540408138.mount: Deactivated successfully. 
Oct 9 07:52:40.076832 containerd[1464]: time="2024-10-09T07:52:40.076736975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:40.077961 containerd[1464]: time="2024-10-09T07:52:40.077894285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Oct 9 07:52:40.079593 containerd[1464]: time="2024-10-09T07:52:40.079527600Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:40.082887 containerd[1464]: time="2024-10-09T07:52:40.082838901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:40.084863 containerd[1464]: time="2024-10-09T07:52:40.084557028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 715.224784ms" Oct 9 07:52:40.084863 containerd[1464]: time="2024-10-09T07:52:40.084607140Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 9 07:52:40.120315 containerd[1464]: time="2024-10-09T07:52:40.120061333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 9 07:52:40.708338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051387845.mount: Deactivated successfully. 
Oct 9 07:52:42.874430 containerd[1464]: time="2024-10-09T07:52:42.874362966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:42.877813 containerd[1464]: time="2024-10-09T07:52:42.877699846Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Oct 9 07:52:42.880211 containerd[1464]: time="2024-10-09T07:52:42.880149221Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:42.887029 containerd[1464]: time="2024-10-09T07:52:42.886913623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:52:42.890220 containerd[1464]: time="2024-10-09T07:52:42.889587126Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.769477865s" Oct 9 07:52:42.890220 containerd[1464]: time="2024-10-09T07:52:42.889655096Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Oct 9 07:52:46.151134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:52:46.163498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:52:46.197927 systemd[1]: Reloading requested from client PID 2106 ('systemctl') (unit session-7.scope)... Oct 9 07:52:46.197946 systemd[1]: Reloading... 
Oct 9 07:52:46.334237 zram_generator::config[2141]: No configuration found. Oct 9 07:52:46.529633 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:52:46.613384 systemd[1]: Reloading finished in 414 ms. Oct 9 07:52:46.669831 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 07:52:46.670185 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 07:52:46.670659 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:52:46.678698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:52:46.835006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:52:46.848598 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:52:46.937451 kubelet[2198]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:52:46.937451 kubelet[2198]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:52:46.937451 kubelet[2198]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 07:52:46.940062 kubelet[2198]: I1009 07:52:46.938863 2198 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:52:47.301689 kubelet[2198]: I1009 07:52:47.301646 2198 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 07:52:47.301870 kubelet[2198]: I1009 07:52:47.301860 2198 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:52:47.302356 kubelet[2198]: I1009 07:52:47.302325 2198 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 07:52:47.325427 kubelet[2198]: I1009 07:52:47.325388 2198 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:52:47.325947 kubelet[2198]: E1009 07:52:47.325923 2198 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://209.38.129.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:47.341961 kubelet[2198]: I1009 07:52:47.341901 2198 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 07:52:47.345071 kubelet[2198]: I1009 07:52:47.344643 2198 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:52:47.345071 kubelet[2198]: I1009 07:52:47.344729 2198 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.1.0-0-871bb8dd75","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:52:47.345400 kubelet[2198]: I1009 07:52:47.345170 2198 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 
07:52:47.345400 kubelet[2198]: I1009 07:52:47.345191 2198 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:52:47.345400 kubelet[2198]: I1009 07:52:47.345376 2198 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:52:47.348406 kubelet[2198]: W1009 07:52:47.348275 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.129.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-0-871bb8dd75&limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:47.348406 kubelet[2198]: E1009 07:52:47.348368 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://209.38.129.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-0-871bb8dd75&limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:47.349574 kubelet[2198]: I1009 07:52:47.349498 2198 kubelet.go:400] "Attempting to sync node with API server" Oct 9 07:52:47.349574 kubelet[2198]: I1009 07:52:47.349546 2198 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:52:47.349574 kubelet[2198]: I1009 07:52:47.349582 2198 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:52:47.349803 kubelet[2198]: I1009 07:52:47.349608 2198 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:52:47.353732 kubelet[2198]: W1009 07:52:47.352703 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://209.38.129.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:47.353732 kubelet[2198]: E1009 07:52:47.352779 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://209.38.129.97:6443/api/v1/services?limit=500&resourceVersion=0": 
dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:47.353732 kubelet[2198]: I1009 07:52:47.353416 2198 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 9 07:52:47.355209 kubelet[2198]: I1009 07:52:47.355010 2198 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:52:47.355209 kubelet[2198]: W1009 07:52:47.355108 2198 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 07:52:47.356217 kubelet[2198]: I1009 07:52:47.356196 2198 server.go:1264] "Started kubelet" Oct 9 07:52:47.361278 kubelet[2198]: I1009 07:52:47.360735 2198 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:52:47.361831 kubelet[2198]: I1009 07:52:47.361778 2198 server.go:455] "Adding debug handlers to kubelet server" Oct 9 07:52:47.363029 kubelet[2198]: I1009 07:52:47.362889 2198 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:52:47.363717 kubelet[2198]: I1009 07:52:47.363305 2198 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:52:47.363816 kubelet[2198]: E1009 07:52:47.363528 2198 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://209.38.129.97:6443/api/v1/namespaces/default/events\": dial tcp 209.38.129.97:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.1.0-0-871bb8dd75.17fcb98bdc9af2fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.1.0-0-871bb8dd75,UID:ci-4081.1.0-0-871bb8dd75,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.1.0-0-871bb8dd75,},FirstTimestamp:2024-10-09 07:52:47.356162812 +0000 UTC 
m=+0.502580880,LastTimestamp:2024-10-09 07:52:47.356162812 +0000 UTC m=+0.502580880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.1.0-0-871bb8dd75,}" Oct 9 07:52:47.366608 kubelet[2198]: I1009 07:52:47.366446 2198 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:52:47.368775 kubelet[2198]: I1009 07:52:47.368750 2198 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:52:47.370065 kubelet[2198]: I1009 07:52:47.369096 2198 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 07:52:47.370065 kubelet[2198]: I1009 07:52:47.369169 2198 reconciler.go:26] "Reconciler: start to sync state" Oct 9 07:52:47.370065 kubelet[2198]: W1009 07:52:47.369656 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.129.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:47.370065 kubelet[2198]: E1009 07:52:47.369725 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://209.38.129.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:47.370373 kubelet[2198]: E1009 07:52:47.370020 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.129.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-0-871bb8dd75?timeout=10s\": dial tcp 209.38.129.97:6443: connect: connection refused" interval="200ms" Oct 9 07:52:47.375321 kubelet[2198]: I1009 07:52:47.375278 2198 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:52:47.375470 kubelet[2198]: I1009 07:52:47.375448 2198 factory.go:219] Registration of the crio container 
factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:52:47.377385 kubelet[2198]: I1009 07:52:47.377340 2198 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:52:47.394690 kubelet[2198]: I1009 07:52:47.394617 2198 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:52:47.396630 kubelet[2198]: I1009 07:52:47.396586 2198 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:52:47.396795 kubelet[2198]: I1009 07:52:47.396784 2198 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:52:47.397154 kubelet[2198]: I1009 07:52:47.396881 2198 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 07:52:47.397154 kubelet[2198]: E1009 07:52:47.396948 2198 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:52:47.413572 kubelet[2198]: W1009 07:52:47.413502 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.129.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:47.415118 kubelet[2198]: E1009 07:52:47.413922 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://209.38.129.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:47.415118 kubelet[2198]: E1009 07:52:47.414092 2198 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:52:47.419229 kubelet[2198]: I1009 07:52:47.419194 2198 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:52:47.419422 kubelet[2198]: I1009 07:52:47.419409 2198 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:52:47.419536 kubelet[2198]: I1009 07:52:47.419528 2198 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:52:47.422494 kubelet[2198]: I1009 07:52:47.422463 2198 policy_none.go:49] "None policy: Start" Oct 9 07:52:47.423858 kubelet[2198]: I1009 07:52:47.423836 2198 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:52:47.424062 kubelet[2198]: I1009 07:52:47.424032 2198 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:52:47.434345 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 07:52:47.453308 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 07:52:47.459620 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 9 07:52:47.471010 kubelet[2198]: I1009 07:52:47.470822 2198 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.471316 kubelet[2198]: E1009 07:52:47.471247 2198 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.129.97:6443/api/v1/nodes\": dial tcp 209.38.129.97:6443: connect: connection refused" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.472073 kubelet[2198]: I1009 07:52:47.471718 2198 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 07:52:47.472073 kubelet[2198]: I1009 07:52:47.471915 2198 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 07:52:47.472073 kubelet[2198]: I1009 07:52:47.472029 2198 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:52:47.474057 kubelet[2198]: E1009 07:52:47.473582 2198 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.1.0-0-871bb8dd75\" not found" Oct 9 07:52:47.497885 kubelet[2198]: I1009 07:52:47.497812 2198 topology_manager.go:215] "Topology Admit Handler" podUID="e56066d3508b2cc1058298d7ce14b334" podNamespace="kube-system" podName="kube-apiserver-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.500307 kubelet[2198]: I1009 07:52:47.499902 2198 topology_manager.go:215] "Topology Admit Handler" podUID="45a53c1a7802dc987a7856a56c2bdf19" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.504073 kubelet[2198]: I1009 07:52:47.502579 2198 topology_manager.go:215] "Topology Admit Handler" podUID="c451199cbd71057bd3212a2189bac2a8" podNamespace="kube-system" podName="kube-scheduler-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.513203 systemd[1]: Created slice kubepods-burstable-pode56066d3508b2cc1058298d7ce14b334.slice - libcontainer container 
kubepods-burstable-pode56066d3508b2cc1058298d7ce14b334.slice. Oct 9 07:52:47.540698 systemd[1]: Created slice kubepods-burstable-pod45a53c1a7802dc987a7856a56c2bdf19.slice - libcontainer container kubepods-burstable-pod45a53c1a7802dc987a7856a56c2bdf19.slice. Oct 9 07:52:47.547661 systemd[1]: Created slice kubepods-burstable-podc451199cbd71057bd3212a2189bac2a8.slice - libcontainer container kubepods-burstable-podc451199cbd71057bd3212a2189bac2a8.slice. Oct 9 07:52:47.571350 kubelet[2198]: I1009 07:52:47.569992 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c451199cbd71057bd3212a2189bac2a8-kubeconfig\") pod \"kube-scheduler-ci-4081.1.0-0-871bb8dd75\" (UID: \"c451199cbd71057bd3212a2189bac2a8\") " pod="kube-system/kube-scheduler-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.571350 kubelet[2198]: E1009 07:52:47.571176 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.129.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-0-871bb8dd75?timeout=10s\": dial tcp 209.38.129.97:6443: connect: connection refused" interval="400ms" Oct 9 07:52:47.672204 kubelet[2198]: I1009 07:52:47.672095 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45a53c1a7802dc987a7856a56c2bdf19-kubeconfig\") pod \"kube-controller-manager-ci-4081.1.0-0-871bb8dd75\" (UID: \"45a53c1a7802dc987a7856a56c2bdf19\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.672863 kubelet[2198]: I1009 07:52:47.672148 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45a53c1a7802dc987a7856a56c2bdf19-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.1.0-0-871bb8dd75\" (UID: 
\"45a53c1a7802dc987a7856a56c2bdf19\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.672863 kubelet[2198]: I1009 07:52:47.672548 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e56066d3508b2cc1058298d7ce14b334-k8s-certs\") pod \"kube-apiserver-ci-4081.1.0-0-871bb8dd75\" (UID: \"e56066d3508b2cc1058298d7ce14b334\") " pod="kube-system/kube-apiserver-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.672863 kubelet[2198]: I1009 07:52:47.672575 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e56066d3508b2cc1058298d7ce14b334-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.1.0-0-871bb8dd75\" (UID: \"e56066d3508b2cc1058298d7ce14b334\") " pod="kube-system/kube-apiserver-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.672863 kubelet[2198]: I1009 07:52:47.672600 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45a53c1a7802dc987a7856a56c2bdf19-ca-certs\") pod \"kube-controller-manager-ci-4081.1.0-0-871bb8dd75\" (UID: \"45a53c1a7802dc987a7856a56c2bdf19\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.672863 kubelet[2198]: I1009 07:52:47.672627 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45a53c1a7802dc987a7856a56c2bdf19-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.1.0-0-871bb8dd75\" (UID: \"45a53c1a7802dc987a7856a56c2bdf19\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.673225 kubelet[2198]: I1009 07:52:47.672652 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/45a53c1a7802dc987a7856a56c2bdf19-k8s-certs\") pod \"kube-controller-manager-ci-4081.1.0-0-871bb8dd75\" (UID: \"45a53c1a7802dc987a7856a56c2bdf19\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.673225 kubelet[2198]: I1009 07:52:47.672713 2198 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e56066d3508b2cc1058298d7ce14b334-ca-certs\") pod \"kube-apiserver-ci-4081.1.0-0-871bb8dd75\" (UID: \"e56066d3508b2cc1058298d7ce14b334\") " pod="kube-system/kube-apiserver-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.673882 kubelet[2198]: I1009 07:52:47.673605 2198 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.674024 kubelet[2198]: E1009 07:52:47.673987 2198 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.129.97:6443/api/v1/nodes\": dial tcp 209.38.129.97:6443: connect: connection refused" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:47.835440 kubelet[2198]: E1009 07:52:47.835290 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:47.836567 containerd[1464]: time="2024-10-09T07:52:47.836512002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.1.0-0-871bb8dd75,Uid:e56066d3508b2cc1058298d7ce14b334,Namespace:kube-system,Attempt:0,}" Oct 9 07:52:47.839408 systemd-resolved[1328]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Oct 9 07:52:47.845452 kubelet[2198]: E1009 07:52:47.845400 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:47.851111 kubelet[2198]: E1009 07:52:47.850570 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:47.851340 containerd[1464]: time="2024-10-09T07:52:47.851291446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.1.0-0-871bb8dd75,Uid:c451199cbd71057bd3212a2189bac2a8,Namespace:kube-system,Attempt:0,}" Oct 9 07:52:47.851746 containerd[1464]: time="2024-10-09T07:52:47.851598310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.1.0-0-871bb8dd75,Uid:45a53c1a7802dc987a7856a56c2bdf19,Namespace:kube-system,Attempt:0,}" Oct 9 07:52:47.971896 kubelet[2198]: E1009 07:52:47.971795 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.129.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-0-871bb8dd75?timeout=10s\": dial tcp 209.38.129.97:6443: connect: connection refused" interval="800ms" Oct 9 07:52:48.076222 kubelet[2198]: I1009 07:52:48.075672 2198 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:48.076222 kubelet[2198]: E1009 07:52:48.076098 2198 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.129.97:6443/api/v1/nodes\": dial tcp 209.38.129.97:6443: connect: connection refused" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:48.406768 kubelet[2198]: W1009 07:52:48.406573 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://209.38.129.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:48.406768 kubelet[2198]: E1009 07:52:48.406683 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://209.38.129.97:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:48.411272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2007438967.mount: Deactivated successfully. Oct 9 07:52:48.415990 kubelet[2198]: W1009 07:52:48.415338 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://209.38.129.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:48.415990 kubelet[2198]: E1009 07:52:48.415410 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://209.38.129.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:48.415990 kubelet[2198]: W1009 07:52:48.415519 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://209.38.129.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-0-871bb8dd75&limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:48.415990 kubelet[2198]: E1009 07:52:48.415605 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://209.38.129.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.1.0-0-871bb8dd75&limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:48.424800 containerd[1464]: time="2024-10-09T07:52:48.424604522Z" 
level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:52:48.426915 containerd[1464]: time="2024-10-09T07:52:48.426791577Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:52:48.428353 containerd[1464]: time="2024-10-09T07:52:48.428125802Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Oct 9 07:52:48.428571 containerd[1464]: time="2024-10-09T07:52:48.428530853Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:52:48.429680 containerd[1464]: time="2024-10-09T07:52:48.429599795Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:52:48.432319 containerd[1464]: time="2024-10-09T07:52:48.432111834Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:52:48.433005 containerd[1464]: time="2024-10-09T07:52:48.432923057Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 07:52:48.435996 containerd[1464]: time="2024-10-09T07:52:48.435934758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 07:52:48.440511 containerd[1464]: time="2024-10-09T07:52:48.440134616Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 588.728342ms" Oct 9 07:52:48.446117 containerd[1464]: time="2024-10-09T07:52:48.445884332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.252843ms" Oct 9 07:52:48.448090 containerd[1464]: time="2024-10-09T07:52:48.447878116Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.164538ms" Oct 9 07:52:48.651794 containerd[1464]: time="2024-10-09T07:52:48.651277451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:52:48.651794 containerd[1464]: time="2024-10-09T07:52:48.651373326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:52:48.651794 containerd[1464]: time="2024-10-09T07:52:48.651415401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:52:48.651794 containerd[1464]: time="2024-10-09T07:52:48.651628142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:52:48.658911 containerd[1464]: time="2024-10-09T07:52:48.655347517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:52:48.658911 containerd[1464]: time="2024-10-09T07:52:48.655432262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:52:48.658911 containerd[1464]: time="2024-10-09T07:52:48.655472691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:52:48.658911 containerd[1464]: time="2024-10-09T07:52:48.655592241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:52:48.664191 containerd[1464]: time="2024-10-09T07:52:48.663979307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:52:48.664191 containerd[1464]: time="2024-10-09T07:52:48.664061979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:52:48.664191 containerd[1464]: time="2024-10-09T07:52:48.664091807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:52:48.664475 containerd[1464]: time="2024-10-09T07:52:48.664352816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:52:48.693865 systemd[1]: Started cri-containerd-ebf058c7739a5a63f9e4db1b8f69dfcf4db060e388238ed5991f03922e99721d.scope - libcontainer container ebf058c7739a5a63f9e4db1b8f69dfcf4db060e388238ed5991f03922e99721d. 
Oct 9 07:52:48.706748 systemd[1]: Started cri-containerd-0307b65020cddef604bb87e72757f81820cff144d2d44aa705b9770c83861b53.scope - libcontainer container 0307b65020cddef604bb87e72757f81820cff144d2d44aa705b9770c83861b53. Oct 9 07:52:48.710395 systemd[1]: Started cri-containerd-8fd6b591ee0219f508e75af246e6502a9f49b2ef230eed7d9d543b63e6ede2ab.scope - libcontainer container 8fd6b591ee0219f508e75af246e6502a9f49b2ef230eed7d9d543b63e6ede2ab. Oct 9 07:52:48.773317 kubelet[2198]: E1009 07:52:48.773031 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://209.38.129.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.1.0-0-871bb8dd75?timeout=10s\": dial tcp 209.38.129.97:6443: connect: connection refused" interval="1.6s" Oct 9 07:52:48.784020 containerd[1464]: time="2024-10-09T07:52:48.783965983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.1.0-0-871bb8dd75,Uid:e56066d3508b2cc1058298d7ce14b334,Namespace:kube-system,Attempt:0,} returns sandbox id \"0307b65020cddef604bb87e72757f81820cff144d2d44aa705b9770c83861b53\"" Oct 9 07:52:48.791147 kubelet[2198]: E1009 07:52:48.791001 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:48.792095 containerd[1464]: time="2024-10-09T07:52:48.791885439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.1.0-0-871bb8dd75,Uid:45a53c1a7802dc987a7856a56c2bdf19,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebf058c7739a5a63f9e4db1b8f69dfcf4db060e388238ed5991f03922e99721d\"" Oct 9 07:52:48.793744 kubelet[2198]: E1009 07:52:48.793715 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:48.801791 containerd[1464]: 
time="2024-10-09T07:52:48.801687989Z" level=info msg="CreateContainer within sandbox \"0307b65020cddef604bb87e72757f81820cff144d2d44aa705b9770c83861b53\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 07:52:48.802598 containerd[1464]: time="2024-10-09T07:52:48.802496355Z" level=info msg="CreateContainer within sandbox \"ebf058c7739a5a63f9e4db1b8f69dfcf4db060e388238ed5991f03922e99721d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 07:52:48.809645 containerd[1464]: time="2024-10-09T07:52:48.809587739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.1.0-0-871bb8dd75,Uid:c451199cbd71057bd3212a2189bac2a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fd6b591ee0219f508e75af246e6502a9f49b2ef230eed7d9d543b63e6ede2ab\"" Oct 9 07:52:48.810706 kubelet[2198]: E1009 07:52:48.810667 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:48.814019 containerd[1464]: time="2024-10-09T07:52:48.813864387Z" level=info msg="CreateContainer within sandbox \"8fd6b591ee0219f508e75af246e6502a9f49b2ef230eed7d9d543b63e6ede2ab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 07:52:48.827005 containerd[1464]: time="2024-10-09T07:52:48.826952507Z" level=info msg="CreateContainer within sandbox \"ebf058c7739a5a63f9e4db1b8f69dfcf4db060e388238ed5991f03922e99721d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3c337116666e92761ff1e42bbafee65f76e1c97d2c6809af765dcb89fc76e081\"" Oct 9 07:52:48.827961 containerd[1464]: time="2024-10-09T07:52:48.827929693Z" level=info msg="StartContainer for \"3c337116666e92761ff1e42bbafee65f76e1c97d2c6809af765dcb89fc76e081\"" Oct 9 07:52:48.837335 containerd[1464]: time="2024-10-09T07:52:48.837277509Z" level=info msg="CreateContainer within sandbox 
\"0307b65020cddef604bb87e72757f81820cff144d2d44aa705b9770c83861b53\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"157e7cc6afe01526d3247e839de663536178d65360da2a8029c3752d2556b902\"" Oct 9 07:52:48.838446 containerd[1464]: time="2024-10-09T07:52:48.838242963Z" level=info msg="StartContainer for \"157e7cc6afe01526d3247e839de663536178d65360da2a8029c3752d2556b902\"" Oct 9 07:52:48.842344 containerd[1464]: time="2024-10-09T07:52:48.842212199Z" level=info msg="CreateContainer within sandbox \"8fd6b591ee0219f508e75af246e6502a9f49b2ef230eed7d9d543b63e6ede2ab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"09f9d2655cccc79ad22bfe2852ab76def571ac380247d9f00db9a95e28692e53\"" Oct 9 07:52:48.843694 containerd[1464]: time="2024-10-09T07:52:48.843658515Z" level=info msg="StartContainer for \"09f9d2655cccc79ad22bfe2852ab76def571ac380247d9f00db9a95e28692e53\"" Oct 9 07:52:48.879005 kubelet[2198]: I1009 07:52:48.878955 2198 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:48.879522 kubelet[2198]: E1009 07:52:48.879489 2198 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://209.38.129.97:6443/api/v1/nodes\": dial tcp 209.38.129.97:6443: connect: connection refused" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:48.884403 systemd[1]: Started cri-containerd-3c337116666e92761ff1e42bbafee65f76e1c97d2c6809af765dcb89fc76e081.scope - libcontainer container 3c337116666e92761ff1e42bbafee65f76e1c97d2c6809af765dcb89fc76e081. Oct 9 07:52:48.894465 systemd[1]: Started cri-containerd-157e7cc6afe01526d3247e839de663536178d65360da2a8029c3752d2556b902.scope - libcontainer container 157e7cc6afe01526d3247e839de663536178d65360da2a8029c3752d2556b902. 
Oct 9 07:52:48.902326 systemd[1]: Started cri-containerd-09f9d2655cccc79ad22bfe2852ab76def571ac380247d9f00db9a95e28692e53.scope - libcontainer container 09f9d2655cccc79ad22bfe2852ab76def571ac380247d9f00db9a95e28692e53. Oct 9 07:52:48.924377 kubelet[2198]: W1009 07:52:48.923438 2198 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://209.38.129.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:48.924377 kubelet[2198]: E1009 07:52:48.923516 2198 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://209.38.129.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:48.984716 containerd[1464]: time="2024-10-09T07:52:48.984644138Z" level=info msg="StartContainer for \"157e7cc6afe01526d3247e839de663536178d65360da2a8029c3752d2556b902\" returns successfully" Oct 9 07:52:48.991210 containerd[1464]: time="2024-10-09T07:52:48.991129251Z" level=info msg="StartContainer for \"3c337116666e92761ff1e42bbafee65f76e1c97d2c6809af765dcb89fc76e081\" returns successfully" Oct 9 07:52:48.997373 containerd[1464]: time="2024-10-09T07:52:48.996938096Z" level=info msg="StartContainer for \"09f9d2655cccc79ad22bfe2852ab76def571ac380247d9f00db9a95e28692e53\" returns successfully" Oct 9 07:52:49.331980 kubelet[2198]: E1009 07:52:49.331440 2198 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://209.38.129.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 209.38.129.97:6443: connect: connection refused Oct 9 07:52:49.424827 kubelet[2198]: E1009 07:52:49.424584 2198 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:49.433561 kubelet[2198]: E1009 07:52:49.432679 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:49.437403 kubelet[2198]: E1009 07:52:49.437300 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:50.435067 kubelet[2198]: E1009 07:52:50.434891 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:50.436495 kubelet[2198]: E1009 07:52:50.436418 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:50.480659 kubelet[2198]: I1009 07:52:50.480578 2198 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:51.251462 kubelet[2198]: E1009 07:52:51.251428 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:51.544201 kubelet[2198]: E1009 07:52:51.543792 2198 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.1.0-0-871bb8dd75\" not found" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:51.603233 kubelet[2198]: I1009 07:52:51.603098 2198 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:52.352547 kubelet[2198]: I1009 07:52:52.352469 2198 
apiserver.go:52] "Watching apiserver" Oct 9 07:52:52.370280 kubelet[2198]: I1009 07:52:52.370199 2198 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 07:52:52.584103 kubelet[2198]: W1009 07:52:52.583094 2198 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:52:52.584720 kubelet[2198]: E1009 07:52:52.584693 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:53.105634 kubelet[2198]: W1009 07:52:53.105492 2198 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:52:53.106302 kubelet[2198]: E1009 07:52:53.106270 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:53.441488 kubelet[2198]: E1009 07:52:53.441139 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:53.442773 kubelet[2198]: E1009 07:52:53.442644 2198 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:53.853920 systemd[1]: Reloading requested from client PID 2472 ('systemctl') (unit session-7.scope)... Oct 9 07:52:53.853947 systemd[1]: Reloading... Oct 9 07:52:54.014126 zram_generator::config[2520]: No configuration found. 
Oct 9 07:52:54.218085 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 07:52:54.417622 systemd[1]: Reloading finished in 562 ms. Oct 9 07:52:54.489576 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:52:54.504939 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 07:52:54.505581 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:52:54.515676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 07:52:54.726386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 07:52:54.728615 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 07:52:54.837358 kubelet[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 07:52:54.837358 kubelet[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 07:52:54.837358 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 07:52:54.837358 kubelet[2562]: I1009 07:52:54.836159 2562 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 07:52:54.847814 kubelet[2562]: I1009 07:52:54.847748 2562 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 07:52:54.847814 kubelet[2562]: I1009 07:52:54.847792 2562 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 07:52:54.849824 kubelet[2562]: I1009 07:52:54.849587 2562 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 07:52:54.853171 kubelet[2562]: I1009 07:52:54.853026 2562 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 07:52:54.854875 kubelet[2562]: I1009 07:52:54.854653 2562 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 07:52:54.864119 kubelet[2562]: I1009 07:52:54.863507 2562 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 07:52:54.864119 kubelet[2562]: I1009 07:52:54.863800 2562 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 07:52:54.864119 kubelet[2562]: I1009 07:52:54.863842 2562 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.1.0-0-871bb8dd75","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 07:52:54.864119 kubelet[2562]: I1009 07:52:54.864102 2562 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 
07:52:54.864461 kubelet[2562]: I1009 07:52:54.864115 2562 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 07:52:54.864461 kubelet[2562]: I1009 07:52:54.864166 2562 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:52:54.864461 kubelet[2562]: I1009 07:52:54.864285 2562 kubelet.go:400] "Attempting to sync node with API server" Oct 9 07:52:54.864461 kubelet[2562]: I1009 07:52:54.864298 2562 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 07:52:54.864461 kubelet[2562]: I1009 07:52:54.864324 2562 kubelet.go:312] "Adding apiserver pod source" Oct 9 07:52:54.864461 kubelet[2562]: I1009 07:52:54.864340 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 07:52:54.877080 kubelet[2562]: I1009 07:52:54.876159 2562 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 9 07:52:54.877080 kubelet[2562]: I1009 07:52:54.876402 2562 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 07:52:54.877080 kubelet[2562]: I1009 07:52:54.876862 2562 server.go:1264] "Started kubelet" Oct 9 07:52:54.880730 kubelet[2562]: I1009 07:52:54.880696 2562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 07:52:54.892377 kubelet[2562]: I1009 07:52:54.892295 2562 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 07:52:54.897758 kubelet[2562]: I1009 07:52:54.897723 2562 server.go:455] "Adding debug handlers to kubelet server" Oct 9 07:52:54.900139 kubelet[2562]: I1009 07:52:54.900070 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 07:52:54.900595 kubelet[2562]: I1009 07:52:54.900576 2562 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 07:52:54.906170 kubelet[2562]: I1009 07:52:54.906121 2562 
volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 07:52:54.909081 kubelet[2562]: I1009 07:52:54.908385 2562 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 07:52:54.909081 kubelet[2562]: I1009 07:52:54.908597 2562 reconciler.go:26] "Reconciler: start to sync state" Oct 9 07:52:54.929130 kubelet[2562]: I1009 07:52:54.927321 2562 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 07:52:54.933800 kubelet[2562]: I1009 07:52:54.933751 2562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 07:52:54.934376 kubelet[2562]: I1009 07:52:54.934345 2562 factory.go:221] Registration of the containerd container factory successfully Oct 9 07:52:54.934376 kubelet[2562]: I1009 07:52:54.934368 2562 factory.go:221] Registration of the systemd container factory successfully Oct 9 07:52:54.936036 kubelet[2562]: I1009 07:52:54.936005 2562 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 07:52:54.936286 kubelet[2562]: I1009 07:52:54.936269 2562 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 07:52:54.936902 kubelet[2562]: I1009 07:52:54.936394 2562 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 07:52:54.936902 kubelet[2562]: E1009 07:52:54.936485 2562 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 07:52:54.943573 kubelet[2562]: E1009 07:52:54.943534 2562 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 07:52:55.009321 kubelet[2562]: I1009 07:52:55.009116 2562 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.024321 kubelet[2562]: I1009 07:52:55.024252 2562 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 07:52:55.024321 kubelet[2562]: I1009 07:52:55.024273 2562 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 07:52:55.024321 kubelet[2562]: I1009 07:52:55.024296 2562 state_mem.go:36] "Initialized new in-memory state store" Oct 9 07:52:55.024540 kubelet[2562]: I1009 07:52:55.024486 2562 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 07:52:55.024540 kubelet[2562]: I1009 07:52:55.024496 2562 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 07:52:55.024540 kubelet[2562]: I1009 07:52:55.024515 2562 policy_none.go:49] "None policy: Start" Oct 9 07:52:55.026715 kubelet[2562]: I1009 07:52:55.025517 2562 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 07:52:55.026715 kubelet[2562]: I1009 07:52:55.025554 2562 state_mem.go:35] "Initializing new in-memory state store" Oct 9 07:52:55.026715 kubelet[2562]: I1009 07:52:55.025718 2562 state_mem.go:75] "Updated machine memory state" Oct 9 07:52:55.032418 kubelet[2562]: I1009 07:52:55.032373 2562 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.032585 kubelet[2562]: I1009 07:52:55.032466 2562 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.037512 kubelet[2562]: E1009 07:52:55.037480 2562 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 07:52:55.041720 kubelet[2562]: I1009 07:52:55.041685 2562 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 
07:52:55.042058 kubelet[2562]: I1009 07:52:55.041866 2562 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 07:52:55.042058 kubelet[2562]: I1009 07:52:55.041971 2562 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 07:52:55.238756 kubelet[2562]: I1009 07:52:55.238601 2562 topology_manager.go:215] "Topology Admit Handler" podUID="e56066d3508b2cc1058298d7ce14b334" podNamespace="kube-system" podName="kube-apiserver-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.239541 kubelet[2562]: I1009 07:52:55.238797 2562 topology_manager.go:215] "Topology Admit Handler" podUID="45a53c1a7802dc987a7856a56c2bdf19" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.239541 kubelet[2562]: I1009 07:52:55.239533 2562 topology_manager.go:215] "Topology Admit Handler" podUID="c451199cbd71057bd3212a2189bac2a8" podNamespace="kube-system" podName="kube-scheduler-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.249999 kubelet[2562]: W1009 07:52:55.249597 2562 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:52:55.249999 kubelet[2562]: W1009 07:52:55.249839 2562 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:52:55.249999 kubelet[2562]: E1009 07:52:55.249894 2562 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.1.0-0-871bb8dd75\" already exists" pod="kube-system/kube-apiserver-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.249999 kubelet[2562]: W1009 07:52:55.249947 2562 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:52:55.249999 kubelet[2562]: E1009 07:52:55.249979 2562 kubelet.go:1928] "Failed 
creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.1.0-0-871bb8dd75\" already exists" pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.311918 kubelet[2562]: I1009 07:52:55.311546 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e56066d3508b2cc1058298d7ce14b334-ca-certs\") pod \"kube-apiserver-ci-4081.1.0-0-871bb8dd75\" (UID: \"e56066d3508b2cc1058298d7ce14b334\") " pod="kube-system/kube-apiserver-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.311918 kubelet[2562]: I1009 07:52:55.311600 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45a53c1a7802dc987a7856a56c2bdf19-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.1.0-0-871bb8dd75\" (UID: \"45a53c1a7802dc987a7856a56c2bdf19\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.311918 kubelet[2562]: I1009 07:52:55.311630 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45a53c1a7802dc987a7856a56c2bdf19-k8s-certs\") pod \"kube-controller-manager-ci-4081.1.0-0-871bb8dd75\" (UID: \"45a53c1a7802dc987a7856a56c2bdf19\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.311918 kubelet[2562]: I1009 07:52:55.311654 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45a53c1a7802dc987a7856a56c2bdf19-kubeconfig\") pod \"kube-controller-manager-ci-4081.1.0-0-871bb8dd75\" (UID: \"45a53c1a7802dc987a7856a56c2bdf19\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.311918 kubelet[2562]: I1009 07:52:55.311679 2562 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45a53c1a7802dc987a7856a56c2bdf19-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.1.0-0-871bb8dd75\" (UID: \"45a53c1a7802dc987a7856a56c2bdf19\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.312378 kubelet[2562]: I1009 07:52:55.311703 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e56066d3508b2cc1058298d7ce14b334-k8s-certs\") pod \"kube-apiserver-ci-4081.1.0-0-871bb8dd75\" (UID: \"e56066d3508b2cc1058298d7ce14b334\") " pod="kube-system/kube-apiserver-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.312378 kubelet[2562]: I1009 07:52:55.311728 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e56066d3508b2cc1058298d7ce14b334-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.1.0-0-871bb8dd75\" (UID: \"e56066d3508b2cc1058298d7ce14b334\") " pod="kube-system/kube-apiserver-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.312378 kubelet[2562]: I1009 07:52:55.311752 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45a53c1a7802dc987a7856a56c2bdf19-ca-certs\") pod \"kube-controller-manager-ci-4081.1.0-0-871bb8dd75\" (UID: \"45a53c1a7802dc987a7856a56c2bdf19\") " pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.312378 kubelet[2562]: I1009 07:52:55.311776 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c451199cbd71057bd3212a2189bac2a8-kubeconfig\") pod \"kube-scheduler-ci-4081.1.0-0-871bb8dd75\" (UID: \"c451199cbd71057bd3212a2189bac2a8\") 
" pod="kube-system/kube-scheduler-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:55.555207 kubelet[2562]: E1009 07:52:55.555076 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:55.556351 kubelet[2562]: E1009 07:52:55.556315 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:55.556478 kubelet[2562]: E1009 07:52:55.556399 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:55.867105 kubelet[2562]: I1009 07:52:55.867055 2562 apiserver.go:52] "Watching apiserver" Oct 9 07:52:55.909415 kubelet[2562]: I1009 07:52:55.909336 2562 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 07:52:55.990808 kubelet[2562]: E1009 07:52:55.990767 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:55.992983 kubelet[2562]: E1009 07:52:55.992918 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:56.011082 kubelet[2562]: W1009 07:52:56.011020 2562 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 07:52:56.011287 kubelet[2562]: E1009 07:52:56.011134 2562 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.1.0-0-871bb8dd75\" already exists" 
pod="kube-system/kube-apiserver-ci-4081.1.0-0-871bb8dd75" Oct 9 07:52:56.013064 kubelet[2562]: E1009 07:52:56.011920 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:56.133373 kubelet[2562]: I1009 07:52:56.133161 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.1.0-0-871bb8dd75" podStartSLOduration=4.13313408 podStartE2EDuration="4.13313408s" podCreationTimestamp="2024-10-09 07:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:52:56.079088761 +0000 UTC m=+1.334625901" watchObservedRunningTime="2024-10-09 07:52:56.13313408 +0000 UTC m=+1.388671215" Oct 9 07:52:56.184745 kubelet[2562]: I1009 07:52:56.184685 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.1.0-0-871bb8dd75" podStartSLOduration=1.184662295 podStartE2EDuration="1.184662295s" podCreationTimestamp="2024-10-09 07:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:52:56.133881737 +0000 UTC m=+1.389418878" watchObservedRunningTime="2024-10-09 07:52:56.184662295 +0000 UTC m=+1.440199435" Oct 9 07:52:56.996925 kubelet[2562]: E1009 07:52:56.996523 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:58.000690 kubelet[2562]: E1009 07:52:58.000565 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:58.707667 kubelet[2562]: E1009 
07:52:58.707613 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:58.723228 kubelet[2562]: I1009 07:52:58.723164 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.1.0-0-871bb8dd75" podStartSLOduration=5.723142098 podStartE2EDuration="5.723142098s" podCreationTimestamp="2024-10-09 07:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:52:56.185245068 +0000 UTC m=+1.440782209" watchObservedRunningTime="2024-10-09 07:52:58.723142098 +0000 UTC m=+3.978679229" Oct 9 07:52:59.001806 kubelet[2562]: E1009 07:52:59.001610 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:59.248172 kubelet[2562]: E1009 07:52:59.248011 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:52:59.433875 kubelet[2562]: E1009 07:52:59.433816 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:00.004597 kubelet[2562]: E1009 07:53:00.004555 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:00.007090 kubelet[2562]: E1009 07:53:00.006136 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:00.456389 sudo[1651]: pam_unix(sudo:session): session closed for user root Oct 9 07:53:00.463621 sshd[1648]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:00.469172 systemd[1]: sshd@6-209.38.129.97:22-139.178.89.65:57460.service: Deactivated successfully. Oct 9 07:53:00.473298 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 07:53:00.473818 systemd[1]: session-7.scope: Consumed 6.105s CPU time, 189.1M memory peak, 0B memory swap peak. Oct 9 07:53:00.476220 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Oct 9 07:53:00.478586 systemd-logind[1446]: Removed session 7. Oct 9 07:53:01.006412 kubelet[2562]: E1009 07:53:01.006374 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:02.092234 update_engine[1448]: I20241009 07:53:02.091886 1448 update_attempter.cc:509] Updating boot flags... Oct 9 07:53:02.151158 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2646) Oct 9 07:53:02.281154 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2650) Oct 9 07:53:09.934274 kubelet[2562]: I1009 07:53:09.934233 2562 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 07:53:09.935341 containerd[1464]: time="2024-10-09T07:53:09.935279623Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 9 07:53:09.935825 kubelet[2562]: I1009 07:53:09.935611 2562 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 07:53:10.544615 kubelet[2562]: I1009 07:53:10.544537 2562 topology_manager.go:215] "Topology Admit Handler" podUID="c98eed43-da0f-4a06-a27d-6292eb3e0f9d" podNamespace="kube-system" podName="kube-proxy-vs92r" Oct 9 07:53:10.560268 systemd[1]: Created slice kubepods-besteffort-podc98eed43_da0f_4a06_a27d_6292eb3e0f9d.slice - libcontainer container kubepods-besteffort-podc98eed43_da0f_4a06_a27d_6292eb3e0f9d.slice. Oct 9 07:53:10.717667 kubelet[2562]: I1009 07:53:10.717458 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c98eed43-da0f-4a06-a27d-6292eb3e0f9d-kube-proxy\") pod \"kube-proxy-vs92r\" (UID: \"c98eed43-da0f-4a06-a27d-6292eb3e0f9d\") " pod="kube-system/kube-proxy-vs92r" Oct 9 07:53:10.717667 kubelet[2562]: I1009 07:53:10.717526 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c98eed43-da0f-4a06-a27d-6292eb3e0f9d-xtables-lock\") pod \"kube-proxy-vs92r\" (UID: \"c98eed43-da0f-4a06-a27d-6292eb3e0f9d\") " pod="kube-system/kube-proxy-vs92r" Oct 9 07:53:10.717667 kubelet[2562]: I1009 07:53:10.717556 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c98eed43-da0f-4a06-a27d-6292eb3e0f9d-lib-modules\") pod \"kube-proxy-vs92r\" (UID: \"c98eed43-da0f-4a06-a27d-6292eb3e0f9d\") " pod="kube-system/kube-proxy-vs92r" Oct 9 07:53:10.717667 kubelet[2562]: I1009 07:53:10.717645 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtr5q\" (UniqueName: \"kubernetes.io/projected/c98eed43-da0f-4a06-a27d-6292eb3e0f9d-kube-api-access-dtr5q\") pod 
\"kube-proxy-vs92r\" (UID: \"c98eed43-da0f-4a06-a27d-6292eb3e0f9d\") " pod="kube-system/kube-proxy-vs92r" Oct 9 07:53:10.870968 kubelet[2562]: E1009 07:53:10.870917 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:10.873730 containerd[1464]: time="2024-10-09T07:53:10.873676826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vs92r,Uid:c98eed43-da0f-4a06-a27d-6292eb3e0f9d,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:10.923680 containerd[1464]: time="2024-10-09T07:53:10.923267510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:10.923680 containerd[1464]: time="2024-10-09T07:53:10.923387192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:10.924191 containerd[1464]: time="2024-10-09T07:53:10.923431220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:10.924191 containerd[1464]: time="2024-10-09T07:53:10.923598174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:10.969244 systemd[1]: run-containerd-runc-k8s.io-ee2be4570badb5c918f5838d5adfe3d8d198af109863f1dcfa825927da82025c-runc.fwIfOI.mount: Deactivated successfully. Oct 9 07:53:10.978450 systemd[1]: Started cri-containerd-ee2be4570badb5c918f5838d5adfe3d8d198af109863f1dcfa825927da82025c.scope - libcontainer container ee2be4570badb5c918f5838d5adfe3d8d198af109863f1dcfa825927da82025c. 
Oct 9 07:53:11.027929 kubelet[2562]: I1009 07:53:11.027858 2562 topology_manager.go:215] "Topology Admit Handler" podUID="267f7508-ef49-4020-8598-e0a60dcad31b" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-bxrfs" Oct 9 07:53:11.048546 systemd[1]: Created slice kubepods-besteffort-pod267f7508_ef49_4020_8598_e0a60dcad31b.slice - libcontainer container kubepods-besteffort-pod267f7508_ef49_4020_8598_e0a60dcad31b.slice. Oct 9 07:53:11.054527 containerd[1464]: time="2024-10-09T07:53:11.054374451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vs92r,Uid:c98eed43-da0f-4a06-a27d-6292eb3e0f9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee2be4570badb5c918f5838d5adfe3d8d198af109863f1dcfa825927da82025c\"" Oct 9 07:53:11.056243 kubelet[2562]: E1009 07:53:11.056214 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:11.060962 containerd[1464]: time="2024-10-09T07:53:11.060907200Z" level=info msg="CreateContainer within sandbox \"ee2be4570badb5c918f5838d5adfe3d8d198af109863f1dcfa825927da82025c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 07:53:11.081615 containerd[1464]: time="2024-10-09T07:53:11.081515903Z" level=info msg="CreateContainer within sandbox \"ee2be4570badb5c918f5838d5adfe3d8d198af109863f1dcfa825927da82025c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"146b1b70d1273ebb27fe64f51f828193a4a649337080ff21d20b3377b2e6e070\"" Oct 9 07:53:11.082897 containerd[1464]: time="2024-10-09T07:53:11.082849441Z" level=info msg="StartContainer for \"146b1b70d1273ebb27fe64f51f828193a4a649337080ff21d20b3377b2e6e070\"" Oct 9 07:53:11.120310 kubelet[2562]: I1009 07:53:11.120275 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4htxx\" (UniqueName: 
\"kubernetes.io/projected/267f7508-ef49-4020-8598-e0a60dcad31b-kube-api-access-4htxx\") pod \"tigera-operator-77f994b5bb-bxrfs\" (UID: \"267f7508-ef49-4020-8598-e0a60dcad31b\") " pod="tigera-operator/tigera-operator-77f994b5bb-bxrfs" Oct 9 07:53:11.120682 kubelet[2562]: I1009 07:53:11.120498 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/267f7508-ef49-4020-8598-e0a60dcad31b-var-lib-calico\") pod \"tigera-operator-77f994b5bb-bxrfs\" (UID: \"267f7508-ef49-4020-8598-e0a60dcad31b\") " pod="tigera-operator/tigera-operator-77f994b5bb-bxrfs" Oct 9 07:53:11.126427 systemd[1]: Started cri-containerd-146b1b70d1273ebb27fe64f51f828193a4a649337080ff21d20b3377b2e6e070.scope - libcontainer container 146b1b70d1273ebb27fe64f51f828193a4a649337080ff21d20b3377b2e6e070. Oct 9 07:53:11.166196 containerd[1464]: time="2024-10-09T07:53:11.165475473Z" level=info msg="StartContainer for \"146b1b70d1273ebb27fe64f51f828193a4a649337080ff21d20b3377b2e6e070\" returns successfully" Oct 9 07:53:11.356821 containerd[1464]: time="2024-10-09T07:53:11.355879635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-bxrfs,Uid:267f7508-ef49-4020-8598-e0a60dcad31b,Namespace:tigera-operator,Attempt:0,}" Oct 9 07:53:11.407454 containerd[1464]: time="2024-10-09T07:53:11.407194676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:11.407454 containerd[1464]: time="2024-10-09T07:53:11.407307875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:11.408073 containerd[1464]: time="2024-10-09T07:53:11.407345514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:11.408729 containerd[1464]: time="2024-10-09T07:53:11.408274677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:11.452473 systemd[1]: Started cri-containerd-855b86f3cc04f94a6515939dc8c8836d7ead173123f9a259afb0ceea02d41e75.scope - libcontainer container 855b86f3cc04f94a6515939dc8c8836d7ead173123f9a259afb0ceea02d41e75. Oct 9 07:53:11.514915 containerd[1464]: time="2024-10-09T07:53:11.514768449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-bxrfs,Uid:267f7508-ef49-4020-8598-e0a60dcad31b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"855b86f3cc04f94a6515939dc8c8836d7ead173123f9a259afb0ceea02d41e75\"" Oct 9 07:53:11.525754 containerd[1464]: time="2024-10-09T07:53:11.525660316Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 07:53:12.036812 kubelet[2562]: E1009 07:53:12.036448 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:12.054391 kubelet[2562]: I1009 07:53:12.054315 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vs92r" podStartSLOduration=2.054285029 podStartE2EDuration="2.054285029s" podCreationTimestamp="2024-10-09 07:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:53:12.053958217 +0000 UTC m=+17.309495358" watchObservedRunningTime="2024-10-09 07:53:12.054285029 +0000 UTC m=+17.309822170" Oct 9 07:53:12.864926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468669772.mount: Deactivated successfully. 
Oct 9 07:53:13.561186 containerd[1464]: time="2024-10-09T07:53:13.560562083Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:13.564850 containerd[1464]: time="2024-10-09T07:53:13.564737777Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136541" Oct 9 07:53:13.568255 containerd[1464]: time="2024-10-09T07:53:13.568179414Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:13.582563 containerd[1464]: time="2024-10-09T07:53:13.582343577Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:13.584660 containerd[1464]: time="2024-10-09T07:53:13.583970736Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.058249046s" Oct 9 07:53:13.584660 containerd[1464]: time="2024-10-09T07:53:13.584030873Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 07:53:13.618099 containerd[1464]: time="2024-10-09T07:53:13.615953412Z" level=info msg="CreateContainer within sandbox \"855b86f3cc04f94a6515939dc8c8836d7ead173123f9a259afb0ceea02d41e75\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 07:53:13.707396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056910820.mount: Deactivated successfully. 
Oct 9 07:53:13.715699 containerd[1464]: time="2024-10-09T07:53:13.715568798Z" level=info msg="CreateContainer within sandbox \"855b86f3cc04f94a6515939dc8c8836d7ead173123f9a259afb0ceea02d41e75\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"62478b1745b13e0fa07740542db0dfb937467f72db1185057d2b4a4f8fd7b9d6\"" Oct 9 07:53:13.719094 containerd[1464]: time="2024-10-09T07:53:13.719022081Z" level=info msg="StartContainer for \"62478b1745b13e0fa07740542db0dfb937467f72db1185057d2b4a4f8fd7b9d6\"" Oct 9 07:53:13.786369 systemd[1]: Started cri-containerd-62478b1745b13e0fa07740542db0dfb937467f72db1185057d2b4a4f8fd7b9d6.scope - libcontainer container 62478b1745b13e0fa07740542db0dfb937467f72db1185057d2b4a4f8fd7b9d6. Oct 9 07:53:13.837732 containerd[1464]: time="2024-10-09T07:53:13.837607574Z" level=info msg="StartContainer for \"62478b1745b13e0fa07740542db0dfb937467f72db1185057d2b4a4f8fd7b9d6\" returns successfully" Oct 9 07:53:16.964159 kubelet[2562]: I1009 07:53:16.962860 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-bxrfs" podStartSLOduration=4.885123653 podStartE2EDuration="6.962836076s" podCreationTimestamp="2024-10-09 07:53:10 +0000 UTC" firstStartedPulling="2024-10-09 07:53:11.517137976 +0000 UTC m=+16.772675109" lastFinishedPulling="2024-10-09 07:53:13.594850385 +0000 UTC m=+18.850387532" observedRunningTime="2024-10-09 07:53:14.067725873 +0000 UTC m=+19.323263023" watchObservedRunningTime="2024-10-09 07:53:16.962836076 +0000 UTC m=+22.218373210" Oct 9 07:53:16.966425 kubelet[2562]: I1009 07:53:16.966363 2562 topology_manager.go:215] "Topology Admit Handler" podUID="743f113d-60fb-4da0-b2f2-4b7475eb5c03" podNamespace="calico-system" podName="calico-typha-6d755679d9-tnqqg" Oct 9 07:53:16.978442 systemd[1]: Created slice kubepods-besteffort-pod743f113d_60fb_4da0_b2f2_4b7475eb5c03.slice - libcontainer container 
kubepods-besteffort-pod743f113d_60fb_4da0_b2f2_4b7475eb5c03.slice. Oct 9 07:53:17.058242 kubelet[2562]: I1009 07:53:17.057933 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/743f113d-60fb-4da0-b2f2-4b7475eb5c03-typha-certs\") pod \"calico-typha-6d755679d9-tnqqg\" (UID: \"743f113d-60fb-4da0-b2f2-4b7475eb5c03\") " pod="calico-system/calico-typha-6d755679d9-tnqqg" Oct 9 07:53:17.058791 kubelet[2562]: I1009 07:53:17.058469 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lrr\" (UniqueName: \"kubernetes.io/projected/743f113d-60fb-4da0-b2f2-4b7475eb5c03-kube-api-access-b5lrr\") pod \"calico-typha-6d755679d9-tnqqg\" (UID: \"743f113d-60fb-4da0-b2f2-4b7475eb5c03\") " pod="calico-system/calico-typha-6d755679d9-tnqqg" Oct 9 07:53:17.059337 kubelet[2562]: I1009 07:53:17.059026 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/743f113d-60fb-4da0-b2f2-4b7475eb5c03-tigera-ca-bundle\") pod \"calico-typha-6d755679d9-tnqqg\" (UID: \"743f113d-60fb-4da0-b2f2-4b7475eb5c03\") " pod="calico-system/calico-typha-6d755679d9-tnqqg" Oct 9 07:53:17.099263 kubelet[2562]: I1009 07:53:17.099176 2562 topology_manager.go:215] "Topology Admit Handler" podUID="171ce80b-5505-40ec-a72c-7a88dda5f233" podNamespace="calico-system" podName="calico-node-79ssv" Oct 9 07:53:17.117530 systemd[1]: Created slice kubepods-besteffort-pod171ce80b_5505_40ec_a72c_7a88dda5f233.slice - libcontainer container kubepods-besteffort-pod171ce80b_5505_40ec_a72c_7a88dda5f233.slice. 
Oct 9 07:53:17.226216 kubelet[2562]: I1009 07:53:17.224909 2562 topology_manager.go:215] "Topology Admit Handler" podUID="fd1b0b62-8fcc-49bd-9c52-8a285174cd0c" podNamespace="calico-system" podName="csi-node-driver-s24v6" Oct 9 07:53:17.227214 kubelet[2562]: E1009 07:53:17.226702 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s24v6" podUID="fd1b0b62-8fcc-49bd-9c52-8a285174cd0c" Oct 9 07:53:17.261959 kubelet[2562]: I1009 07:53:17.261434 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/171ce80b-5505-40ec-a72c-7a88dda5f233-lib-modules\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.261959 kubelet[2562]: I1009 07:53:17.261502 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/171ce80b-5505-40ec-a72c-7a88dda5f233-var-lib-calico\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.261959 kubelet[2562]: I1009 07:53:17.261532 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm9pm\" (UniqueName: \"kubernetes.io/projected/171ce80b-5505-40ec-a72c-7a88dda5f233-kube-api-access-wm9pm\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.261959 kubelet[2562]: I1009 07:53:17.261560 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/171ce80b-5505-40ec-a72c-7a88dda5f233-policysync\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.261959 kubelet[2562]: I1009 07:53:17.261589 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/171ce80b-5505-40ec-a72c-7a88dda5f233-flexvol-driver-host\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.262416 kubelet[2562]: I1009 07:53:17.261620 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/171ce80b-5505-40ec-a72c-7a88dda5f233-cni-bin-dir\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.262416 kubelet[2562]: I1009 07:53:17.261651 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/171ce80b-5505-40ec-a72c-7a88dda5f233-var-run-calico\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.262416 kubelet[2562]: I1009 07:53:17.261688 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/171ce80b-5505-40ec-a72c-7a88dda5f233-xtables-lock\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.262416 kubelet[2562]: I1009 07:53:17.261715 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/171ce80b-5505-40ec-a72c-7a88dda5f233-tigera-ca-bundle\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.262416 kubelet[2562]: I1009 07:53:17.261742 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/171ce80b-5505-40ec-a72c-7a88dda5f233-cni-net-dir\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.262666 kubelet[2562]: I1009 07:53:17.261773 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/171ce80b-5505-40ec-a72c-7a88dda5f233-cni-log-dir\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.262666 kubelet[2562]: I1009 07:53:17.261803 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/171ce80b-5505-40ec-a72c-7a88dda5f233-node-certs\") pod \"calico-node-79ssv\" (UID: \"171ce80b-5505-40ec-a72c-7a88dda5f233\") " pod="calico-system/calico-node-79ssv" Oct 9 07:53:17.287152 kubelet[2562]: E1009 07:53:17.285118 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:17.287967 containerd[1464]: time="2024-10-09T07:53:17.287796395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d755679d9-tnqqg,Uid:743f113d-60fb-4da0-b2f2-4b7475eb5c03,Namespace:calico-system,Attempt:0,}" Oct 9 07:53:17.351655 containerd[1464]: time="2024-10-09T07:53:17.351292517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:17.351655 containerd[1464]: time="2024-10-09T07:53:17.351404479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:17.351655 containerd[1464]: time="2024-10-09T07:53:17.351430960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:17.351655 containerd[1464]: time="2024-10-09T07:53:17.351579336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:17.362997 kubelet[2562]: I1009 07:53:17.362935 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fd1b0b62-8fcc-49bd-9c52-8a285174cd0c-varrun\") pod \"csi-node-driver-s24v6\" (UID: \"fd1b0b62-8fcc-49bd-9c52-8a285174cd0c\") " pod="calico-system/csi-node-driver-s24v6" Oct 9 07:53:17.365524 kubelet[2562]: I1009 07:53:17.365478 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd1b0b62-8fcc-49bd-9c52-8a285174cd0c-registration-dir\") pod \"csi-node-driver-s24v6\" (UID: \"fd1b0b62-8fcc-49bd-9c52-8a285174cd0c\") " pod="calico-system/csi-node-driver-s24v6" Oct 9 07:53:17.367282 kubelet[2562]: I1009 07:53:17.367232 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd1b0b62-8fcc-49bd-9c52-8a285174cd0c-socket-dir\") pod \"csi-node-driver-s24v6\" (UID: \"fd1b0b62-8fcc-49bd-9c52-8a285174cd0c\") " pod="calico-system/csi-node-driver-s24v6" Oct 9 07:53:17.369812 kubelet[2562]: I1009 07:53:17.369766 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd1b0b62-8fcc-49bd-9c52-8a285174cd0c-kubelet-dir\") pod \"csi-node-driver-s24v6\" (UID: \"fd1b0b62-8fcc-49bd-9c52-8a285174cd0c\") " pod="calico-system/csi-node-driver-s24v6" Oct 9 07:53:17.372078 kubelet[2562]: I1009 07:53:17.370314 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjp44\" (UniqueName: \"kubernetes.io/projected/fd1b0b62-8fcc-49bd-9c52-8a285174cd0c-kube-api-access-rjp44\") pod \"csi-node-driver-s24v6\" (UID: \"fd1b0b62-8fcc-49bd-9c52-8a285174cd0c\") " pod="calico-system/csi-node-driver-s24v6" Oct 9 07:53:17.374175 kubelet[2562]: E1009 07:53:17.374143 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.374669 kubelet[2562]: W1009 07:53:17.374372 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.374868 kubelet[2562]: E1009 07:53:17.374847 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.383302 kubelet[2562]: E1009 07:53:17.380615 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.383302 kubelet[2562]: W1009 07:53:17.383180 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.383302 kubelet[2562]: E1009 07:53:17.383238 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.421430 systemd[1]: Started cri-containerd-ef34f8954485f625e3617698f90b795473555f5803c73667666d3b7de314c631.scope - libcontainer container ef34f8954485f625e3617698f90b795473555f5803c73667666d3b7de314c631. Oct 9 07:53:17.427261 kubelet[2562]: E1009 07:53:17.426656 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.427261 kubelet[2562]: W1009 07:53:17.426683 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.427261 kubelet[2562]: E1009 07:53:17.426709 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.472094 kubelet[2562]: E1009 07:53:17.471814 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.472094 kubelet[2562]: W1009 07:53:17.471848 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.472094 kubelet[2562]: E1009 07:53:17.471872 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.472439 kubelet[2562]: E1009 07:53:17.472175 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.472439 kubelet[2562]: W1009 07:53:17.472185 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.472800 kubelet[2562]: E1009 07:53:17.472198 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.474163 kubelet[2562]: E1009 07:53:17.473011 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.474163 kubelet[2562]: W1009 07:53:17.473030 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.474163 kubelet[2562]: E1009 07:53:17.474120 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.474950 kubelet[2562]: E1009 07:53:17.474694 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.474950 kubelet[2562]: W1009 07:53:17.474721 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.474950 kubelet[2562]: E1009 07:53:17.474767 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.475525 kubelet[2562]: E1009 07:53:17.475335 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.475525 kubelet[2562]: W1009 07:53:17.475371 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.475525 kubelet[2562]: E1009 07:53:17.475415 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.475839 kubelet[2562]: E1009 07:53:17.475792 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.475839 kubelet[2562]: W1009 07:53:17.475803 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.476027 kubelet[2562]: E1009 07:53:17.475997 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.476557 kubelet[2562]: E1009 07:53:17.476369 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.476557 kubelet[2562]: W1009 07:53:17.476422 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.477205 kubelet[2562]: E1009 07:53:17.477118 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.481553 kubelet[2562]: E1009 07:53:17.481505 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.481553 kubelet[2562]: W1009 07:53:17.481537 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.481553 kubelet[2562]: E1009 07:53:17.481571 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.482403 kubelet[2562]: E1009 07:53:17.482278 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.482403 kubelet[2562]: W1009 07:53:17.482293 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.482403 kubelet[2562]: E1009 07:53:17.482384 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.484377 kubelet[2562]: E1009 07:53:17.484310 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.484377 kubelet[2562]: W1009 07:53:17.484326 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.484493 kubelet[2562]: E1009 07:53:17.484448 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.484620 kubelet[2562]: E1009 07:53:17.484605 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.484620 kubelet[2562]: W1009 07:53:17.484616 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.484951 kubelet[2562]: E1009 07:53:17.484731 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.485056 kubelet[2562]: E1009 07:53:17.484982 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.485056 kubelet[2562]: W1009 07:53:17.484995 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.485301 kubelet[2562]: E1009 07:53:17.485265 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.485301 kubelet[2562]: W1009 07:53:17.485276 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.485523 kubelet[2562]: E1009 07:53:17.485433 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.485523 kubelet[2562]: E1009 07:53:17.485486 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.486257 kubelet[2562]: E1009 07:53:17.486229 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.486257 kubelet[2562]: W1009 07:53:17.486247 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.486587 kubelet[2562]: E1009 07:53:17.486363 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.486587 kubelet[2562]: E1009 07:53:17.486506 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.486587 kubelet[2562]: W1009 07:53:17.486512 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.486920 kubelet[2562]: E1009 07:53:17.486762 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.486999 kubelet[2562]: E1009 07:53:17.486944 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.486999 kubelet[2562]: W1009 07:53:17.486955 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.487310 kubelet[2562]: E1009 07:53:17.487060 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.487310 kubelet[2562]: E1009 07:53:17.487254 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.487310 kubelet[2562]: W1009 07:53:17.487264 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.487827 kubelet[2562]: E1009 07:53:17.487620 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.487827 kubelet[2562]: E1009 07:53:17.487632 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.487827 kubelet[2562]: W1009 07:53:17.487671 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.487827 kubelet[2562]: E1009 07:53:17.487689 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.489090 kubelet[2562]: E1009 07:53:17.488156 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.489344 kubelet[2562]: W1009 07:53:17.489208 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.489344 kubelet[2562]: E1009 07:53:17.489247 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.490328 kubelet[2562]: E1009 07:53:17.490180 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.490328 kubelet[2562]: W1009 07:53:17.490199 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.490328 kubelet[2562]: E1009 07:53:17.490241 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.490715 kubelet[2562]: E1009 07:53:17.490589 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.490715 kubelet[2562]: W1009 07:53:17.490605 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.490715 kubelet[2562]: E1009 07:53:17.490639 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.491605 kubelet[2562]: E1009 07:53:17.491116 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.491605 kubelet[2562]: W1009 07:53:17.491133 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.491605 kubelet[2562]: E1009 07:53:17.491164 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.492027 kubelet[2562]: E1009 07:53:17.491861 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.492027 kubelet[2562]: W1009 07:53:17.491878 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.492027 kubelet[2562]: E1009 07:53:17.491913 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.493160 kubelet[2562]: E1009 07:53:17.493139 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.493282 kubelet[2562]: W1009 07:53:17.493266 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.493380 kubelet[2562]: E1009 07:53:17.493365 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.493783 kubelet[2562]: E1009 07:53:17.493758 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.493783 kubelet[2562]: W1009 07:53:17.493777 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.493911 kubelet[2562]: E1009 07:53:17.493793 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:17.512256 kubelet[2562]: E1009 07:53:17.512194 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:17.512256 kubelet[2562]: W1009 07:53:17.512240 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:17.512256 kubelet[2562]: E1009 07:53:17.512269 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:17.643677 containerd[1464]: time="2024-10-09T07:53:17.643544466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d755679d9-tnqqg,Uid:743f113d-60fb-4da0-b2f2-4b7475eb5c03,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef34f8954485f625e3617698f90b795473555f5803c73667666d3b7de314c631\"" Oct 9 07:53:17.644724 kubelet[2562]: E1009 07:53:17.644679 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:17.647854 containerd[1464]: time="2024-10-09T07:53:17.647629396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 07:53:17.723624 kubelet[2562]: E1009 07:53:17.723569 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:17.725997 containerd[1464]: time="2024-10-09T07:53:17.725422346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-79ssv,Uid:171ce80b-5505-40ec-a72c-7a88dda5f233,Namespace:calico-system,Attempt:0,}" Oct 9 07:53:17.769576 containerd[1464]: time="2024-10-09T07:53:17.769269115Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:17.769576 containerd[1464]: time="2024-10-09T07:53:17.769345068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:17.769576 containerd[1464]: time="2024-10-09T07:53:17.769361112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:17.770613 containerd[1464]: time="2024-10-09T07:53:17.769480301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:17.806973 systemd[1]: Started cri-containerd-aaddc3f538728adeca9f42716c5a9038d8b17e7b07a57050ae3eaae55f364f86.scope - libcontainer container aaddc3f538728adeca9f42716c5a9038d8b17e7b07a57050ae3eaae55f364f86. Oct 9 07:53:17.868648 containerd[1464]: time="2024-10-09T07:53:17.868416122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-79ssv,Uid:171ce80b-5505-40ec-a72c-7a88dda5f233,Namespace:calico-system,Attempt:0,} returns sandbox id \"aaddc3f538728adeca9f42716c5a9038d8b17e7b07a57050ae3eaae55f364f86\"" Oct 9 07:53:17.870429 kubelet[2562]: E1009 07:53:17.870389 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:18.939596 kubelet[2562]: E1009 07:53:18.939471 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s24v6" podUID="fd1b0b62-8fcc-49bd-9c52-8a285174cd0c" Oct 9 07:53:20.128171 containerd[1464]: time="2024-10-09T07:53:20.127672697Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:20.132122 containerd[1464]: time="2024-10-09T07:53:20.132028194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 07:53:20.137886 containerd[1464]: time="2024-10-09T07:53:20.137811457Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:20.146505 containerd[1464]: time="2024-10-09T07:53:20.146350970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:20.148805 containerd[1464]: time="2024-10-09T07:53:20.148593076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.500874884s" Oct 9 07:53:20.148805 containerd[1464]: time="2024-10-09T07:53:20.148675774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 07:53:20.152940 containerd[1464]: time="2024-10-09T07:53:20.152866303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 07:53:20.183530 containerd[1464]: time="2024-10-09T07:53:20.183296111Z" level=info msg="CreateContainer within sandbox \"ef34f8954485f625e3617698f90b795473555f5803c73667666d3b7de314c631\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 07:53:20.213406 
containerd[1464]: time="2024-10-09T07:53:20.213293111Z" level=info msg="CreateContainer within sandbox \"ef34f8954485f625e3617698f90b795473555f5803c73667666d3b7de314c631\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"db6383529c041d9fc9a73a0b0fadf93c373bc1629952a20cdc80c2ac59eac463\"" Oct 9 07:53:20.216438 containerd[1464]: time="2024-10-09T07:53:20.214670337Z" level=info msg="StartContainer for \"db6383529c041d9fc9a73a0b0fadf93c373bc1629952a20cdc80c2ac59eac463\"" Oct 9 07:53:20.296345 systemd[1]: Started cri-containerd-db6383529c041d9fc9a73a0b0fadf93c373bc1629952a20cdc80c2ac59eac463.scope - libcontainer container db6383529c041d9fc9a73a0b0fadf93c373bc1629952a20cdc80c2ac59eac463. Oct 9 07:53:20.405116 containerd[1464]: time="2024-10-09T07:53:20.403428771Z" level=info msg="StartContainer for \"db6383529c041d9fc9a73a0b0fadf93c373bc1629952a20cdc80c2ac59eac463\" returns successfully" Oct 9 07:53:20.937781 kubelet[2562]: E1009 07:53:20.937393 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s24v6" podUID="fd1b0b62-8fcc-49bd-9c52-8a285174cd0c" Oct 9 07:53:21.079853 kubelet[2562]: E1009 07:53:21.079764 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:21.095886 kubelet[2562]: I1009 07:53:21.095518 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d755679d9-tnqqg" podStartSLOduration=2.589535875 podStartE2EDuration="5.095494538s" podCreationTimestamp="2024-10-09 07:53:16 +0000 UTC" firstStartedPulling="2024-10-09 07:53:17.645727683 +0000 UTC m=+22.901264803" lastFinishedPulling="2024-10-09 07:53:20.151686324 +0000 UTC 
m=+25.407223466" observedRunningTime="2024-10-09 07:53:21.094872398 +0000 UTC m=+26.350409539" watchObservedRunningTime="2024-10-09 07:53:21.095494538 +0000 UTC m=+26.351031680" Oct 9 07:53:21.134476 kubelet[2562]: E1009 07:53:21.134429 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:21.134476 kubelet[2562]: W1009 07:53:21.134467 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:21.134760 kubelet[2562]: E1009 07:53:21.134494 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:21.135241 kubelet[2562]: E1009 07:53:21.135209 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:21.135241 kubelet[2562]: W1009 07:53:21.135231 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:21.135373 kubelet[2562]: E1009 07:53:21.135248 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:21.135936 kubelet[2562]: E1009 07:53:21.135839 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:21.135936 kubelet[2562]: W1009 07:53:21.135855 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:21.135936 kubelet[2562]: E1009 07:53:21.135869 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:21.136415 kubelet[2562]: E1009 07:53:21.136268 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:21.136415 kubelet[2562]: W1009 07:53:21.136280 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:21.136415 kubelet[2562]: E1009 07:53:21.136307 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:21.136772 kubelet[2562]: E1009 07:53:21.136655 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:21.136772 kubelet[2562]: W1009 07:53:21.136770 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:21.136861 kubelet[2562]: E1009 07:53:21.136784 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:21.137101 kubelet[2562]: E1009 07:53:21.137012 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:21.137101 kubelet[2562]: W1009 07:53:21.137030 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:21.137181 kubelet[2562]: E1009 07:53:21.137081 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:21.137639 kubelet[2562]: E1009 07:53:21.137620 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:21.137639 kubelet[2562]: W1009 07:53:21.137634 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:21.137737 kubelet[2562]: E1009 07:53:21.137646 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:21.137949 kubelet[2562]: E1009 07:53:21.137937 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:21.137949 kubelet[2562]: W1009 07:53:21.137948 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:21.138011 kubelet[2562]: E1009 07:53:21.137959 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:21.138368 kubelet[2562]: E1009 07:53:21.138352 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:21.138368 kubelet[2562]: W1009 07:53:21.138364 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:21.138847 kubelet[2562]: E1009 07:53:21.138374 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 07:53:21.138847 kubelet[2562]: E1009 07:53:21.138786 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 07:53:21.138847 kubelet[2562]: W1009 07:53:21.138796 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 07:53:21.138847 kubelet[2562]: E1009 07:53:21.138809 2562 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 07:53:21.485456 containerd[1464]: time="2024-10-09T07:53:21.485280123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:21.489160 containerd[1464]: time="2024-10-09T07:53:21.489093152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 07:53:21.490630 containerd[1464]: time="2024-10-09T07:53:21.490587862Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:21.496602 containerd[1464]: time="2024-10-09T07:53:21.496546168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:21.497309 containerd[1464]: time="2024-10-09T07:53:21.497274524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.344357103s" Oct 9 07:53:21.497381 containerd[1464]: time="2024-10-09T07:53:21.497312475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 07:53:21.503403 containerd[1464]: time="2024-10-09T07:53:21.503348247Z" level=info msg="CreateContainer within sandbox \"aaddc3f538728adeca9f42716c5a9038d8b17e7b07a57050ae3eaae55f364f86\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 07:53:21.527944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1533788184.mount: Deactivated successfully. Oct 9 07:53:21.541085 containerd[1464]: time="2024-10-09T07:53:21.540726026Z" level=info msg="CreateContainer within sandbox \"aaddc3f538728adeca9f42716c5a9038d8b17e7b07a57050ae3eaae55f364f86\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a60baaec7fe7a593739b7820dbd8e0ac3e6a8afa0789318a9d2a4e84cf575e43\"" Oct 9 07:53:21.545160 containerd[1464]: time="2024-10-09T07:53:21.545095455Z" level=info msg="StartContainer for \"a60baaec7fe7a593739b7820dbd8e0ac3e6a8afa0789318a9d2a4e84cf575e43\"" Oct 9 07:53:21.606323 systemd[1]: Started cri-containerd-a60baaec7fe7a593739b7820dbd8e0ac3e6a8afa0789318a9d2a4e84cf575e43.scope - libcontainer container a60baaec7fe7a593739b7820dbd8e0ac3e6a8afa0789318a9d2a4e84cf575e43. Oct 9 07:53:21.674077 containerd[1464]: time="2024-10-09T07:53:21.672838845Z" level=info msg="StartContainer for \"a60baaec7fe7a593739b7820dbd8e0ac3e6a8afa0789318a9d2a4e84cf575e43\" returns successfully" Oct 9 07:53:21.702914 systemd[1]: cri-containerd-a60baaec7fe7a593739b7820dbd8e0ac3e6a8afa0789318a9d2a4e84cf575e43.scope: Deactivated successfully. 
Oct 9 07:53:21.804740 containerd[1464]: time="2024-10-09T07:53:21.784454253Z" level=info msg="shim disconnected" id=a60baaec7fe7a593739b7820dbd8e0ac3e6a8afa0789318a9d2a4e84cf575e43 namespace=k8s.io Oct 9 07:53:21.804740 containerd[1464]: time="2024-10-09T07:53:21.803658608Z" level=warning msg="cleaning up after shim disconnected" id=a60baaec7fe7a593739b7820dbd8e0ac3e6a8afa0789318a9d2a4e84cf575e43 namespace=k8s.io Oct 9 07:53:21.804740 containerd[1464]: time="2024-10-09T07:53:21.803691963Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:53:22.083729 kubelet[2562]: I1009 07:53:22.082846 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:53:22.083729 kubelet[2562]: E1009 07:53:22.083602 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:22.085530 kubelet[2562]: E1009 07:53:22.085441 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:22.088301 containerd[1464]: time="2024-10-09T07:53:22.088258662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 07:53:22.164091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a60baaec7fe7a593739b7820dbd8e0ac3e6a8afa0789318a9d2a4e84cf575e43-rootfs.mount: Deactivated successfully. 
Oct 9 07:53:22.938889 kubelet[2562]: E1009 07:53:22.937457 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s24v6" podUID="fd1b0b62-8fcc-49bd-9c52-8a285174cd0c" Oct 9 07:53:24.937678 kubelet[2562]: E1009 07:53:24.937606 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s24v6" podUID="fd1b0b62-8fcc-49bd-9c52-8a285174cd0c" Oct 9 07:53:26.327631 containerd[1464]: time="2024-10-09T07:53:26.327532672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:26.329860 containerd[1464]: time="2024-10-09T07:53:26.329746586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 07:53:26.332464 containerd[1464]: time="2024-10-09T07:53:26.332293154Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:26.339241 containerd[1464]: time="2024-10-09T07:53:26.339171632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:26.342488 containerd[1464]: time="2024-10-09T07:53:26.342266015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.253957661s" Oct 9 07:53:26.342488 containerd[1464]: time="2024-10-09T07:53:26.342339491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 07:53:26.346704 containerd[1464]: time="2024-10-09T07:53:26.346527502Z" level=info msg="CreateContainer within sandbox \"aaddc3f538728adeca9f42716c5a9038d8b17e7b07a57050ae3eaae55f364f86\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 07:53:26.468699 containerd[1464]: time="2024-10-09T07:53:26.468026751Z" level=info msg="CreateContainer within sandbox \"aaddc3f538728adeca9f42716c5a9038d8b17e7b07a57050ae3eaae55f364f86\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7ade630cf88cbd17f071d008b6b5bc35dcad3afbe37dd294fee904505023f26c\"" Oct 9 07:53:26.477542 containerd[1464]: time="2024-10-09T07:53:26.477472718Z" level=info msg="StartContainer for \"7ade630cf88cbd17f071d008b6b5bc35dcad3afbe37dd294fee904505023f26c\"" Oct 9 07:53:26.668388 systemd[1]: Started cri-containerd-7ade630cf88cbd17f071d008b6b5bc35dcad3afbe37dd294fee904505023f26c.scope - libcontainer container 7ade630cf88cbd17f071d008b6b5bc35dcad3afbe37dd294fee904505023f26c. 
Oct 9 07:53:26.774280 containerd[1464]: time="2024-10-09T07:53:26.774215106Z" level=info msg="StartContainer for \"7ade630cf88cbd17f071d008b6b5bc35dcad3afbe37dd294fee904505023f26c\" returns successfully" Oct 9 07:53:26.939081 kubelet[2562]: E1009 07:53:26.937286 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s24v6" podUID="fd1b0b62-8fcc-49bd-9c52-8a285174cd0c" Oct 9 07:53:27.117089 kubelet[2562]: E1009 07:53:27.114872 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:27.626870 systemd[1]: cri-containerd-7ade630cf88cbd17f071d008b6b5bc35dcad3afbe37dd294fee904505023f26c.scope: Deactivated successfully. Oct 9 07:53:27.660712 kubelet[2562]: I1009 07:53:27.659777 2562 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 07:53:27.672139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ade630cf88cbd17f071d008b6b5bc35dcad3afbe37dd294fee904505023f26c-rootfs.mount: Deactivated successfully. 
Oct 9 07:53:27.677410 containerd[1464]: time="2024-10-09T07:53:27.677078512Z" level=info msg="shim disconnected" id=7ade630cf88cbd17f071d008b6b5bc35dcad3afbe37dd294fee904505023f26c namespace=k8s.io Oct 9 07:53:27.677410 containerd[1464]: time="2024-10-09T07:53:27.677162068Z" level=warning msg="cleaning up after shim disconnected" id=7ade630cf88cbd17f071d008b6b5bc35dcad3afbe37dd294fee904505023f26c namespace=k8s.io Oct 9 07:53:27.677410 containerd[1464]: time="2024-10-09T07:53:27.677177788Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 07:53:27.709512 kubelet[2562]: I1009 07:53:27.709439 2562 topology_manager.go:215] "Topology Admit Handler" podUID="15cd672c-8913-4d18-8c1d-961f59e5572e" podNamespace="calico-system" podName="calico-kube-controllers-58bc88798c-plhm7" Oct 9 07:53:27.716164 kubelet[2562]: I1009 07:53:27.713097 2562 topology_manager.go:215] "Topology Admit Handler" podUID="b5519e13-c682-44c5-8276-45bab21b54a1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2qx8t" Oct 9 07:53:27.725248 kubelet[2562]: I1009 07:53:27.721625 2562 topology_manager.go:215] "Topology Admit Handler" podUID="b16fd7a4-7278-47f9-ac26-d1aa8683b5a6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lcc6p" Oct 9 07:53:27.736490 systemd[1]: Created slice kubepods-besteffort-pod15cd672c_8913_4d18_8c1d_961f59e5572e.slice - libcontainer container kubepods-besteffort-pod15cd672c_8913_4d18_8c1d_961f59e5572e.slice. Oct 9 07:53:27.763837 systemd[1]: Created slice kubepods-burstable-podb5519e13_c682_44c5_8276_45bab21b54a1.slice - libcontainer container kubepods-burstable-podb5519e13_c682_44c5_8276_45bab21b54a1.slice. 
Oct 9 07:53:27.772144 kubelet[2562]: I1009 07:53:27.772101 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b16fd7a4-7278-47f9-ac26-d1aa8683b5a6-config-volume\") pod \"coredns-7db6d8ff4d-lcc6p\" (UID: \"b16fd7a4-7278-47f9-ac26-d1aa8683b5a6\") " pod="kube-system/coredns-7db6d8ff4d-lcc6p" Oct 9 07:53:27.772312 kubelet[2562]: I1009 07:53:27.772176 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwxvs\" (UniqueName: \"kubernetes.io/projected/15cd672c-8913-4d18-8c1d-961f59e5572e-kube-api-access-vwxvs\") pod \"calico-kube-controllers-58bc88798c-plhm7\" (UID: \"15cd672c-8913-4d18-8c1d-961f59e5572e\") " pod="calico-system/calico-kube-controllers-58bc88798c-plhm7" Oct 9 07:53:27.772312 kubelet[2562]: I1009 07:53:27.772227 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n454n\" (UniqueName: \"kubernetes.io/projected/b5519e13-c682-44c5-8276-45bab21b54a1-kube-api-access-n454n\") pod \"coredns-7db6d8ff4d-2qx8t\" (UID: \"b5519e13-c682-44c5-8276-45bab21b54a1\") " pod="kube-system/coredns-7db6d8ff4d-2qx8t" Oct 9 07:53:27.772312 kubelet[2562]: I1009 07:53:27.772262 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmrsw\" (UniqueName: \"kubernetes.io/projected/b16fd7a4-7278-47f9-ac26-d1aa8683b5a6-kube-api-access-pmrsw\") pod \"coredns-7db6d8ff4d-lcc6p\" (UID: \"b16fd7a4-7278-47f9-ac26-d1aa8683b5a6\") " pod="kube-system/coredns-7db6d8ff4d-lcc6p" Oct 9 07:53:27.772593 kubelet[2562]: I1009 07:53:27.772317 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5519e13-c682-44c5-8276-45bab21b54a1-config-volume\") pod \"coredns-7db6d8ff4d-2qx8t\" (UID: 
\"b5519e13-c682-44c5-8276-45bab21b54a1\") " pod="kube-system/coredns-7db6d8ff4d-2qx8t" Oct 9 07:53:27.772593 kubelet[2562]: I1009 07:53:27.772363 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15cd672c-8913-4d18-8c1d-961f59e5572e-tigera-ca-bundle\") pod \"calico-kube-controllers-58bc88798c-plhm7\" (UID: \"15cd672c-8913-4d18-8c1d-961f59e5572e\") " pod="calico-system/calico-kube-controllers-58bc88798c-plhm7" Oct 9 07:53:27.778748 systemd[1]: Created slice kubepods-burstable-podb16fd7a4_7278_47f9_ac26_d1aa8683b5a6.slice - libcontainer container kubepods-burstable-podb16fd7a4_7278_47f9_ac26_d1aa8683b5a6.slice. Oct 9 07:53:28.053428 containerd[1464]: time="2024-10-09T07:53:28.053243729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bc88798c-plhm7,Uid:15cd672c-8913-4d18-8c1d-961f59e5572e,Namespace:calico-system,Attempt:0,}" Oct 9 07:53:28.076497 kubelet[2562]: E1009 07:53:28.074472 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:28.080097 containerd[1464]: time="2024-10-09T07:53:28.078332407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2qx8t,Uid:b5519e13-c682-44c5-8276-45bab21b54a1,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:28.086577 kubelet[2562]: E1009 07:53:28.086510 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:28.091611 containerd[1464]: time="2024-10-09T07:53:28.090913811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lcc6p,Uid:b16fd7a4-7278-47f9-ac26-d1aa8683b5a6,Namespace:kube-system,Attempt:0,}" Oct 9 07:53:28.135460 kubelet[2562]: E1009 
07:53:28.134011 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:28.159891 containerd[1464]: time="2024-10-09T07:53:28.158370054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 07:53:28.445933 containerd[1464]: time="2024-10-09T07:53:28.445848038Z" level=error msg="Failed to destroy network for sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.449316 containerd[1464]: time="2024-10-09T07:53:28.449233922Z" level=error msg="Failed to destroy network for sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.454575 containerd[1464]: time="2024-10-09T07:53:28.454490557Z" level=error msg="encountered an error cleaning up failed sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.454738 containerd[1464]: time="2024-10-09T07:53:28.454597068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2qx8t,Uid:b5519e13-c682-44c5-8276-45bab21b54a1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.455286 containerd[1464]: time="2024-10-09T07:53:28.455099185Z" level=error msg="encountered an error cleaning up failed sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.455286 containerd[1464]: time="2024-10-09T07:53:28.455182926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lcc6p,Uid:b16fd7a4-7278-47f9-ac26-d1aa8683b5a6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.460542 containerd[1464]: time="2024-10-09T07:53:28.460277651Z" level=error msg="Failed to destroy network for sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.461138 containerd[1464]: time="2024-10-09T07:53:28.461081230Z" level=error msg="encountered an error cleaning up failed sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.461433 containerd[1464]: 
time="2024-10-09T07:53:28.461318406Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bc88798c-plhm7,Uid:15cd672c-8913-4d18-8c1d-961f59e5572e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.462187 kubelet[2562]: E1009 07:53:28.461668 2562 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.462187 kubelet[2562]: E1009 07:53:28.461726 2562 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.462187 kubelet[2562]: E1009 07:53:28.461750 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2qx8t" Oct 9 07:53:28.462187 kubelet[2562]: E1009 07:53:28.461775 2562 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2qx8t" Oct 9 07:53:28.462469 kubelet[2562]: E1009 07:53:28.461775 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lcc6p" Oct 9 07:53:28.462469 kubelet[2562]: E1009 07:53:28.461805 2562 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lcc6p" Oct 9 07:53:28.462469 kubelet[2562]: E1009 07:53:28.461823 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2qx8t_kube-system(b5519e13-c682-44c5-8276-45bab21b54a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2qx8t_kube-system(b5519e13-c682-44c5-8276-45bab21b54a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2qx8t" podUID="b5519e13-c682-44c5-8276-45bab21b54a1" Oct 9 07:53:28.462658 kubelet[2562]: E1009 07:53:28.461859 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lcc6p_kube-system(b16fd7a4-7278-47f9-ac26-d1aa8683b5a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lcc6p_kube-system(b16fd7a4-7278-47f9-ac26-d1aa8683b5a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lcc6p" podUID="b16fd7a4-7278-47f9-ac26-d1aa8683b5a6" Oct 9 07:53:28.462658 kubelet[2562]: E1009 07:53:28.461678 2562 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:28.462658 kubelet[2562]: E1009 07:53:28.461918 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58bc88798c-plhm7" Oct 9 
07:53:28.462839 kubelet[2562]: E1009 07:53:28.461941 2562 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58bc88798c-plhm7" Oct 9 07:53:28.462839 kubelet[2562]: E1009 07:53:28.461976 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58bc88798c-plhm7_calico-system(15cd672c-8913-4d18-8c1d-961f59e5572e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58bc88798c-plhm7_calico-system(15cd672c-8913-4d18-8c1d-961f59e5572e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58bc88798c-plhm7" podUID="15cd672c-8913-4d18-8c1d-961f59e5572e" Oct 9 07:53:28.672926 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c-shm.mount: Deactivated successfully. Oct 9 07:53:28.948584 systemd[1]: Created slice kubepods-besteffort-podfd1b0b62_8fcc_49bd_9c52_8a285174cd0c.slice - libcontainer container kubepods-besteffort-podfd1b0b62_8fcc_49bd_9c52_8a285174cd0c.slice. 
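Every sandbox failure in the stretch of log above reduces to one root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file written by the calico/node container once it is up. A minimal sketch (Python; `parse_cni_error` is a hypothetical helper, the message text is copied verbatim from the entries above) showing that the repeated errors all carry the same plugin, operation, and missing path:

```python
import re

# One of the repeated containerd/kubelet error payloads from the log above.
sample = ('plugin type="calico" failed (add): stat /var/lib/calico/nodename: '
          'no such file or directory: check that the calico/node container '
          'is running and has mounted /var/lib/calico/')

def parse_cni_error(msg):
    """Extract the plugin type, CNI operation, and missing path, if present."""
    m = re.search(r'plugin type="(?P<plugin>[^"]+)" failed \((?P<op>add|delete)\): '
                  r'stat (?P<path>\S+?): no such file or directory', msg)
    return m.groupdict() if m else None

print(parse_cni_error(sample))
# All of the (add) and (delete) failures above point at the same path,
# so fixing calico-node (which creates the nodename file) clears them all.
```

Consistent with this, the errors stop once the calico-node container starts later in the log.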
Oct 9 07:53:28.952783 containerd[1464]: time="2024-10-09T07:53:28.952727407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s24v6,Uid:fd1b0b62-8fcc-49bd-9c52-8a285174cd0c,Namespace:calico-system,Attempt:0,}" Oct 9 07:53:29.070093 containerd[1464]: time="2024-10-09T07:53:29.069924780Z" level=error msg="Failed to destroy network for sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:29.072469 containerd[1464]: time="2024-10-09T07:53:29.072363569Z" level=error msg="encountered an error cleaning up failed sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:29.072469 containerd[1464]: time="2024-10-09T07:53:29.072462747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s24v6,Uid:fd1b0b62-8fcc-49bd-9c52-8a285174cd0c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:29.073809 kubelet[2562]: E1009 07:53:29.073760 2562 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:29.073934 kubelet[2562]: E1009 07:53:29.073832 2562 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s24v6" Oct 9 07:53:29.073934 kubelet[2562]: E1009 07:53:29.073861 2562 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s24v6" Oct 9 07:53:29.073934 kubelet[2562]: E1009 07:53:29.073914 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s24v6_calico-system(fd1b0b62-8fcc-49bd-9c52-8a285174cd0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s24v6_calico-system(fd1b0b62-8fcc-49bd-9c52-8a285174cd0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s24v6" podUID="fd1b0b62-8fcc-49bd-9c52-8a285174cd0c" Oct 9 07:53:29.075022 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93-shm.mount: 
Deactivated successfully. Oct 9 07:53:29.138193 kubelet[2562]: I1009 07:53:29.137334 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:29.144006 kubelet[2562]: I1009 07:53:29.143703 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:29.145144 containerd[1464]: time="2024-10-09T07:53:29.144780202Z" level=info msg="StopPodSandbox for \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\"" Oct 9 07:53:29.147284 containerd[1464]: time="2024-10-09T07:53:29.146281663Z" level=info msg="StopPodSandbox for \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\"" Oct 9 07:53:29.148844 containerd[1464]: time="2024-10-09T07:53:29.147837081Z" level=info msg="Ensure that sandbox 78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb in task-service has been cleanup successfully" Oct 9 07:53:29.149236 kubelet[2562]: I1009 07:53:29.148398 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:29.149336 containerd[1464]: time="2024-10-09T07:53:29.147872761Z" level=info msg="Ensure that sandbox 98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c in task-service has been cleanup successfully" Oct 9 07:53:29.152192 containerd[1464]: time="2024-10-09T07:53:29.151470680Z" level=info msg="StopPodSandbox for \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\"" Oct 9 07:53:29.152192 containerd[1464]: time="2024-10-09T07:53:29.151741720Z" level=info msg="Ensure that sandbox cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c in task-service has been cleanup successfully" Oct 9 07:53:29.158651 kubelet[2562]: I1009 07:53:29.158603 2562 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:29.161770 containerd[1464]: time="2024-10-09T07:53:29.160100609Z" level=info msg="StopPodSandbox for \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\"" Oct 9 07:53:29.163072 containerd[1464]: time="2024-10-09T07:53:29.162957103Z" level=info msg="Ensure that sandbox 6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93 in task-service has been cleanup successfully" Oct 9 07:53:29.260094 containerd[1464]: time="2024-10-09T07:53:29.259789973Z" level=error msg="StopPodSandbox for \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\" failed" error="failed to destroy network for sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:29.260879 kubelet[2562]: E1009 07:53:29.260149 2562 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:29.260879 kubelet[2562]: E1009 07:53:29.260270 2562 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb"} Oct 9 07:53:29.260879 kubelet[2562]: E1009 07:53:29.260346 2562 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b16fd7a4-7278-47f9-ac26-d1aa8683b5a6\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:53:29.260879 kubelet[2562]: E1009 07:53:29.260391 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b16fd7a4-7278-47f9-ac26-d1aa8683b5a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lcc6p" podUID="b16fd7a4-7278-47f9-ac26-d1aa8683b5a6" Oct 9 07:53:29.267365 containerd[1464]: time="2024-10-09T07:53:29.267120344Z" level=error msg="StopPodSandbox for \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\" failed" error="failed to destroy network for sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:29.267675 kubelet[2562]: E1009 07:53:29.267472 2562 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" 
Oct 9 07:53:29.267675 kubelet[2562]: E1009 07:53:29.267533 2562 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c"} Oct 9 07:53:29.267675 kubelet[2562]: E1009 07:53:29.267661 2562 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"15cd672c-8913-4d18-8c1d-961f59e5572e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:53:29.267984 kubelet[2562]: E1009 07:53:29.267705 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"15cd672c-8913-4d18-8c1d-961f59e5572e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58bc88798c-plhm7" podUID="15cd672c-8913-4d18-8c1d-961f59e5572e" Oct 9 07:53:29.281065 containerd[1464]: time="2024-10-09T07:53:29.280920165Z" level=error msg="StopPodSandbox for \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\" failed" error="failed to destroy network for sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:29.281473 kubelet[2562]: E1009 07:53:29.281205 2562 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:29.281473 kubelet[2562]: E1009 07:53:29.281262 2562 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c"} Oct 9 07:53:29.281473 kubelet[2562]: E1009 07:53:29.281300 2562 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5519e13-c682-44c5-8276-45bab21b54a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:53:29.281473 kubelet[2562]: E1009 07:53:29.281324 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5519e13-c682-44c5-8276-45bab21b54a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2qx8t" podUID="b5519e13-c682-44c5-8276-45bab21b54a1" Oct 9 07:53:29.287493 containerd[1464]: time="2024-10-09T07:53:29.287246794Z" level=error 
msg="StopPodSandbox for \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\" failed" error="failed to destroy network for sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 07:53:29.287693 kubelet[2562]: E1009 07:53:29.287526 2562 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:29.287693 kubelet[2562]: E1009 07:53:29.287596 2562 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93"} Oct 9 07:53:29.287693 kubelet[2562]: E1009 07:53:29.287642 2562 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd1b0b62-8fcc-49bd-9c52-8a285174cd0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 07:53:29.288011 kubelet[2562]: E1009 07:53:29.287687 2562 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd1b0b62-8fcc-49bd-9c52-8a285174cd0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s24v6" podUID="fd1b0b62-8fcc-49bd-9c52-8a285174cd0c" Oct 9 07:53:33.637343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038594359.mount: Deactivated successfully. Oct 9 07:53:33.809951 containerd[1464]: time="2024-10-09T07:53:33.809855916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:33.844831 containerd[1464]: time="2024-10-09T07:53:33.844746978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 07:53:33.917514 containerd[1464]: time="2024-10-09T07:53:33.916958575Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:34.024200 containerd[1464]: time="2024-10-09T07:53:34.024102481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:34.026092 containerd[1464]: time="2024-10-09T07:53:34.025769069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 5.867130913s" Oct 9 07:53:34.026092 containerd[1464]: time="2024-10-09T07:53:34.025838658Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 07:53:34.125597 containerd[1464]: time="2024-10-09T07:53:34.125524268Z" level=info msg="CreateContainer within sandbox \"aaddc3f538728adeca9f42716c5a9038d8b17e7b07a57050ae3eaae55f364f86\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 07:53:34.182943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843540058.mount: Deactivated successfully. Oct 9 07:53:34.203006 containerd[1464]: time="2024-10-09T07:53:34.202893076Z" level=info msg="CreateContainer within sandbox \"aaddc3f538728adeca9f42716c5a9038d8b17e7b07a57050ae3eaae55f364f86\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e10d0fc4a5ae5ecdb79b5c28529f447f0c757a7e3fd208aa90426181865b29d8\"" Oct 9 07:53:34.206104 containerd[1464]: time="2024-10-09T07:53:34.205805474Z" level=info msg="StartContainer for \"e10d0fc4a5ae5ecdb79b5c28529f447f0c757a7e3fd208aa90426181865b29d8\"" Oct 9 07:53:34.356497 systemd[1]: Started cri-containerd-e10d0fc4a5ae5ecdb79b5c28529f447f0c757a7e3fd208aa90426181865b29d8.scope - libcontainer container e10d0fc4a5ae5ecdb79b5c28529f447f0c757a7e3fd208aa90426181865b29d8. Oct 9 07:53:34.461608 containerd[1464]: time="2024-10-09T07:53:34.460805976Z" level=info msg="StartContainer for \"e10d0fc4a5ae5ecdb79b5c28529f447f0c757a7e3fd208aa90426181865b29d8\" returns successfully" Oct 9 07:53:34.576599 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 07:53:34.579457 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 9 07:53:35.210007 kubelet[2562]: E1009 07:53:35.209712 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:35.257514 systemd[1]: Started sshd@7-209.38.129.97:22-139.178.89.65:53862.service - OpenSSH per-connection server daemon (139.178.89.65:53862). Oct 9 07:53:35.375729 sshd[3559]: Accepted publickey for core from 139.178.89.65 port 53862 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:35.379692 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:35.392162 systemd-logind[1446]: New session 8 of user core. Oct 9 07:53:35.399215 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 07:53:35.599797 sshd[3559]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:35.606584 systemd[1]: sshd@7-209.38.129.97:22-139.178.89.65:53862.service: Deactivated successfully. Oct 9 07:53:35.610343 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 07:53:35.611854 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Oct 9 07:53:35.613692 systemd-logind[1446]: Removed session 8. 
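The recurring dns.go:153 warning above reflects the classic three-nameserver resolver limit that kubelet enforces when building a pod's resolv.conf. Notably, the applied line it logs repeats 67.207.67.2, which suggests the node's own resolv.conf lists duplicate servers. A small sketch (values copied from the warning) showing the applied line is at the cap and contains only two distinct servers:

```python
# The applied nameserver line from the repeated kubelet dns.go:153 warnings.
applied = "67.207.67.2 67.207.67.3 67.207.67.2"

MAX_NAMESERVERS = 3  # the resolver limit kubelet truncates to

servers = applied.split()
unique = list(dict.fromkeys(servers))  # order-preserving de-duplication

print(len(servers), unique)
# The line is already at the 3-entry cap, yet holds only 2 distinct
# resolvers -- de-duplicating upstream would make room for the omitted one.
```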
Oct 9 07:53:36.210250 kubelet[2562]: E1009 07:53:36.208445 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:37.214476 kubelet[2562]: E1009 07:53:37.212433 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:39.810221 kubelet[2562]: I1009 07:53:39.809843 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 07:53:39.811735 kubelet[2562]: E1009 07:53:39.811172 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:39.860632 kubelet[2562]: I1009 07:53:39.841574 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-79ssv" podStartSLOduration=6.676360824 podStartE2EDuration="22.837942129s" podCreationTimestamp="2024-10-09 07:53:17 +0000 UTC" firstStartedPulling="2024-10-09 07:53:17.871870248 +0000 UTC m=+23.127407377" lastFinishedPulling="2024-10-09 07:53:34.03345155 +0000 UTC m=+39.288988682" observedRunningTime="2024-10-09 07:53:35.260071641 +0000 UTC m=+40.515608779" watchObservedRunningTime="2024-10-09 07:53:39.837942129 +0000 UTC m=+45.093479295" Oct 9 07:53:40.170090 kernel: bpftool[3773]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 07:53:40.216874 kubelet[2562]: E1009 07:53:40.216494 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:40.612205 systemd-networkd[1371]: vxlan.calico: Link UP Oct 9 07:53:40.612215 systemd-networkd[1371]: vxlan.calico: Gained carrier Oct 9 
07:53:40.629397 systemd[1]: Started sshd@8-209.38.129.97:22-139.178.89.65:53870.service - OpenSSH per-connection server daemon (139.178.89.65:53870). Oct 9 07:53:40.758224 sshd[3830]: Accepted publickey for core from 139.178.89.65 port 53870 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:40.761938 sshd[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:40.767698 systemd-logind[1446]: New session 9 of user core. Oct 9 07:53:40.774343 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 07:53:40.947628 containerd[1464]: time="2024-10-09T07:53:40.945318557Z" level=info msg="StopPodSandbox for \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\"" Oct 9 07:53:41.037462 sshd[3830]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:41.045357 systemd[1]: sshd@8-209.38.129.97:22-139.178.89.65:53870.service: Deactivated successfully. Oct 9 07:53:41.050626 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 07:53:41.057611 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Oct 9 07:53:41.059614 systemd-logind[1446]: Removed session 9. Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.097 [INFO][3874] k8s.go 608: Cleaning up netns ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.097 [INFO][3874] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" iface="eth0" netns="/var/run/netns/cni-e6796854-25b3-e4c6-81d4-f81b2cc044ad" Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.099 [INFO][3874] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" iface="eth0" netns="/var/run/netns/cni-e6796854-25b3-e4c6-81d4-f81b2cc044ad" Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.101 [INFO][3874] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" iface="eth0" netns="/var/run/netns/cni-e6796854-25b3-e4c6-81d4-f81b2cc044ad" Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.101 [INFO][3874] k8s.go 615: Releasing IP address(es) ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.101 [INFO][3874] utils.go 188: Calico CNI releasing IP address ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.309 [INFO][3903] ipam_plugin.go 417: Releasing address using handleID ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" HandleID="k8s-pod-network.cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.310 [INFO][3903] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.311 [INFO][3903] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.324 [WARNING][3903] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" HandleID="k8s-pod-network.cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.324 [INFO][3903] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" HandleID="k8s-pod-network.cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.329 [INFO][3903] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:41.335121 containerd[1464]: 2024-10-09 07:53:41.331 [INFO][3874] k8s.go 621: Teardown processing complete. ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:41.335121 containerd[1464]: time="2024-10-09T07:53:41.334732338Z" level=info msg="TearDown network for sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\" successfully" Oct 9 07:53:41.335121 containerd[1464]: time="2024-10-09T07:53:41.334774275Z" level=info msg="StopPodSandbox for \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\" returns successfully" Oct 9 07:53:41.346354 systemd[1]: run-netns-cni\x2de6796854\x2d25b3\x2de4c6\x2d81d4\x2df81b2cc044ad.mount: Deactivated successfully. 
Oct 9 07:53:41.357728 containerd[1464]: time="2024-10-09T07:53:41.357653137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bc88798c-plhm7,Uid:15cd672c-8913-4d18-8c1d-961f59e5572e,Namespace:calico-system,Attempt:1,}" Oct 9 07:53:41.638755 systemd-networkd[1371]: cali344dfdb9a9b: Link UP Oct 9 07:53:41.643721 systemd-networkd[1371]: cali344dfdb9a9b: Gained carrier Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.504 [INFO][3920] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0 calico-kube-controllers-58bc88798c- calico-system 15cd672c-8913-4d18-8c1d-961f59e5572e 806 0 2024-10-09 07:53:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58bc88798c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.1.0-0-871bb8dd75 calico-kube-controllers-58bc88798c-plhm7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali344dfdb9a9b [] []}} ContainerID="611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" Namespace="calico-system" Pod="calico-kube-controllers-58bc88798c-plhm7" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.504 [INFO][3920] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" Namespace="calico-system" Pod="calico-kube-controllers-58bc88798c-plhm7" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.557 [INFO][3931] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" HandleID="k8s-pod-network.611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.571 [INFO][3931] ipam_plugin.go 270: Auto assigning IP ContainerID="611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" HandleID="k8s-pod-network.611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002927f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.1.0-0-871bb8dd75", "pod":"calico-kube-controllers-58bc88798c-plhm7", "timestamp":"2024-10-09 07:53:41.55714276 +0000 UTC"}, Hostname:"ci-4081.1.0-0-871bb8dd75", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.571 [INFO][3931] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.571 [INFO][3931] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.571 [INFO][3931] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-0-871bb8dd75' Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.575 [INFO][3931] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.586 [INFO][3931] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.594 [INFO][3931] ipam.go 489: Trying affinity for 192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.597 [INFO][3931] ipam.go 155: Attempting to load block cidr=192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.601 [INFO][3931] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.602 [INFO][3931] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.605 [INFO][3931] ipam.go 1685: Creating new handle: k8s-pod-network.611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.612 [INFO][3931] ipam.go 1203: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.627 [INFO][3931] ipam.go 1216: Successfully claimed IPs: [192.168.41.1/26] block=192.168.41.0/26 
handle="k8s-pod-network.611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.628 [INFO][3931] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.1/26] handle="k8s-pod-network.611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.628 [INFO][3931] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:41.663267 containerd[1464]: 2024-10-09 07:53:41.628 [INFO][3931] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.41.1/26] IPv6=[] ContainerID="611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" HandleID="k8s-pod-network.611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:41.666598 containerd[1464]: 2024-10-09 07:53:41.633 [INFO][3920] k8s.go 386: Populated endpoint ContainerID="611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" Namespace="calico-system" Pod="calico-kube-controllers-58bc88798c-plhm7" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0", GenerateName:"calico-kube-controllers-58bc88798c-", Namespace:"calico-system", SelfLink:"", UID:"15cd672c-8913-4d18-8c1d-961f59e5572e", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bc88798c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"", Pod:"calico-kube-controllers-58bc88798c-plhm7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali344dfdb9a9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:41.666598 containerd[1464]: 2024-10-09 07:53:41.633 [INFO][3920] k8s.go 387: Calico CNI using IPs: [192.168.41.1/32] ContainerID="611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" Namespace="calico-system" Pod="calico-kube-controllers-58bc88798c-plhm7" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:41.666598 containerd[1464]: 2024-10-09 07:53:41.634 [INFO][3920] dataplane_linux.go 68: Setting the host side veth name to cali344dfdb9a9b ContainerID="611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" Namespace="calico-system" Pod="calico-kube-controllers-58bc88798c-plhm7" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:41.666598 containerd[1464]: 2024-10-09 07:53:41.637 [INFO][3920] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" Namespace="calico-system" Pod="calico-kube-controllers-58bc88798c-plhm7" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 
07:53:41.666598 containerd[1464]: 2024-10-09 07:53:41.637 [INFO][3920] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" Namespace="calico-system" Pod="calico-kube-controllers-58bc88798c-plhm7" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0", GenerateName:"calico-kube-controllers-58bc88798c-", Namespace:"calico-system", SelfLink:"", UID:"15cd672c-8913-4d18-8c1d-961f59e5572e", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bc88798c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb", Pod:"calico-kube-controllers-58bc88798c-plhm7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali344dfdb9a9b", MAC:"3e:9f:38:6c:e2:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 
07:53:41.666598 containerd[1464]: 2024-10-09 07:53:41.657 [INFO][3920] k8s.go 500: Wrote updated endpoint to datastore ContainerID="611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb" Namespace="calico-system" Pod="calico-kube-controllers-58bc88798c-plhm7" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:41.722693 containerd[1464]: time="2024-10-09T07:53:41.721832046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:41.722693 containerd[1464]: time="2024-10-09T07:53:41.721924559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:41.722693 containerd[1464]: time="2024-10-09T07:53:41.721947046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:41.723545 containerd[1464]: time="2024-10-09T07:53:41.723443714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:41.780376 systemd[1]: Started cri-containerd-611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb.scope - libcontainer container 611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb. 
Oct 9 07:53:41.868229 containerd[1464]: time="2024-10-09T07:53:41.868166849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58bc88798c-plhm7,Uid:15cd672c-8913-4d18-8c1d-961f59e5572e,Namespace:calico-system,Attempt:1,} returns sandbox id \"611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb\"" Oct 9 07:53:41.898582 containerd[1464]: time="2024-10-09T07:53:41.898446435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 07:53:42.134339 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Oct 9 07:53:42.343573 systemd[1]: run-containerd-runc-k8s.io-611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb-runc.LOUhrn.mount: Deactivated successfully. Oct 9 07:53:43.159376 systemd-networkd[1371]: cali344dfdb9a9b: Gained IPv6LL Oct 9 07:53:43.939922 containerd[1464]: time="2024-10-09T07:53:43.938491793Z" level=info msg="StopPodSandbox for \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\"" Oct 9 07:53:43.939922 containerd[1464]: time="2024-10-09T07:53:43.938675163Z" level=info msg="StopPodSandbox for \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\"" Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.025 [INFO][4017] k8s.go 608: Cleaning up netns ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.026 [INFO][4017] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" iface="eth0" netns="/var/run/netns/cni-ddb76571-21fe-1df3-dbb1-b1a841ce9718" Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.027 [INFO][4017] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" iface="eth0" netns="/var/run/netns/cni-ddb76571-21fe-1df3-dbb1-b1a841ce9718" Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.027 [INFO][4017] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" iface="eth0" netns="/var/run/netns/cni-ddb76571-21fe-1df3-dbb1-b1a841ce9718" Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.027 [INFO][4017] k8s.go 615: Releasing IP address(es) ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.027 [INFO][4017] utils.go 188: Calico CNI releasing IP address ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.070 [INFO][4032] ipam_plugin.go 417: Releasing address using handleID ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" HandleID="k8s-pod-network.6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.070 [INFO][4032] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.070 [INFO][4032] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.078 [WARNING][4032] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" HandleID="k8s-pod-network.6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.078 [INFO][4032] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" HandleID="k8s-pod-network.6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.081 [INFO][4032] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:44.087382 containerd[1464]: 2024-10-09 07:53:44.084 [INFO][4017] k8s.go 621: Teardown processing complete. ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:44.089428 containerd[1464]: time="2024-10-09T07:53:44.088611699Z" level=info msg="TearDown network for sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\" successfully" Oct 9 07:53:44.089428 containerd[1464]: time="2024-10-09T07:53:44.088646251Z" level=info msg="StopPodSandbox for \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\" returns successfully" Oct 9 07:53:44.090528 containerd[1464]: time="2024-10-09T07:53:44.090135343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s24v6,Uid:fd1b0b62-8fcc-49bd-9c52-8a285174cd0c,Namespace:calico-system,Attempt:1,}" Oct 9 07:53:44.095880 systemd[1]: run-netns-cni\x2dddb76571\x2d21fe\x2d1df3\x2ddbb1\x2db1a841ce9718.mount: Deactivated successfully. 
Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.026 [INFO][4018] k8s.go 608: Cleaning up netns ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.026 [INFO][4018] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" iface="eth0" netns="/var/run/netns/cni-dbc15816-1c04-189a-7d8e-136e88235843" Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.027 [INFO][4018] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" iface="eth0" netns="/var/run/netns/cni-dbc15816-1c04-189a-7d8e-136e88235843" Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.027 [INFO][4018] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" iface="eth0" netns="/var/run/netns/cni-dbc15816-1c04-189a-7d8e-136e88235843" Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.027 [INFO][4018] k8s.go 615: Releasing IP address(es) ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.027 [INFO][4018] utils.go 188: Calico CNI releasing IP address ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.075 [INFO][4033] ipam_plugin.go 417: Releasing address using handleID ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" HandleID="k8s-pod-network.98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.076 [INFO][4033] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.081 [INFO][4033] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.093 [WARNING][4033] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" HandleID="k8s-pod-network.98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.093 [INFO][4033] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" HandleID="k8s-pod-network.98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.098 [INFO][4033] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:44.105309 containerd[1464]: 2024-10-09 07:53:44.102 [INFO][4018] k8s.go 621: Teardown processing complete. 
ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:44.110473 containerd[1464]: time="2024-10-09T07:53:44.105478203Z" level=info msg="TearDown network for sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\" successfully" Oct 9 07:53:44.110473 containerd[1464]: time="2024-10-09T07:53:44.105511751Z" level=info msg="StopPodSandbox for \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\" returns successfully" Oct 9 07:53:44.110548 kubelet[2562]: E1009 07:53:44.106556 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:44.113750 containerd[1464]: time="2024-10-09T07:53:44.112114168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2qx8t,Uid:b5519e13-c682-44c5-8276-45bab21b54a1,Namespace:kube-system,Attempt:1,}" Oct 9 07:53:44.118401 systemd[1]: run-netns-cni\x2ddbc15816\x2d1c04\x2d189a\x2d7d8e\x2d136e88235843.mount: Deactivated successfully. 
Oct 9 07:53:44.387878 systemd-networkd[1371]: cali3419515e0d7: Link UP Oct 9 07:53:44.388522 systemd-networkd[1371]: cali3419515e0d7: Gained carrier Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.216 [INFO][4046] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0 csi-node-driver- calico-system fd1b0b62-8fcc-49bd-9c52-8a285174cd0c 825 0 2024-10-09 07:53:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4081.1.0-0-871bb8dd75 csi-node-driver-s24v6 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali3419515e0d7 [] []}} ContainerID="311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" Namespace="calico-system" Pod="csi-node-driver-s24v6" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.216 [INFO][4046] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" Namespace="calico-system" Pod="csi-node-driver-s24v6" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.289 [INFO][4069] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" HandleID="k8s-pod-network.311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.302 [INFO][4069] ipam_plugin.go 270: Auto assigning IP 
ContainerID="311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" HandleID="k8s-pod-network.311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a360), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.1.0-0-871bb8dd75", "pod":"csi-node-driver-s24v6", "timestamp":"2024-10-09 07:53:44.289568196 +0000 UTC"}, Hostname:"ci-4081.1.0-0-871bb8dd75", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.303 [INFO][4069] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.303 [INFO][4069] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.303 [INFO][4069] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-0-871bb8dd75' Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.307 [INFO][4069] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.317 [INFO][4069] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.326 [INFO][4069] ipam.go 489: Trying affinity for 192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.329 [INFO][4069] ipam.go 155: Attempting to load block cidr=192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.333 [INFO][4069] ipam.go 232: Affinity is 
confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.333 [INFO][4069] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.336 [INFO][4069] ipam.go 1685: Creating new handle: k8s-pod-network.311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.347 [INFO][4069] ipam.go 1203: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.369 [INFO][4069] ipam.go 1216: Successfully claimed IPs: [192.168.41.2/26] block=192.168.41.0/26 handle="k8s-pod-network.311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.370 [INFO][4069] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.2/26] handle="k8s-pod-network.311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.370 [INFO][4069] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:53:44.431811 containerd[1464]: 2024-10-09 07:53:44.370 [INFO][4069] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.41.2/26] IPv6=[] ContainerID="311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" HandleID="k8s-pod-network.311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:44.435136 containerd[1464]: 2024-10-09 07:53:44.381 [INFO][4046] k8s.go 386: Populated endpoint ContainerID="311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" Namespace="calico-system" Pod="csi-node-driver-s24v6" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd1b0b62-8fcc-49bd-9c52-8a285174cd0c", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"", Pod:"csi-node-driver-s24v6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3419515e0d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:44.435136 containerd[1464]: 2024-10-09 07:53:44.381 [INFO][4046] k8s.go 387: Calico CNI using IPs: [192.168.41.2/32] ContainerID="311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" Namespace="calico-system" Pod="csi-node-driver-s24v6" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:44.435136 containerd[1464]: 2024-10-09 07:53:44.381 [INFO][4046] dataplane_linux.go 68: Setting the host side veth name to cali3419515e0d7 ContainerID="311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" Namespace="calico-system" Pod="csi-node-driver-s24v6" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:44.435136 containerd[1464]: 2024-10-09 07:53:44.388 [INFO][4046] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" Namespace="calico-system" Pod="csi-node-driver-s24v6" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:44.435136 containerd[1464]: 2024-10-09 07:53:44.391 [INFO][4046] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" Namespace="calico-system" Pod="csi-node-driver-s24v6" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd1b0b62-8fcc-49bd-9c52-8a285174cd0c", ResourceVersion:"825", Generation:0, 
CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef", Pod:"csi-node-driver-s24v6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3419515e0d7", MAC:"26:7c:46:ec:4f:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:44.435136 containerd[1464]: 2024-10-09 07:53:44.419 [INFO][4046] k8s.go 500: Wrote updated endpoint to datastore ContainerID="311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef" Namespace="calico-system" Pod="csi-node-driver-s24v6" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:44.481997 systemd-networkd[1371]: cali4f64fb5a44d: Link UP Oct 9 07:53:44.482184 systemd-networkd[1371]: cali4f64fb5a44d: Gained carrier Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.212 [INFO][4055] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0 coredns-7db6d8ff4d- kube-system b5519e13-c682-44c5-8276-45bab21b54a1 824 0 2024-10-09 07:53:10 +0000 
UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.1.0-0-871bb8dd75 coredns-7db6d8ff4d-2qx8t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4f64fb5a44d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2qx8t" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.212 [INFO][4055] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2qx8t" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.303 [INFO][4068] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" HandleID="k8s-pod-network.c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.319 [INFO][4068] ipam_plugin.go 270: Auto assigning IP ContainerID="c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" HandleID="k8s-pod-network.c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002907c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.1.0-0-871bb8dd75", "pod":"coredns-7db6d8ff4d-2qx8t", "timestamp":"2024-10-09 07:53:44.30314987 +0000 UTC"}, Hostname:"ci-4081.1.0-0-871bb8dd75", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.320 [INFO][4068] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.370 [INFO][4068] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.371 [INFO][4068] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-0-871bb8dd75' Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.377 [INFO][4068] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.390 [INFO][4068] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.416 [INFO][4068] ipam.go 489: Trying affinity for 192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.427 [INFO][4068] ipam.go 155: Attempting to load block cidr=192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.435 [INFO][4068] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.435 [INFO][4068] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.442 [INFO][4068] ipam.go 1685: Creating new handle: 
k8s-pod-network.c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06 Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.450 [INFO][4068] ipam.go 1203: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.465 [INFO][4068] ipam.go 1216: Successfully claimed IPs: [192.168.41.3/26] block=192.168.41.0/26 handle="k8s-pod-network.c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.466 [INFO][4068] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.3/26] handle="k8s-pod-network.c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.466 [INFO][4068] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 07:53:44.517114 containerd[1464]: 2024-10-09 07:53:44.466 [INFO][4068] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.41.3/26] IPv6=[] ContainerID="c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" HandleID="k8s-pod-network.c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:44.518697 containerd[1464]: 2024-10-09 07:53:44.474 [INFO][4055] k8s.go 386: Populated endpoint ContainerID="c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2qx8t" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b5519e13-c682-44c5-8276-45bab21b54a1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"", Pod:"coredns-7db6d8ff4d-2qx8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f64fb5a44d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:44.518697 containerd[1464]: 2024-10-09 07:53:44.477 [INFO][4055] k8s.go 387: Calico CNI using IPs: [192.168.41.3/32] ContainerID="c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2qx8t" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:44.518697 containerd[1464]: 2024-10-09 07:53:44.477 [INFO][4055] dataplane_linux.go 68: Setting the host side veth name to cali4f64fb5a44d ContainerID="c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2qx8t" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:44.518697 containerd[1464]: 2024-10-09 07:53:44.481 [INFO][4055] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2qx8t" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:44.518697 containerd[1464]: 2024-10-09 07:53:44.485 [INFO][4055] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2qx8t" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b5519e13-c682-44c5-8276-45bab21b54a1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06", Pod:"coredns-7db6d8ff4d-2qx8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f64fb5a44d", MAC:"16:14:63:26:63:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:44.518697 containerd[1464]: 2024-10-09 07:53:44.507 [INFO][4055] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2qx8t" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:44.551234 containerd[1464]: time="2024-10-09T07:53:44.550016818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:44.551234 containerd[1464]: time="2024-10-09T07:53:44.550361227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:44.551234 containerd[1464]: time="2024-10-09T07:53:44.550380709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:44.551234 containerd[1464]: time="2024-10-09T07:53:44.550482999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:44.599933 systemd[1]: Started cri-containerd-311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef.scope - libcontainer container 311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef. Oct 9 07:53:44.618838 containerd[1464]: time="2024-10-09T07:53:44.618337950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:44.620672 containerd[1464]: time="2024-10-09T07:53:44.620163719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:44.620672 containerd[1464]: time="2024-10-09T07:53:44.620229821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:44.620672 containerd[1464]: time="2024-10-09T07:53:44.620419135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:44.660444 systemd[1]: Started cri-containerd-c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06.scope - libcontainer container c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06. Oct 9 07:53:44.675171 containerd[1464]: time="2024-10-09T07:53:44.674721954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s24v6,Uid:fd1b0b62-8fcc-49bd-9c52-8a285174cd0c,Namespace:calico-system,Attempt:1,} returns sandbox id \"311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef\"" Oct 9 07:53:44.748108 containerd[1464]: time="2024-10-09T07:53:44.748063542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2qx8t,Uid:b5519e13-c682-44c5-8276-45bab21b54a1,Namespace:kube-system,Attempt:1,} returns sandbox id \"c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06\"" Oct 9 07:53:44.749607 kubelet[2562]: E1009 07:53:44.749210 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:44.772858 containerd[1464]: time="2024-10-09T07:53:44.772529720Z" level=info msg="CreateContainer within sandbox \"c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:53:44.799656 containerd[1464]: time="2024-10-09T07:53:44.799602646Z" level=info msg="CreateContainer within sandbox \"c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6adcb329bebd0898bf9731ddaca098fd0fe50aac9b4bd8e243efb269db6824d4\"" Oct 9 07:53:44.801337 
containerd[1464]: time="2024-10-09T07:53:44.801297328Z" level=info msg="StartContainer for \"6adcb329bebd0898bf9731ddaca098fd0fe50aac9b4bd8e243efb269db6824d4\"" Oct 9 07:53:44.859262 systemd[1]: Started cri-containerd-6adcb329bebd0898bf9731ddaca098fd0fe50aac9b4bd8e243efb269db6824d4.scope - libcontainer container 6adcb329bebd0898bf9731ddaca098fd0fe50aac9b4bd8e243efb269db6824d4. Oct 9 07:53:44.912944 containerd[1464]: time="2024-10-09T07:53:44.912163272Z" level=info msg="StartContainer for \"6adcb329bebd0898bf9731ddaca098fd0fe50aac9b4bd8e243efb269db6824d4\" returns successfully" Oct 9 07:53:44.941197 containerd[1464]: time="2024-10-09T07:53:44.939645205Z" level=info msg="StopPodSandbox for \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\"" Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.060 [INFO][4241] k8s.go 608: Cleaning up netns ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.060 [INFO][4241] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" iface="eth0" netns="/var/run/netns/cni-52fa106c-2bc0-873f-fda1-a6f775eae42b" Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.062 [INFO][4241] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" iface="eth0" netns="/var/run/netns/cni-52fa106c-2bc0-873f-fda1-a6f775eae42b" Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.065 [INFO][4241] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" iface="eth0" netns="/var/run/netns/cni-52fa106c-2bc0-873f-fda1-a6f775eae42b" Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.065 [INFO][4241] k8s.go 615: Releasing IP address(es) ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.065 [INFO][4241] utils.go 188: Calico CNI releasing IP address ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.129 [INFO][4248] ipam_plugin.go 417: Releasing address using handleID ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" HandleID="k8s-pod-network.78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.129 [INFO][4248] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.130 [INFO][4248] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.139 [WARNING][4248] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" HandleID="k8s-pod-network.78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.139 [INFO][4248] ipam_plugin.go 445: Releasing address using workloadID ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" HandleID="k8s-pod-network.78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.141 [INFO][4248] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:45.152989 containerd[1464]: 2024-10-09 07:53:45.146 [INFO][4241] k8s.go 621: Teardown processing complete. ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:45.157352 containerd[1464]: time="2024-10-09T07:53:45.155475225Z" level=info msg="TearDown network for sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\" successfully" Oct 9 07:53:45.157352 containerd[1464]: time="2024-10-09T07:53:45.155506028Z" level=info msg="StopPodSandbox for \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\" returns successfully" Oct 9 07:53:45.158369 kubelet[2562]: E1009 07:53:45.157955 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:45.161331 containerd[1464]: time="2024-10-09T07:53:45.161264596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lcc6p,Uid:b16fd7a4-7278-47f9-ac26-d1aa8683b5a6,Namespace:kube-system,Attempt:1,}" Oct 9 07:53:45.163489 systemd[1]: run-netns-cni\x2d52fa106c\x2d2bc0\x2d873f\x2dfda1\x2da6f775eae42b.mount: Deactivated successfully. 
Oct 9 07:53:45.260145 kubelet[2562]: E1009 07:53:45.260098 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:45.520615 systemd-networkd[1371]: calida9e10be1fd: Link UP Oct 9 07:53:45.523730 systemd-networkd[1371]: calida9e10be1fd: Gained carrier Oct 9 07:53:45.544528 kubelet[2562]: I1009 07:53:45.543889 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2qx8t" podStartSLOduration=35.543854382 podStartE2EDuration="35.543854382s" podCreationTimestamp="2024-10-09 07:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:53:45.34058833 +0000 UTC m=+50.596125555" watchObservedRunningTime="2024-10-09 07:53:45.543854382 +0000 UTC m=+50.799391526" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.278 [INFO][4257] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0 coredns-7db6d8ff4d- kube-system b16fd7a4-7278-47f9-ac26-d1aa8683b5a6 843 0 2024-10-09 07:53:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.1.0-0-871bb8dd75 coredns-7db6d8ff4d-lcc6p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida9e10be1fd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lcc6p" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.278 [INFO][4257] k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lcc6p" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.418 [INFO][4268] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" HandleID="k8s-pod-network.a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.438 [INFO][4268] ipam_plugin.go 270: Auto assigning IP ContainerID="a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" HandleID="k8s-pod-network.a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b25c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.1.0-0-871bb8dd75", "pod":"coredns-7db6d8ff4d-lcc6p", "timestamp":"2024-10-09 07:53:45.41863973 +0000 UTC"}, Hostname:"ci-4081.1.0-0-871bb8dd75", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.438 [INFO][4268] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.438 [INFO][4268] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.438 [INFO][4268] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-0-871bb8dd75' Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.445 [INFO][4268] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.459 [INFO][4268] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.469 [INFO][4268] ipam.go 489: Trying affinity for 192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.477 [INFO][4268] ipam.go 155: Attempting to load block cidr=192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.482 [INFO][4268] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.482 [INFO][4268] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.485 [INFO][4268] ipam.go 1685: Creating new handle: k8s-pod-network.a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2 Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.495 [INFO][4268] ipam.go 1203: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.504 [INFO][4268] ipam.go 1216: Successfully claimed IPs: [192.168.41.4/26] block=192.168.41.0/26 
handle="k8s-pod-network.a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.505 [INFO][4268] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.4/26] handle="k8s-pod-network.a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.505 [INFO][4268] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:45.552281 containerd[1464]: 2024-10-09 07:53:45.505 [INFO][4268] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.41.4/26] IPv6=[] ContainerID="a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" HandleID="k8s-pod-network.a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:45.554977 containerd[1464]: 2024-10-09 07:53:45.509 [INFO][4257] k8s.go 386: Populated endpoint ContainerID="a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lcc6p" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b16fd7a4-7278-47f9-ac26-d1aa8683b5a6", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"", Pod:"coredns-7db6d8ff4d-lcc6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida9e10be1fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:45.554977 containerd[1464]: 2024-10-09 07:53:45.511 [INFO][4257] k8s.go 387: Calico CNI using IPs: [192.168.41.4/32] ContainerID="a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lcc6p" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:45.554977 containerd[1464]: 2024-10-09 07:53:45.511 [INFO][4257] dataplane_linux.go 68: Setting the host side veth name to calida9e10be1fd ContainerID="a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lcc6p" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:45.554977 containerd[1464]: 2024-10-09 07:53:45.522 [INFO][4257] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lcc6p" 
WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:45.554977 containerd[1464]: 2024-10-09 07:53:45.524 [INFO][4257] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lcc6p" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b16fd7a4-7278-47f9-ac26-d1aa8683b5a6", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2", Pod:"coredns-7db6d8ff4d-lcc6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida9e10be1fd", MAC:"12:7e:26:17:f9:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:45.554977 containerd[1464]: 2024-10-09 07:53:45.545 [INFO][4257] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lcc6p" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:45.600171 containerd[1464]: time="2024-10-09T07:53:45.599628008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:53:45.601153 containerd[1464]: time="2024-10-09T07:53:45.600169340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:53:45.601153 containerd[1464]: time="2024-10-09T07:53:45.600212411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:45.601153 containerd[1464]: time="2024-10-09T07:53:45.600369256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:53:45.661626 systemd[1]: Started cri-containerd-a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2.scope - libcontainer container a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2. 
Oct 9 07:53:45.754576 containerd[1464]: time="2024-10-09T07:53:45.754520316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lcc6p,Uid:b16fd7a4-7278-47f9-ac26-d1aa8683b5a6,Namespace:kube-system,Attempt:1,} returns sandbox id \"a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2\"" Oct 9 07:53:45.755465 kubelet[2562]: E1009 07:53:45.755435 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:45.760185 containerd[1464]: time="2024-10-09T07:53:45.759981886Z" level=info msg="CreateContainer within sandbox \"a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 07:53:45.793918 containerd[1464]: time="2024-10-09T07:53:45.793772936Z" level=info msg="CreateContainer within sandbox \"a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62895ec37e3f8224e61f695c450b3c69ca08247f03bca97b17e2fc1894f9bed2\"" Oct 9 07:53:45.797788 containerd[1464]: time="2024-10-09T07:53:45.797735969Z" level=info msg="StartContainer for \"62895ec37e3f8224e61f695c450b3c69ca08247f03bca97b17e2fc1894f9bed2\"" Oct 9 07:53:45.829792 containerd[1464]: time="2024-10-09T07:53:45.829265706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:45.831219 containerd[1464]: time="2024-10-09T07:53:45.831156909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 07:53:45.835313 containerd[1464]: time="2024-10-09T07:53:45.835253508Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:45.840955 containerd[1464]: time="2024-10-09T07:53:45.840899754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:45.855552 systemd[1]: Started cri-containerd-62895ec37e3f8224e61f695c450b3c69ca08247f03bca97b17e2fc1894f9bed2.scope - libcontainer container 62895ec37e3f8224e61f695c450b3c69ca08247f03bca97b17e2fc1894f9bed2. Oct 9 07:53:45.865941 containerd[1464]: time="2024-10-09T07:53:45.865618939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.967118657s" Oct 9 07:53:45.865941 containerd[1464]: time="2024-10-09T07:53:45.865793716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 07:53:45.870250 containerd[1464]: time="2024-10-09T07:53:45.869509905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 07:53:45.900157 containerd[1464]: time="2024-10-09T07:53:45.900092681Z" level=info msg="CreateContainer within sandbox \"611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 07:53:45.924923 containerd[1464]: time="2024-10-09T07:53:45.924734723Z" level=info msg="CreateContainer within sandbox \"611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"bec8cbee2bf6aaf0b01e672233927051b8b95845b87ec4404eda10b617eed423\"" Oct 9 07:53:45.929213 containerd[1464]: time="2024-10-09T07:53:45.928745513Z" level=info msg="StartContainer for \"bec8cbee2bf6aaf0b01e672233927051b8b95845b87ec4404eda10b617eed423\"" Oct 9 07:53:45.968671 containerd[1464]: time="2024-10-09T07:53:45.968619470Z" level=info msg="StartContainer for \"62895ec37e3f8224e61f695c450b3c69ca08247f03bca97b17e2fc1894f9bed2\" returns successfully" Oct 9 07:53:46.023591 systemd[1]: Started cri-containerd-bec8cbee2bf6aaf0b01e672233927051b8b95845b87ec4404eda10b617eed423.scope - libcontainer container bec8cbee2bf6aaf0b01e672233927051b8b95845b87ec4404eda10b617eed423. Oct 9 07:53:46.058527 systemd[1]: Started sshd@9-209.38.129.97:22-139.178.89.65:51516.service - OpenSSH per-connection server daemon (139.178.89.65:51516). Oct 9 07:53:46.175407 containerd[1464]: time="2024-10-09T07:53:46.175355546Z" level=info msg="StartContainer for \"bec8cbee2bf6aaf0b01e672233927051b8b95845b87ec4404eda10b617eed423\" returns successfully" Oct 9 07:53:46.230197 systemd-networkd[1371]: cali3419515e0d7: Gained IPv6LL Oct 9 07:53:46.246525 sshd[4397]: Accepted publickey for core from 139.178.89.65 port 51516 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:46.253349 sshd[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:46.269256 systemd-logind[1446]: New session 10 of user core. Oct 9 07:53:46.275272 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 9 07:53:46.289740 kubelet[2562]: E1009 07:53:46.289691 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:46.326051 kubelet[2562]: E1009 07:53:46.325078 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:46.358243 systemd-networkd[1371]: cali4f64fb5a44d: Gained IPv6LL Oct 9 07:53:46.439084 kubelet[2562]: I1009 07:53:46.438207 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lcc6p" podStartSLOduration=36.438169311 podStartE2EDuration="36.438169311s" podCreationTimestamp="2024-10-09 07:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 07:53:46.413430124 +0000 UTC m=+51.668967264" watchObservedRunningTime="2024-10-09 07:53:46.438169311 +0000 UTC m=+51.693706443" Oct 9 07:53:46.502400 kubelet[2562]: I1009 07:53:46.501839 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58bc88798c-plhm7" podStartSLOduration=25.521125727 podStartE2EDuration="29.501818591s" podCreationTimestamp="2024-10-09 07:53:17 +0000 UTC" firstStartedPulling="2024-10-09 07:53:41.887557444 +0000 UTC m=+47.143094565" lastFinishedPulling="2024-10-09 07:53:45.868250288 +0000 UTC m=+51.123787429" observedRunningTime="2024-10-09 07:53:46.463400151 +0000 UTC m=+51.718937292" watchObservedRunningTime="2024-10-09 07:53:46.501818591 +0000 UTC m=+51.757355725" Oct 9 07:53:46.660216 sshd[4397]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:46.675203 systemd[1]: sshd@9-209.38.129.97:22-139.178.89.65:51516.service: Deactivated successfully. 
Oct 9 07:53:46.679647 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 07:53:46.681883 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Oct 9 07:53:46.691822 systemd[1]: Started sshd@10-209.38.129.97:22-139.178.89.65:51524.service - OpenSSH per-connection server daemon (139.178.89.65:51524). Oct 9 07:53:46.697805 systemd-logind[1446]: Removed session 10. Oct 9 07:53:46.761016 sshd[4453]: Accepted publickey for core from 139.178.89.65 port 51524 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:46.764858 sshd[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:46.781124 systemd-logind[1446]: New session 11 of user core. Oct 9 07:53:46.788688 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 07:53:47.181440 sshd[4453]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:47.190513 systemd-networkd[1371]: calida9e10be1fd: Gained IPv6LL Oct 9 07:53:47.200994 systemd[1]: sshd@10-209.38.129.97:22-139.178.89.65:51524.service: Deactivated successfully. Oct 9 07:53:47.214468 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 07:53:47.244588 systemd[1]: Started sshd@11-209.38.129.97:22-139.178.89.65:51532.service - OpenSSH per-connection server daemon (139.178.89.65:51532). Oct 9 07:53:47.254384 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Oct 9 07:53:47.258922 systemd-logind[1446]: Removed session 11. 
Oct 9 07:53:47.334665 kubelet[2562]: E1009 07:53:47.332701 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:47.334665 kubelet[2562]: E1009 07:53:47.334479 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:47.342085 sshd[4470]: Accepted publickey for core from 139.178.89.65 port 51532 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:47.343383 sshd[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:47.362379 systemd-logind[1446]: New session 12 of user core. Oct 9 07:53:47.367428 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 07:53:47.509269 containerd[1464]: time="2024-10-09T07:53:47.509034119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:47.515656 containerd[1464]: time="2024-10-09T07:53:47.514701252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 07:53:47.518784 containerd[1464]: time="2024-10-09T07:53:47.517679839Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:47.527017 containerd[1464]: time="2024-10-09T07:53:47.526943469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:47.527544 containerd[1464]: time="2024-10-09T07:53:47.527501933Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.657943536s" Oct 9 07:53:47.527544 containerd[1464]: time="2024-10-09T07:53:47.527542449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 07:53:47.536583 containerd[1464]: time="2024-10-09T07:53:47.536533966Z" level=info msg="CreateContainer within sandbox \"311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 07:53:47.566301 containerd[1464]: time="2024-10-09T07:53:47.565601813Z" level=info msg="CreateContainer within sandbox \"311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"41743bad1dac8c5d8c51452fe7ee1b98f991a147f10a5a75253f489d473aaf4e\"" Oct 9 07:53:47.568435 containerd[1464]: time="2024-10-09T07:53:47.567273939Z" level=info msg="StartContainer for \"41743bad1dac8c5d8c51452fe7ee1b98f991a147f10a5a75253f489d473aaf4e\"" Oct 9 07:53:47.655246 sshd[4470]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:47.665720 systemd[1]: sshd@11-209.38.129.97:22-139.178.89.65:51532.service: Deactivated successfully. Oct 9 07:53:47.668124 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 07:53:47.676429 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Oct 9 07:53:47.683390 systemd[1]: Started cri-containerd-41743bad1dac8c5d8c51452fe7ee1b98f991a147f10a5a75253f489d473aaf4e.scope - libcontainer container 41743bad1dac8c5d8c51452fe7ee1b98f991a147f10a5a75253f489d473aaf4e. 
Oct 9 07:53:47.685984 systemd-logind[1446]: Removed session 12. Oct 9 07:53:47.844234 containerd[1464]: time="2024-10-09T07:53:47.843270550Z" level=info msg="StartContainer for \"41743bad1dac8c5d8c51452fe7ee1b98f991a147f10a5a75253f489d473aaf4e\" returns successfully" Oct 9 07:53:47.846178 containerd[1464]: time="2024-10-09T07:53:47.845131816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 07:53:48.336922 kubelet[2562]: E1009 07:53:48.336651 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:48.337678 kubelet[2562]: E1009 07:53:48.337496 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:49.429106 containerd[1464]: time="2024-10-09T07:53:49.429008909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:49.433592 containerd[1464]: time="2024-10-09T07:53:49.432085753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 07:53:49.433592 containerd[1464]: time="2024-10-09T07:53:49.432220900Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:49.435943 containerd[1464]: time="2024-10-09T07:53:49.435895188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:53:49.437295 containerd[1464]: time="2024-10-09T07:53:49.437247184Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.592048933s" Oct 9 07:53:49.437960 containerd[1464]: time="2024-10-09T07:53:49.437415726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 07:53:49.442904 containerd[1464]: time="2024-10-09T07:53:49.442618084Z" level=info msg="CreateContainer within sandbox \"311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 07:53:49.478166 containerd[1464]: time="2024-10-09T07:53:49.476285637Z" level=info msg="CreateContainer within sandbox \"311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d41b7a493b939650ab5fd3172a6bf32f04d4bfa4d882f0bd3491b53bf8f4f855\"" Oct 9 07:53:49.481999 containerd[1464]: time="2024-10-09T07:53:49.479799923Z" level=info msg="StartContainer for \"d41b7a493b939650ab5fd3172a6bf32f04d4bfa4d882f0bd3491b53bf8f4f855\"" Oct 9 07:53:49.486141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112155271.mount: Deactivated successfully. Oct 9 07:53:49.533401 systemd[1]: Started cri-containerd-d41b7a493b939650ab5fd3172a6bf32f04d4bfa4d882f0bd3491b53bf8f4f855.scope - libcontainer container d41b7a493b939650ab5fd3172a6bf32f04d4bfa4d882f0bd3491b53bf8f4f855. 
Oct 9 07:53:49.578008 containerd[1464]: time="2024-10-09T07:53:49.577849877Z" level=info msg="StartContainer for \"d41b7a493b939650ab5fd3172a6bf32f04d4bfa4d882f0bd3491b53bf8f4f855\" returns successfully" Oct 9 07:53:50.184582 kubelet[2562]: I1009 07:53:50.184509 2562 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 07:53:50.187000 kubelet[2562]: I1009 07:53:50.186949 2562 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 07:53:51.242245 systemd[1]: run-containerd-runc-k8s.io-e10d0fc4a5ae5ecdb79b5c28529f447f0c757a7e3fd208aa90426181865b29d8-runc.fZrvOJ.mount: Deactivated successfully. Oct 9 07:53:51.324519 kubelet[2562]: E1009 07:53:51.324010 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:53:51.341625 kubelet[2562]: I1009 07:53:51.341561 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s24v6" podStartSLOduration=29.581652716 podStartE2EDuration="34.341538017s" podCreationTimestamp="2024-10-09 07:53:17 +0000 UTC" firstStartedPulling="2024-10-09 07:53:44.680140559 +0000 UTC m=+49.935677680" lastFinishedPulling="2024-10-09 07:53:49.440025848 +0000 UTC m=+54.695562981" observedRunningTime="2024-10-09 07:53:50.363283631 +0000 UTC m=+55.618820772" watchObservedRunningTime="2024-10-09 07:53:51.341538017 +0000 UTC m=+56.597075159" Oct 9 07:53:52.673550 systemd[1]: Started sshd@12-209.38.129.97:22-139.178.89.65:51536.service - OpenSSH per-connection server daemon (139.178.89.65:51536). 
Oct 9 07:53:52.800125 sshd[4600]: Accepted publickey for core from 139.178.89.65 port 51536 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:52.804461 sshd[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:52.820181 systemd-logind[1446]: New session 13 of user core. Oct 9 07:53:52.826141 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 07:53:53.183520 sshd[4600]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:53.190584 systemd[1]: sshd@12-209.38.129.97:22-139.178.89.65:51536.service: Deactivated successfully. Oct 9 07:53:53.195072 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 07:53:53.197008 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Oct 9 07:53:53.199235 systemd-logind[1446]: Removed session 13. Oct 9 07:53:54.935494 containerd[1464]: time="2024-10-09T07:53:54.934912123Z" level=info msg="StopPodSandbox for \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\"" Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:54.994 [WARNING][4625] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd1b0b62-8fcc-49bd-9c52-8a285174cd0c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef", Pod:"csi-node-driver-s24v6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3419515e0d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:54.994 [INFO][4625] k8s.go 608: Cleaning up netns ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:54.994 [INFO][4625] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" iface="eth0" netns="" Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:54.994 [INFO][4625] k8s.go 615: Releasing IP address(es) ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:54.994 [INFO][4625] utils.go 188: Calico CNI releasing IP address ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:55.031 [INFO][4632] ipam_plugin.go 417: Releasing address using handleID ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" HandleID="k8s-pod-network.6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:55.031 [INFO][4632] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:55.031 [INFO][4632] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:55.039 [WARNING][4632] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" HandleID="k8s-pod-network.6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:55.040 [INFO][4632] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" HandleID="k8s-pod-network.6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:55.043 [INFO][4632] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:55.047629 containerd[1464]: 2024-10-09 07:53:55.045 [INFO][4625] k8s.go 621: Teardown processing complete. ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:55.049206 containerd[1464]: time="2024-10-09T07:53:55.048272858Z" level=info msg="TearDown network for sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\" successfully" Oct 9 07:53:55.049206 containerd[1464]: time="2024-10-09T07:53:55.048329303Z" level=info msg="StopPodSandbox for \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\" returns successfully" Oct 9 07:53:55.049573 containerd[1464]: time="2024-10-09T07:53:55.049515034Z" level=info msg="RemovePodSandbox for \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\"" Oct 9 07:53:55.052080 containerd[1464]: time="2024-10-09T07:53:55.052002638Z" level=info msg="Forcibly stopping sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\"" Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.107 [WARNING][4650] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd1b0b62-8fcc-49bd-9c52-8a285174cd0c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"311fa1f3680c04b4cd9cd52bb2a567e5451740a89cb06e18ba46c4c5e1d3b5ef", Pod:"csi-node-driver-s24v6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.41.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali3419515e0d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.108 [INFO][4650] k8s.go 608: Cleaning up netns ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.108 [INFO][4650] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" iface="eth0" netns="" Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.108 [INFO][4650] k8s.go 615: Releasing IP address(es) ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.108 [INFO][4650] utils.go 188: Calico CNI releasing IP address ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.140 [INFO][4656] ipam_plugin.go 417: Releasing address using handleID ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" HandleID="k8s-pod-network.6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.140 [INFO][4656] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.140 [INFO][4656] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.147 [WARNING][4656] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" HandleID="k8s-pod-network.6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.147 [INFO][4656] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" HandleID="k8s-pod-network.6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Workload="ci--4081.1.0--0--871bb8dd75-k8s-csi--node--driver--s24v6-eth0" Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.149 [INFO][4656] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:55.153259 containerd[1464]: 2024-10-09 07:53:55.151 [INFO][4650] k8s.go 621: Teardown processing complete. ContainerID="6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93" Oct 9 07:53:55.153259 containerd[1464]: time="2024-10-09T07:53:55.153190149Z" level=info msg="TearDown network for sandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\" successfully" Oct 9 07:53:55.164521 containerd[1464]: time="2024-10-09T07:53:55.164454476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:53:55.164962 containerd[1464]: time="2024-10-09T07:53:55.164574615Z" level=info msg="RemovePodSandbox \"6fe48517e14ac5b9d986a21594c7cd5a58739f0f7cdbbed61ca4fe969c26fc93\" returns successfully" Oct 9 07:53:55.173279 containerd[1464]: time="2024-10-09T07:53:55.173162339Z" level=info msg="StopPodSandbox for \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\"" Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.228 [WARNING][4674] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0", GenerateName:"calico-kube-controllers-58bc88798c-", Namespace:"calico-system", SelfLink:"", UID:"15cd672c-8913-4d18-8c1d-961f59e5572e", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bc88798c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb", Pod:"calico-kube-controllers-58bc88798c-plhm7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali344dfdb9a9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.228 [INFO][4674] k8s.go 608: Cleaning up netns ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.228 [INFO][4674] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" iface="eth0" netns="" Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.228 [INFO][4674] k8s.go 615: Releasing IP address(es) ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.228 [INFO][4674] utils.go 188: Calico CNI releasing IP address ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.257 [INFO][4680] ipam_plugin.go 417: Releasing address using handleID ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" HandleID="k8s-pod-network.cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.257 [INFO][4680] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.257 [INFO][4680] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.264 [WARNING][4680] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" HandleID="k8s-pod-network.cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.264 [INFO][4680] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" HandleID="k8s-pod-network.cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.266 [INFO][4680] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:55.271097 containerd[1464]: 2024-10-09 07:53:55.268 [INFO][4674] k8s.go 621: Teardown processing complete. ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:55.271097 containerd[1464]: time="2024-10-09T07:53:55.270849778Z" level=info msg="TearDown network for sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\" successfully" Oct 9 07:53:55.271097 containerd[1464]: time="2024-10-09T07:53:55.270881783Z" level=info msg="StopPodSandbox for \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\" returns successfully" Oct 9 07:53:55.274877 containerd[1464]: time="2024-10-09T07:53:55.272090565Z" level=info msg="RemovePodSandbox for \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\"" Oct 9 07:53:55.274877 containerd[1464]: time="2024-10-09T07:53:55.272170952Z" level=info msg="Forcibly stopping sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\"" Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.334 [WARNING][4698] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0", GenerateName:"calico-kube-controllers-58bc88798c-", Namespace:"calico-system", SelfLink:"", UID:"15cd672c-8913-4d18-8c1d-961f59e5572e", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58bc88798c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"611edab44e04ba583ac9e4e5b254625492d99a173135191b1aabaf0a40b13edb", Pod:"calico-kube-controllers-58bc88798c-plhm7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.41.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali344dfdb9a9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.334 [INFO][4698] k8s.go 608: Cleaning up netns ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.334 [INFO][4698] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" iface="eth0" netns="" Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.335 [INFO][4698] k8s.go 615: Releasing IP address(es) ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.335 [INFO][4698] utils.go 188: Calico CNI releasing IP address ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.372 [INFO][4704] ipam_plugin.go 417: Releasing address using handleID ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" HandleID="k8s-pod-network.cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.372 [INFO][4704] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.372 [INFO][4704] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.382 [WARNING][4704] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" HandleID="k8s-pod-network.cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.382 [INFO][4704] ipam_plugin.go 445: Releasing address using workloadID ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" HandleID="k8s-pod-network.cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--kube--controllers--58bc88798c--plhm7-eth0" Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.385 [INFO][4704] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:55.389147 containerd[1464]: 2024-10-09 07:53:55.387 [INFO][4698] k8s.go 621: Teardown processing complete. ContainerID="cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c" Oct 9 07:53:55.389942 containerd[1464]: time="2024-10-09T07:53:55.389214626Z" level=info msg="TearDown network for sandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\" successfully" Oct 9 07:53:55.394189 containerd[1464]: time="2024-10-09T07:53:55.394094692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:53:55.394503 containerd[1464]: time="2024-10-09T07:53:55.394232198Z" level=info msg="RemovePodSandbox \"cdd1a19b504362eb81755bcd75c3dc12bb6dfa397d90d601def2dab80c969b2c\" returns successfully" Oct 9 07:53:55.395048 containerd[1464]: time="2024-10-09T07:53:55.394975948Z" level=info msg="StopPodSandbox for \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\"" Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.446 [WARNING][4722] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b5519e13-c682-44c5-8276-45bab21b54a1", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06", Pod:"coredns-7db6d8ff4d-2qx8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f64fb5a44d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.446 [INFO][4722] k8s.go 608: Cleaning up netns ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.447 [INFO][4722] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" iface="eth0" netns="" Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.447 [INFO][4722] k8s.go 615: Releasing IP address(es) ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.447 [INFO][4722] utils.go 188: Calico CNI releasing IP address ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.480 [INFO][4728] ipam_plugin.go 417: Releasing address using handleID ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" HandleID="k8s-pod-network.98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.480 [INFO][4728] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.480 [INFO][4728] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.488 [WARNING][4728] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" HandleID="k8s-pod-network.98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.488 [INFO][4728] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" HandleID="k8s-pod-network.98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.490 [INFO][4728] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:55.495117 containerd[1464]: 2024-10-09 07:53:55.493 [INFO][4722] k8s.go 621: Teardown processing complete. 
ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:55.495117 containerd[1464]: time="2024-10-09T07:53:55.495031117Z" level=info msg="TearDown network for sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\" successfully" Oct 9 07:53:55.495117 containerd[1464]: time="2024-10-09T07:53:55.495079415Z" level=info msg="StopPodSandbox for \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\" returns successfully" Oct 9 07:53:55.496945 containerd[1464]: time="2024-10-09T07:53:55.495640017Z" level=info msg="RemovePodSandbox for \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\"" Oct 9 07:53:55.496945 containerd[1464]: time="2024-10-09T07:53:55.495679344Z" level=info msg="Forcibly stopping sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\"" Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.582 [WARNING][4746] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b5519e13-c682-44c5-8276-45bab21b54a1", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"c53e185e9dc8444581e2ae0c75d4669a251f5db778699b9598db47771d543f06", Pod:"coredns-7db6d8ff4d-2qx8t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f64fb5a44d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.585 [INFO][4746] k8s.go 608: 
Cleaning up netns ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.585 [INFO][4746] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" iface="eth0" netns="" Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.585 [INFO][4746] k8s.go 615: Releasing IP address(es) ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.585 [INFO][4746] utils.go 188: Calico CNI releasing IP address ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.621 [INFO][4754] ipam_plugin.go 417: Releasing address using handleID ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" HandleID="k8s-pod-network.98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.621 [INFO][4754] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.621 [INFO][4754] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.629 [WARNING][4754] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" HandleID="k8s-pod-network.98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.629 [INFO][4754] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" HandleID="k8s-pod-network.98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--2qx8t-eth0" Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.632 [INFO][4754] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:55.638782 containerd[1464]: 2024-10-09 07:53:55.634 [INFO][4746] k8s.go 621: Teardown processing complete. ContainerID="98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c" Oct 9 07:53:55.638782 containerd[1464]: time="2024-10-09T07:53:55.638015787Z" level=info msg="TearDown network for sandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\" successfully" Oct 9 07:53:55.651317 containerd[1464]: time="2024-10-09T07:53:55.651099559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 07:53:55.651317 containerd[1464]: time="2024-10-09T07:53:55.651192709Z" level=info msg="RemovePodSandbox \"98e74451f814fb8027430440f89892090d8803da123a209eb056523a1d24fd2c\" returns successfully" Oct 9 07:53:55.652383 containerd[1464]: time="2024-10-09T07:53:55.652325940Z" level=info msg="StopPodSandbox for \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\"" Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.704 [WARNING][4775] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b16fd7a4-7278-47f9-ac26-d1aa8683b5a6", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2", Pod:"coredns-7db6d8ff4d-lcc6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida9e10be1fd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.705 [INFO][4775] k8s.go 608: Cleaning up netns ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.705 [INFO][4775] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" iface="eth0" netns="" Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.705 [INFO][4775] k8s.go 615: Releasing IP address(es) ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.705 [INFO][4775] utils.go 188: Calico CNI releasing IP address ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.738 [INFO][4781] ipam_plugin.go 417: Releasing address using handleID ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" HandleID="k8s-pod-network.78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.738 [INFO][4781] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.738 [INFO][4781] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.746 [WARNING][4781] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" HandleID="k8s-pod-network.78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.746 [INFO][4781] ipam_plugin.go 445: Releasing address using workloadID ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" HandleID="k8s-pod-network.78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.749 [INFO][4781] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:55.753215 containerd[1464]: 2024-10-09 07:53:55.751 [INFO][4775] k8s.go 621: Teardown processing complete. 
ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:55.753215 containerd[1464]: time="2024-10-09T07:53:55.753201496Z" level=info msg="TearDown network for sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\" successfully" Oct 9 07:53:55.754811 containerd[1464]: time="2024-10-09T07:53:55.753234655Z" level=info msg="StopPodSandbox for \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\" returns successfully" Oct 9 07:53:55.754811 containerd[1464]: time="2024-10-09T07:53:55.753913836Z" level=info msg="RemovePodSandbox for \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\"" Oct 9 07:53:55.754811 containerd[1464]: time="2024-10-09T07:53:55.753955771Z" level=info msg="Forcibly stopping sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\"" Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.803 [WARNING][4800] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b16fd7a4-7278-47f9-ac26-d1aa8683b5a6", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"a7133b68f7ea418331ac097e68119f5a1473cbcfb0c6dac94d7055663cbae0c2", Pod:"coredns-7db6d8ff4d-lcc6p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.41.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida9e10be1fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.804 [INFO][4800] k8s.go 608: 
Cleaning up netns ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.804 [INFO][4800] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" iface="eth0" netns="" Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.804 [INFO][4800] k8s.go 615: Releasing IP address(es) ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.804 [INFO][4800] utils.go 188: Calico CNI releasing IP address ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.833 [INFO][4806] ipam_plugin.go 417: Releasing address using handleID ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" HandleID="k8s-pod-network.78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.834 [INFO][4806] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.834 [INFO][4806] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.841 [WARNING][4806] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" HandleID="k8s-pod-network.78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.841 [INFO][4806] ipam_plugin.go 445: Releasing address using workloadID ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" HandleID="k8s-pod-network.78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Workload="ci--4081.1.0--0--871bb8dd75-k8s-coredns--7db6d8ff4d--lcc6p-eth0" Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.843 [INFO][4806] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:53:55.848298 containerd[1464]: 2024-10-09 07:53:55.846 [INFO][4800] k8s.go 621: Teardown processing complete. ContainerID="78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb" Oct 9 07:53:55.849427 containerd[1464]: time="2024-10-09T07:53:55.848317546Z" level=info msg="TearDown network for sandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\" successfully" Oct 9 07:53:55.853674 containerd[1464]: time="2024-10-09T07:53:55.853593119Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 07:53:55.853850 containerd[1464]: time="2024-10-09T07:53:55.853766428Z" level=info msg="RemovePodSandbox \"78385f67ca4c8b7d8ad8d51de6125fcc4439b45bc8afdeafd1cc981530d20ccb\" returns successfully" Oct 9 07:53:56.935278 systemd[1]: run-containerd-runc-k8s.io-bec8cbee2bf6aaf0b01e672233927051b8b95845b87ec4404eda10b617eed423-runc.TRmh7z.mount: Deactivated successfully. 
Oct 9 07:53:58.204690 systemd[1]: Started sshd@13-209.38.129.97:22-139.178.89.65:44078.service - OpenSSH per-connection server daemon (139.178.89.65:44078). Oct 9 07:53:58.289557 sshd[4851]: Accepted publickey for core from 139.178.89.65 port 44078 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:53:58.331800 sshd[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:53:58.340506 systemd-logind[1446]: New session 14 of user core. Oct 9 07:53:58.346402 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 07:53:58.824439 sshd[4851]: pam_unix(sshd:session): session closed for user core Oct 9 07:53:58.829862 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Oct 9 07:53:58.831122 systemd[1]: sshd@13-209.38.129.97:22-139.178.89.65:44078.service: Deactivated successfully. Oct 9 07:53:58.834552 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 07:53:58.837462 systemd-logind[1446]: Removed session 14. Oct 9 07:54:02.177114 kubelet[2562]: I1009 07:54:02.176106 2562 topology_manager.go:215] "Topology Admit Handler" podUID="34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9" podNamespace="calico-apiserver" podName="calico-apiserver-84d95987f-xr7q8" Oct 9 07:54:02.214757 systemd[1]: Created slice kubepods-besteffort-pod34a3b1c3_587b_47f0_8bc8_5fbd95b7fbc9.slice - libcontainer container kubepods-besteffort-pod34a3b1c3_587b_47f0_8bc8_5fbd95b7fbc9.slice. 
Oct 9 07:54:02.314532 kubelet[2562]: I1009 07:54:02.314253 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9-calico-apiserver-certs\") pod \"calico-apiserver-84d95987f-xr7q8\" (UID: \"34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9\") " pod="calico-apiserver/calico-apiserver-84d95987f-xr7q8" Oct 9 07:54:02.314532 kubelet[2562]: I1009 07:54:02.314402 2562 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6fv\" (UniqueName: \"kubernetes.io/projected/34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9-kube-api-access-cd6fv\") pod \"calico-apiserver-84d95987f-xr7q8\" (UID: \"34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9\") " pod="calico-apiserver/calico-apiserver-84d95987f-xr7q8" Oct 9 07:54:02.418175 kubelet[2562]: E1009 07:54:02.417026 2562 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 07:54:02.431933 kubelet[2562]: E1009 07:54:02.430413 2562 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9-calico-apiserver-certs podName:34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9 nodeName:}" failed. No retries permitted until 2024-10-09 07:54:02.917186351 +0000 UTC m=+68.172723472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9-calico-apiserver-certs") pod "calico-apiserver-84d95987f-xr7q8" (UID: "34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9") : secret "calico-apiserver-certs" not found Oct 9 07:54:03.127080 containerd[1464]: time="2024-10-09T07:54:03.126984597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d95987f-xr7q8,Uid:34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9,Namespace:calico-apiserver,Attempt:0,}" Oct 9 07:54:03.420891 systemd-networkd[1371]: cali9c8371ee6e9: Link UP Oct 9 07:54:03.423424 systemd-networkd[1371]: cali9c8371ee6e9: Gained carrier Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.237 [INFO][4880] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0 calico-apiserver-84d95987f- calico-apiserver 34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9 1044 0 2024-10-09 07:54:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84d95987f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.1.0-0-871bb8dd75 calico-apiserver-84d95987f-xr7q8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9c8371ee6e9 [] []}} ContainerID="583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" Namespace="calico-apiserver" Pod="calico-apiserver-84d95987f-xr7q8" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.238 [INFO][4880] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" Namespace="calico-apiserver" Pod="calico-apiserver-84d95987f-xr7q8" 
WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.319 [INFO][4891] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" HandleID="k8s-pod-network.583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.333 [INFO][4891] ipam_plugin.go 270: Auto assigning IP ContainerID="583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" HandleID="k8s-pod-network.583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319d30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.1.0-0-871bb8dd75", "pod":"calico-apiserver-84d95987f-xr7q8", "timestamp":"2024-10-09 07:54:03.31931016 +0000 UTC"}, Hostname:"ci-4081.1.0-0-871bb8dd75", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.333 [INFO][4891] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.333 [INFO][4891] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.333 [INFO][4891] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.1.0-0-871bb8dd75' Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.339 [INFO][4891] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.353 [INFO][4891] ipam.go 372: Looking up existing affinities for host host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.367 [INFO][4891] ipam.go 489: Trying affinity for 192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.372 [INFO][4891] ipam.go 155: Attempting to load block cidr=192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.377 [INFO][4891] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.41.0/26 host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.377 [INFO][4891] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.41.0/26 handle="k8s-pod-network.583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.381 [INFO][4891] ipam.go 1685: Creating new handle: k8s-pod-network.583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33 Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.389 [INFO][4891] ipam.go 1203: Writing block in order to claim IPs block=192.168.41.0/26 handle="k8s-pod-network.583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.401 [INFO][4891] ipam.go 1216: Successfully claimed IPs: [192.168.41.5/26] block=192.168.41.0/26 
handle="k8s-pod-network.583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.401 [INFO][4891] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.41.5/26] handle="k8s-pod-network.583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" host="ci-4081.1.0-0-871bb8dd75" Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.401 [INFO][4891] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 07:54:03.458685 containerd[1464]: 2024-10-09 07:54:03.401 [INFO][4891] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.41.5/26] IPv6=[] ContainerID="583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" HandleID="k8s-pod-network.583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" Workload="ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0" Oct 9 07:54:03.461757 containerd[1464]: 2024-10-09 07:54:03.410 [INFO][4880] k8s.go 386: Populated endpoint ContainerID="583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" Namespace="calico-apiserver" Pod="calico-apiserver-84d95987f-xr7q8" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0", GenerateName:"calico-apiserver-84d95987f-", Namespace:"calico-apiserver", SelfLink:"", UID:"34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d95987f", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"", Pod:"calico-apiserver-84d95987f-xr7q8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c8371ee6e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:03.461757 containerd[1464]: 2024-10-09 07:54:03.410 [INFO][4880] k8s.go 387: Calico CNI using IPs: [192.168.41.5/32] ContainerID="583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" Namespace="calico-apiserver" Pod="calico-apiserver-84d95987f-xr7q8" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0" Oct 9 07:54:03.461757 containerd[1464]: 2024-10-09 07:54:03.410 [INFO][4880] dataplane_linux.go 68: Setting the host side veth name to cali9c8371ee6e9 ContainerID="583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" Namespace="calico-apiserver" Pod="calico-apiserver-84d95987f-xr7q8" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0" Oct 9 07:54:03.461757 containerd[1464]: 2024-10-09 07:54:03.424 [INFO][4880] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" Namespace="calico-apiserver" Pod="calico-apiserver-84d95987f-xr7q8" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0" Oct 9 07:54:03.461757 containerd[1464]: 2024-10-09 07:54:03.426 
[INFO][4880] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" Namespace="calico-apiserver" Pod="calico-apiserver-84d95987f-xr7q8" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0", GenerateName:"calico-apiserver-84d95987f-", Namespace:"calico-apiserver", SelfLink:"", UID:"34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 7, 54, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d95987f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.1.0-0-871bb8dd75", ContainerID:"583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33", Pod:"calico-apiserver-84d95987f-xr7q8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.41.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9c8371ee6e9", MAC:"ea:7f:44:01:78:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 07:54:03.461757 containerd[1464]: 2024-10-09 07:54:03.442 [INFO][4880] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33" Namespace="calico-apiserver" Pod="calico-apiserver-84d95987f-xr7q8" WorkloadEndpoint="ci--4081.1.0--0--871bb8dd75-k8s-calico--apiserver--84d95987f--xr7q8-eth0" Oct 9 07:54:03.540579 containerd[1464]: time="2024-10-09T07:54:03.538658307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 07:54:03.540579 containerd[1464]: time="2024-10-09T07:54:03.538722355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 07:54:03.540579 containerd[1464]: time="2024-10-09T07:54:03.538733583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:03.540579 containerd[1464]: time="2024-10-09T07:54:03.538838989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 07:54:03.585420 systemd[1]: Started cri-containerd-583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33.scope - libcontainer container 583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33. Oct 9 07:54:03.663244 containerd[1464]: time="2024-10-09T07:54:03.662958155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d95987f-xr7q8,Uid:34a3b1c3-587b-47f0-8bc8-5fbd95b7fbc9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33\"" Oct 9 07:54:03.676871 containerd[1464]: time="2024-10-09T07:54:03.676582025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 07:54:03.851638 systemd[1]: Started sshd@14-209.38.129.97:22-139.178.89.65:44082.service - OpenSSH per-connection server daemon (139.178.89.65:44082). 
Oct 9 07:54:03.927641 sshd[4958]: Accepted publickey for core from 139.178.89.65 port 44082 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:03.930327 sshd[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:03.938437 systemd-logind[1446]: New session 15 of user core. Oct 9 07:54:03.944651 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 07:54:04.344483 sshd[4958]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:04.351632 systemd[1]: sshd@14-209.38.129.97:22-139.178.89.65:44082.service: Deactivated successfully. Oct 9 07:54:04.354295 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 07:54:04.355146 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Oct 9 07:54:04.356557 systemd-logind[1446]: Removed session 15. Oct 9 07:54:04.790564 systemd-networkd[1371]: cali9c8371ee6e9: Gained IPv6LL Oct 9 07:54:05.937974 kubelet[2562]: E1009 07:54:05.937896 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:06.216703 containerd[1464]: time="2024-10-09T07:54:06.216315957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:06.218410 containerd[1464]: time="2024-10-09T07:54:06.218003867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 07:54:06.220335 containerd[1464]: time="2024-10-09T07:54:06.220246536Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:06.227720 containerd[1464]: time="2024-10-09T07:54:06.227626610Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 07:54:06.229570 containerd[1464]: time="2024-10-09T07:54:06.229293496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.552650651s" Oct 9 07:54:06.229570 containerd[1464]: time="2024-10-09T07:54:06.229350566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 07:54:06.237538 containerd[1464]: time="2024-10-09T07:54:06.237479701Z" level=info msg="CreateContainer within sandbox \"583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 07:54:06.266739 containerd[1464]: time="2024-10-09T07:54:06.266676906Z" level=info msg="CreateContainer within sandbox \"583ef5669333c8b3273534d9c5ee3c7338086f30d540cab03ab348406ccbcf33\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5caa3b4dd55f983156821fc169d59f2409a26902db719288f03163dba3357908\"" Oct 9 07:54:06.271201 containerd[1464]: time="2024-10-09T07:54:06.268495099Z" level=info msg="StartContainer for \"5caa3b4dd55f983156821fc169d59f2409a26902db719288f03163dba3357908\"" Oct 9 07:54:06.270274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1721204508.mount: Deactivated successfully. 
Oct 9 07:54:06.340680 systemd[1]: Started cri-containerd-5caa3b4dd55f983156821fc169d59f2409a26902db719288f03163dba3357908.scope - libcontainer container 5caa3b4dd55f983156821fc169d59f2409a26902db719288f03163dba3357908. Oct 9 07:54:06.449723 containerd[1464]: time="2024-10-09T07:54:06.449508202Z" level=info msg="StartContainer for \"5caa3b4dd55f983156821fc169d59f2409a26902db719288f03163dba3357908\" returns successfully" Oct 9 07:54:07.463117 kubelet[2562]: I1009 07:54:07.461702 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-84d95987f-xr7q8" podStartSLOduration=2.903035335 podStartE2EDuration="5.461678829s" podCreationTimestamp="2024-10-09 07:54:02 +0000 UTC" firstStartedPulling="2024-10-09 07:54:03.674894403 +0000 UTC m=+68.930431536" lastFinishedPulling="2024-10-09 07:54:06.233537898 +0000 UTC m=+71.489075030" observedRunningTime="2024-10-09 07:54:07.46000989 +0000 UTC m=+72.715547029" watchObservedRunningTime="2024-10-09 07:54:07.461678829 +0000 UTC m=+72.717215973" Oct 9 07:54:07.939288 kubelet[2562]: E1009 07:54:07.939032 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:09.367586 systemd[1]: Started sshd@15-209.38.129.97:22-139.178.89.65:59026.service - OpenSSH per-connection server daemon (139.178.89.65:59026). Oct 9 07:54:09.483180 sshd[5024]: Accepted publickey for core from 139.178.89.65 port 59026 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:09.488709 sshd[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:09.499433 systemd-logind[1446]: New session 16 of user core. Oct 9 07:54:09.506457 systemd[1]: Started session-16.scope - Session 16 of User core. 
Oct 9 07:54:10.043976 sshd[5024]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:10.057965 systemd[1]: sshd@15-209.38.129.97:22-139.178.89.65:59026.service: Deactivated successfully. Oct 9 07:54:10.062939 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 07:54:10.066402 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Oct 9 07:54:10.074639 systemd[1]: Started sshd@16-209.38.129.97:22-139.178.89.65:59032.service - OpenSSH per-connection server daemon (139.178.89.65:59032). Oct 9 07:54:10.079829 systemd-logind[1446]: Removed session 16. Oct 9 07:54:10.137410 sshd[5039]: Accepted publickey for core from 139.178.89.65 port 59032 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:10.139986 sshd[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:10.148139 systemd-logind[1446]: New session 17 of user core. Oct 9 07:54:10.153387 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 07:54:10.748118 sshd[5039]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:10.761706 systemd[1]: sshd@16-209.38.129.97:22-139.178.89.65:59032.service: Deactivated successfully. Oct 9 07:54:10.767302 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 07:54:10.770806 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Oct 9 07:54:10.780906 systemd[1]: Started sshd@17-209.38.129.97:22-139.178.89.65:59036.service - OpenSSH per-connection server daemon (139.178.89.65:59036). Oct 9 07:54:10.783771 systemd-logind[1446]: Removed session 17. Oct 9 07:54:10.852864 sshd[5050]: Accepted publickey for core from 139.178.89.65 port 59036 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:10.855902 sshd[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:10.865255 systemd-logind[1446]: New session 18 of user core. 
Oct 9 07:54:10.870439 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 07:54:13.782443 sshd[5050]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:13.802500 systemd[1]: sshd@17-209.38.129.97:22-139.178.89.65:59036.service: Deactivated successfully. Oct 9 07:54:13.810777 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 07:54:13.814941 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Oct 9 07:54:13.829833 systemd[1]: Started sshd@18-209.38.129.97:22-139.178.89.65:59038.service - OpenSSH per-connection server daemon (139.178.89.65:59038). Oct 9 07:54:13.838080 systemd-logind[1446]: Removed session 18. Oct 9 07:54:13.938715 kubelet[2562]: E1009 07:54:13.938664 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 07:54:13.944998 sshd[5081]: Accepted publickey for core from 139.178.89.65 port 59038 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:13.950538 sshd[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:13.961692 systemd-logind[1446]: New session 19 of user core. Oct 9 07:54:13.967441 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 07:54:14.936058 sshd[5081]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:14.955154 systemd[1]: sshd@18-209.38.129.97:22-139.178.89.65:59038.service: Deactivated successfully. Oct 9 07:54:14.962132 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 07:54:14.965437 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Oct 9 07:54:14.989187 systemd[1]: Started sshd@19-209.38.129.97:22-139.178.89.65:59050.service - OpenSSH per-connection server daemon (139.178.89.65:59050). Oct 9 07:54:14.993327 systemd-logind[1446]: Removed session 19. 
Oct 9 07:54:15.042601 sshd[5092]: Accepted publickey for core from 139.178.89.65 port 59050 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:15.046854 sshd[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:15.055298 systemd-logind[1446]: New session 20 of user core. Oct 9 07:54:15.061371 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 07:54:15.243473 sshd[5092]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:15.250689 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Oct 9 07:54:15.250906 systemd[1]: sshd@19-209.38.129.97:22-139.178.89.65:59050.service: Deactivated successfully. Oct 9 07:54:15.254639 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 07:54:15.256652 systemd-logind[1446]: Removed session 20. Oct 9 07:54:20.268296 systemd[1]: Started sshd@20-209.38.129.97:22-139.178.89.65:50168.service - OpenSSH per-connection server daemon (139.178.89.65:50168). Oct 9 07:54:20.363742 sshd[5110]: Accepted publickey for core from 139.178.89.65 port 50168 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:20.368470 sshd[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:20.375720 systemd-logind[1446]: New session 21 of user core. Oct 9 07:54:20.380557 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 07:54:20.711704 sshd[5110]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:20.718708 systemd[1]: sshd@20-209.38.129.97:22-139.178.89.65:50168.service: Deactivated successfully. Oct 9 07:54:20.722417 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 07:54:20.723986 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. Oct 9 07:54:20.726265 systemd-logind[1446]: Removed session 21. 
Oct 9 07:54:25.731479 systemd[1]: Started sshd@21-209.38.129.97:22-139.178.89.65:45438.service - OpenSSH per-connection server daemon (139.178.89.65:45438). Oct 9 07:54:25.813010 sshd[5158]: Accepted publickey for core from 139.178.89.65 port 45438 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:25.815647 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:25.824178 systemd-logind[1446]: New session 22 of user core. Oct 9 07:54:25.829405 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 07:54:26.021929 sshd[5158]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:26.028337 systemd[1]: sshd@21-209.38.129.97:22-139.178.89.65:45438.service: Deactivated successfully. Oct 9 07:54:26.032734 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 07:54:26.038853 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Oct 9 07:54:26.040839 systemd-logind[1446]: Removed session 22. Oct 9 07:54:31.044421 systemd[1]: Started sshd@22-209.38.129.97:22-139.178.89.65:45440.service - OpenSSH per-connection server daemon (139.178.89.65:45440). Oct 9 07:54:31.093641 sshd[5191]: Accepted publickey for core from 139.178.89.65 port 45440 ssh2: RSA SHA256:nDg0UeSiwkxxSWtKMhQ+P+HuSx1Axr49vgnqaJCGl7o Oct 9 07:54:31.096323 sshd[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 07:54:31.105526 systemd-logind[1446]: New session 23 of user core. Oct 9 07:54:31.111369 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 07:54:31.327407 sshd[5191]: pam_unix(sshd:session): session closed for user core Oct 9 07:54:31.332197 systemd[1]: sshd@22-209.38.129.97:22-139.178.89.65:45440.service: Deactivated successfully. Oct 9 07:54:31.334612 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 07:54:31.337124 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. 
Oct 9 07:54:31.338512 systemd-logind[1446]: Removed session 23.