Oct 9 00:59:41.961686 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024
Oct 9 00:59:41.961717 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 00:59:41.961732 kernel: BIOS-provided physical RAM map:
Oct 9 00:59:41.961739 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 00:59:41.961769 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 00:59:41.961779 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 00:59:41.961793 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Oct 9 00:59:41.961800 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Oct 9 00:59:41.961807 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 00:59:41.961819 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 00:59:41.961831 kernel: NX (Execute Disable) protection: active
Oct 9 00:59:41.961841 kernel: APIC: Static calls initialized
Oct 9 00:59:41.961850 kernel: SMBIOS 2.8 present.
Oct 9 00:59:41.961862 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Oct 9 00:59:41.961879 kernel: Hypervisor detected: KVM
Oct 9 00:59:41.961892 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 00:59:41.961900 kernel: kvm-clock: using sched offset of 3115009950 cycles
Oct 9 00:59:41.961909 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 00:59:41.961917 kernel: tsc: Detected 2494.138 MHz processor
Oct 9 00:59:41.961926 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 00:59:41.961939 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 00:59:41.961950 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Oct 9 00:59:41.961964 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 00:59:41.961978 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 00:59:41.961990 kernel: ACPI: Early table checksum verification disabled
Oct 9 00:59:41.962002 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Oct 9 00:59:41.962014 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:59:41.962029 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:59:41.962043 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:59:41.962058 kernel: ACPI: FACS 0x000000007FFE0000 000040
Oct 9 00:59:41.962067 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:59:41.962075 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:59:41.962082 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:59:41.962093 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 00:59:41.962101 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Oct 9 00:59:41.962108 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Oct 9 00:59:41.962116 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Oct 9 00:59:41.962124 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Oct 9 00:59:41.962131 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Oct 9 00:59:41.962139 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Oct 9 00:59:41.962156 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Oct 9 00:59:41.962170 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 9 00:59:41.962185 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 9 00:59:41.962201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 9 00:59:41.962216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 9 00:59:41.962232 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Oct 9 00:59:41.962247 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Oct 9 00:59:41.962265 kernel: Zone ranges:
Oct 9 00:59:41.962273 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 00:59:41.962281 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Oct 9 00:59:41.962303 kernel: Normal empty
Oct 9 00:59:41.962427 kernel: Movable zone start for each node
Oct 9 00:59:41.962442 kernel: Early memory node ranges
Oct 9 00:59:41.962455 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 00:59:41.962463 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Oct 9 00:59:41.962471 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Oct 9 00:59:41.962486 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 00:59:41.962495 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 00:59:41.962503 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Oct 9 00:59:41.962512 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 00:59:41.962520 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 00:59:41.962528 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 00:59:41.962537 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 00:59:41.962545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 00:59:41.962554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 00:59:41.962566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 00:59:41.962574 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 00:59:41.962582 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 00:59:41.962590 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 00:59:41.962599 kernel: TSC deadline timer available
Oct 9 00:59:41.962607 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 9 00:59:41.962616 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 00:59:41.962624 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Oct 9 00:59:41.962632 kernel: Booting paravirtualized kernel on KVM
Oct 9 00:59:41.962646 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 00:59:41.962655 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 9 00:59:41.962663 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 9 00:59:41.962671 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 9 00:59:41.962679 kernel: pcpu-alloc: [0] 0 1
Oct 9 00:59:41.962687 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 9 00:59:41.962697 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 00:59:41.962706 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 00:59:41.962718 kernel: random: crng init done
Oct 9 00:59:41.962726 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 00:59:41.962735 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 9 00:59:41.962743 kernel: Fallback order for Node 0: 0
Oct 9 00:59:41.962752 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Oct 9 00:59:41.962760 kernel: Policy zone: DMA32
Oct 9 00:59:41.962768 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 00:59:41.962776 kernel: Memory: 1971188K/2096600K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 125152K reserved, 0K cma-reserved)
Oct 9 00:59:41.962785 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 9 00:59:41.962797 kernel: Kernel/User page tables isolation: enabled
Oct 9 00:59:41.962805 kernel: ftrace: allocating 37786 entries in 148 pages
Oct 9 00:59:41.962814 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 00:59:41.962822 kernel: Dynamic Preempt: voluntary
Oct 9 00:59:41.962830 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 00:59:41.962840 kernel: rcu: RCU event tracing is enabled.
Oct 9 00:59:41.962848 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 9 00:59:41.962857 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 00:59:41.962865 kernel: Rude variant of Tasks RCU enabled.
Oct 9 00:59:41.962877 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 00:59:41.962888 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 00:59:41.962896 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 9 00:59:41.962904 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 9 00:59:41.962913 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 00:59:41.962921 kernel: Console: colour VGA+ 80x25
Oct 9 00:59:41.962930 kernel: printk: console [tty0] enabled
Oct 9 00:59:41.962938 kernel: printk: console [ttyS0] enabled
Oct 9 00:59:41.962946 kernel: ACPI: Core revision 20230628
Oct 9 00:59:41.962955 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 00:59:41.962967 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 00:59:41.962976 kernel: x2apic enabled
Oct 9 00:59:41.962984 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 00:59:41.962992 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 00:59:41.963001 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Oct 9 00:59:41.963009 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Oct 9 00:59:41.963017 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 9 00:59:41.963025 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 9 00:59:41.963047 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 00:59:41.963056 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 00:59:41.963065 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 00:59:41.963077 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 00:59:41.963086 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 9 00:59:41.963095 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 00:59:41.963103 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 00:59:41.963112 kernel: MDS: Mitigation: Clear CPU buffers
Oct 9 00:59:41.963121 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 9 00:59:41.963134 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 00:59:41.963143 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 00:59:41.963152 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 00:59:41.963161 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 00:59:41.963170 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 9 00:59:41.963179 kernel: Freeing SMP alternatives memory: 32K
Oct 9 00:59:41.963188 kernel: pid_max: default: 32768 minimum: 301
Oct 9 00:59:41.963197 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 00:59:41.963209 kernel: landlock: Up and running.
Oct 9 00:59:41.963217 kernel: SELinux: Initializing.
Oct 9 00:59:41.963226 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 00:59:41.963235 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 00:59:41.963244 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Oct 9 00:59:41.963253 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 00:59:41.963271 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 00:59:41.963288 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 00:59:41.963297 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Oct 9 00:59:41.963829 kernel: signal: max sigframe size: 1776
Oct 9 00:59:41.963855 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 00:59:41.963867 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 00:59:41.963877 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 9 00:59:41.963887 kernel: smp: Bringing up secondary CPUs ...
Oct 9 00:59:41.963897 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 00:59:41.963908 kernel: .... node #0, CPUs: #1
Oct 9 00:59:41.963918 kernel: smp: Brought up 1 node, 2 CPUs
Oct 9 00:59:41.963928 kernel: smpboot: Max logical packages: 1
Oct 9 00:59:41.963946 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Oct 9 00:59:41.963957 kernel: devtmpfs: initialized
Oct 9 00:59:41.963967 kernel: x86/mm: Memory block size: 128MB
Oct 9 00:59:41.963977 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 00:59:41.963988 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 9 00:59:41.963998 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 00:59:41.964008 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 00:59:41.964018 kernel: audit: initializing netlink subsys (disabled)
Oct 9 00:59:41.964029 kernel: audit: type=2000 audit(1728435581.501:1): state=initialized audit_enabled=0 res=1
Oct 9 00:59:41.964044 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 00:59:41.964053 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 00:59:41.964063 kernel: cpuidle: using governor menu
Oct 9 00:59:41.964074 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 00:59:41.964084 kernel: dca service started, version 1.12.1
Oct 9 00:59:41.964094 kernel: PCI: Using configuration type 1 for base access
Oct 9 00:59:41.964104 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 00:59:41.964114 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 00:59:41.964124 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 00:59:41.964138 kernel: ACPI: Added _OSI(Module Device)
Oct 9 00:59:41.964148 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 00:59:41.964158 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 00:59:41.964168 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 00:59:41.964178 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 00:59:41.964188 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 00:59:41.964198 kernel: ACPI: Interpreter enabled
Oct 9 00:59:41.964208 kernel: ACPI: PM: (supports S0 S5)
Oct 9 00:59:41.964218 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 00:59:41.964231 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 00:59:41.964242 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 00:59:41.964252 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 9 00:59:41.964262 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 00:59:41.964534 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 00:59:41.964657 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Oct 9 00:59:41.964770 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Oct 9 00:59:41.964790 kernel: acpiphp: Slot [3] registered
Oct 9 00:59:41.964801 kernel: acpiphp: Slot [4] registered
Oct 9 00:59:41.964812 kernel: acpiphp: Slot [5] registered
Oct 9 00:59:41.964822 kernel: acpiphp: Slot [6] registered
Oct 9 00:59:41.964832 kernel: acpiphp: Slot [7] registered
Oct 9 00:59:41.964842 kernel: acpiphp: Slot [8] registered
Oct 9 00:59:41.964852 kernel: acpiphp: Slot [9] registered
Oct 9 00:59:41.964862 kernel: acpiphp: Slot [10] registered
Oct 9 00:59:41.964872 kernel: acpiphp: Slot [11] registered
Oct 9 00:59:41.964887 kernel: acpiphp: Slot [12] registered
Oct 9 00:59:41.964897 kernel: acpiphp: Slot [13] registered
Oct 9 00:59:41.964907 kernel: acpiphp: Slot [14] registered
Oct 9 00:59:41.964918 kernel: acpiphp: Slot [15] registered
Oct 9 00:59:41.964928 kernel: acpiphp: Slot [16] registered
Oct 9 00:59:41.964938 kernel: acpiphp: Slot [17] registered
Oct 9 00:59:41.964949 kernel: acpiphp: Slot [18] registered
Oct 9 00:59:41.964959 kernel: acpiphp: Slot [19] registered
Oct 9 00:59:41.964969 kernel: acpiphp: Slot [20] registered
Oct 9 00:59:41.964980 kernel: acpiphp: Slot [21] registered
Oct 9 00:59:41.964995 kernel: acpiphp: Slot [22] registered
Oct 9 00:59:41.965005 kernel: acpiphp: Slot [23] registered
Oct 9 00:59:41.965015 kernel: acpiphp: Slot [24] registered
Oct 9 00:59:41.965025 kernel: acpiphp: Slot [25] registered
Oct 9 00:59:41.965036 kernel: acpiphp: Slot [26] registered
Oct 9 00:59:41.965046 kernel: acpiphp: Slot [27] registered
Oct 9 00:59:41.965056 kernel: acpiphp: Slot [28] registered
Oct 9 00:59:41.965066 kernel: acpiphp: Slot [29] registered
Oct 9 00:59:41.965076 kernel: acpiphp: Slot [30] registered
Oct 9 00:59:41.965090 kernel: acpiphp: Slot [31] registered
Oct 9 00:59:41.965100 kernel: PCI host bridge to bus 0000:00
Oct 9 00:59:41.965231 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 00:59:41.965527 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 00:59:41.965642 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 00:59:41.965742 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Oct 9 00:59:41.965846 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Oct 9 00:59:41.965934 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 00:59:41.966072 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 9 00:59:41.966222 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Oct 9 00:59:41.968632 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Oct 9 00:59:41.968789 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Oct 9 00:59:41.968893 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Oct 9 00:59:41.968991 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Oct 9 00:59:41.969100 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Oct 9 00:59:41.969201 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Oct 9 00:59:41.969353 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Oct 9 00:59:41.969459 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Oct 9 00:59:41.969569 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 9 00:59:41.969671 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Oct 9 00:59:41.969780 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Oct 9 00:59:41.969908 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Oct 9 00:59:41.970041 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Oct 9 00:59:41.970202 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 9 00:59:41.972505 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Oct 9 00:59:41.972716 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Oct 9 00:59:41.972872 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 00:59:41.973094 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 9 00:59:41.973233 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Oct 9 00:59:41.973458 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Oct 9 00:59:41.973609 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 9 00:59:41.973766 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 9 00:59:41.973887 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Oct 9 00:59:41.974033 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Oct 9 00:59:41.974196 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 9 00:59:41.974427 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Oct 9 00:59:41.974535 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Oct 9 00:59:41.974634 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Oct 9 00:59:41.974733 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 9 00:59:41.974844 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Oct 9 00:59:41.974948 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Oct 9 00:59:41.975061 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Oct 9 00:59:41.975165 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 9 00:59:41.975281 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Oct 9 00:59:41.977593 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Oct 9 00:59:41.977787 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Oct 9 00:59:41.977903 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 9 00:59:41.978021 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Oct 9 00:59:41.978187 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Oct 9 00:59:41.978375 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Oct 9 00:59:41.978399 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 00:59:41.978415 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 00:59:41.978426 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 00:59:41.978439 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 00:59:41.978453 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 9 00:59:41.978475 kernel: iommu: Default domain type: Translated
Oct 9 00:59:41.978488 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 00:59:41.978502 kernel: PCI: Using ACPI for IRQ routing
Oct 9 00:59:41.978515 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 00:59:41.978528 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 00:59:41.978540 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Oct 9 00:59:41.978713 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 9 00:59:41.978852 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 9 00:59:41.978968 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 00:59:41.978986 kernel: vgaarb: loaded
Oct 9 00:59:41.979000 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 00:59:41.979012 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 00:59:41.979026 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 00:59:41.979039 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 00:59:41.979053 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 00:59:41.979066 kernel: pnp: PnP ACPI init
Oct 9 00:59:41.979081 kernel: pnp: PnP ACPI: found 4 devices
Oct 9 00:59:41.979102 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 00:59:41.979115 kernel: NET: Registered PF_INET protocol family
Oct 9 00:59:41.979129 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 00:59:41.979142 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 9 00:59:41.979155 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 00:59:41.979168 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 9 00:59:41.979181 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 9 00:59:41.979194 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 9 00:59:41.979207 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 00:59:41.979225 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 00:59:41.979239 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 00:59:41.979253 kernel: NET: Registered PF_XDP protocol family
Oct 9 00:59:41.981557 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 00:59:41.981756 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 00:59:41.981901 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 00:59:41.982025 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Oct 9 00:59:41.982148 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Oct 9 00:59:41.982349 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 9 00:59:41.982518 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 9 00:59:41.982538 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 9 00:59:41.982694 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 28804 usecs
Oct 9 00:59:41.982715 kernel: PCI: CLS 0 bytes, default 64
Oct 9 00:59:41.982728 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 9 00:59:41.982742 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Oct 9 00:59:41.982754 kernel: Initialise system trusted keyrings
Oct 9 00:59:41.982779 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 9 00:59:41.982794 kernel: Key type asymmetric registered
Oct 9 00:59:41.982808 kernel: Asymmetric key parser 'x509' registered
Oct 9 00:59:41.982822 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 00:59:41.982835 kernel: io scheduler mq-deadline registered
Oct 9 00:59:41.982849 kernel: io scheduler kyber registered
Oct 9 00:59:41.982862 kernel: io scheduler bfq registered
Oct 9 00:59:41.982875 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 00:59:41.982889 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 9 00:59:41.982903 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 9 00:59:41.982922 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 9 00:59:41.982935 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 00:59:41.982948 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 00:59:41.982962 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 00:59:41.982975 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 00:59:41.982989 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 00:59:41.983002 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 00:59:41.983209 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 9 00:59:41.985464 kernel: rtc_cmos 00:03: registered as rtc0
Oct 9 00:59:41.985631 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T00:59:41 UTC (1728435581)
Oct 9 00:59:41.985771 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 9 00:59:41.985789 kernel: intel_pstate: CPU model not supported
Oct 9 00:59:41.985804 kernel: NET: Registered PF_INET6 protocol family
Oct 9 00:59:41.985819 kernel: Segment Routing with IPv6
Oct 9 00:59:41.985832 kernel: In-situ OAM (IOAM) with IPv6
Oct 9 00:59:41.985846 kernel: NET: Registered PF_PACKET protocol family
Oct 9 00:59:41.985871 kernel: Key type dns_resolver registered
Oct 9 00:59:41.985884 kernel: IPI shorthand broadcast: enabled
Oct 9 00:59:41.985898 kernel: sched_clock: Marking stable (956006521, 95420494)->(1078674034, -27247019)
Oct 9 00:59:41.985911 kernel: registered taskstats version 1
Oct 9 00:59:41.985925 kernel: Loading compiled-in X.509 certificates
Oct 9 00:59:41.985940 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6'
Oct 9 00:59:41.985953 kernel: Key type .fscrypt registered
Oct 9 00:59:41.985962 kernel: Key type fscrypt-provisioning registered
Oct 9 00:59:41.985971 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 9 00:59:41.985986 kernel: ima: Allocated hash algorithm: sha1
Oct 9 00:59:41.985995 kernel: ima: No architecture policies found
Oct 9 00:59:41.986004 kernel: clk: Disabling unused clocks
Oct 9 00:59:41.986014 kernel: Freeing unused kernel image (initmem) memory: 42872K
Oct 9 00:59:41.986023 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 00:59:41.986063 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Oct 9 00:59:41.986084 kernel: Run /init as init process
Oct 9 00:59:41.986098 kernel: with arguments:
Oct 9 00:59:41.986113 kernel: /init
Oct 9 00:59:41.986129 kernel: with environment:
Oct 9 00:59:41.986142 kernel: HOME=/
Oct 9 00:59:41.986160 kernel: TERM=linux
Oct 9 00:59:41.986173 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 00:59:41.986193 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 00:59:41.986210 systemd[1]: Detected virtualization kvm.
Oct 9 00:59:41.986226 systemd[1]: Detected architecture x86-64.
Oct 9 00:59:41.986239 systemd[1]: Running in initrd.
Oct 9 00:59:41.986258 systemd[1]: No hostname configured, using default hostname.
Oct 9 00:59:41.986272 systemd[1]: Hostname set to .
Oct 9 00:59:41.986286 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 00:59:41.986380 systemd[1]: Queued start job for default target initrd.target.
Oct 9 00:59:41.986397 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 00:59:41.986411 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 00:59:41.986431 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 00:59:41.986448 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 00:59:41.986471 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 00:59:41.986486 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 00:59:41.986502 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 00:59:41.986518 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 00:59:41.986532 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 00:59:41.986550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 00:59:41.986572 systemd[1]: Reached target paths.target - Path Units.
Oct 9 00:59:41.986583 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 00:59:41.986593 systemd[1]: Reached target swap.target - Swaps.
Oct 9 00:59:41.986608 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 00:59:41.986618 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 00:59:41.986628 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 00:59:41.986642 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 00:59:41.986653 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 00:59:41.986663 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 00:59:41.986673 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 00:59:41.986683 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 00:59:41.986693 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 00:59:41.986703 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 00:59:41.986713 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 00:59:41.986726 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 00:59:41.986736 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 00:59:41.986746 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 00:59:41.986756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 00:59:41.986766 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:59:41.986776 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 00:59:41.986786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 00:59:41.986796 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 00:59:41.986852 systemd-journald[183]: Collecting audit messages is disabled.
Oct 9 00:59:41.986887 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 00:59:41.986903 systemd-journald[183]: Journal started
Oct 9 00:59:41.986929 systemd-journald[183]: Runtime Journal (/run/log/journal/049122f36e5741ffb7e1796ffac4610c) is 4.9M, max 39.3M, 34.4M free.
Oct 9 00:59:41.991354 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 00:59:41.960382 systemd-modules-load[184]: Inserted module 'overlay'
Oct 9 00:59:41.995406 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 00:59:42.033333 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 00:59:42.033384 kernel: Bridge firewalling registered
Oct 9 00:59:42.001440 systemd-modules-load[184]: Inserted module 'br_netfilter'
Oct 9 00:59:42.031390 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 00:59:42.034138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:59:42.043597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 00:59:42.045517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 00:59:42.048616 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 00:59:42.052555 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 00:59:42.074551 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 00:59:42.078584 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 00:59:42.082624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 00:59:42.088576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 00:59:42.090677 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:59:42.097932 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 00:59:42.119334 dracut-cmdline[219]: dracut-dracut-053
Oct 9 00:59:42.123556 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 00:59:42.129680 systemd-resolved[216]: Positive Trust Anchors:
Oct 9 00:59:42.129696 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 00:59:42.129733 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 00:59:42.133651 systemd-resolved[216]: Defaulting to hostname 'linux'.
Oct 9 00:59:42.134987 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 00:59:42.135695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 00:59:42.224404 kernel: SCSI subsystem initialized
Oct 9 00:59:42.235368 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 00:59:42.247370 kernel: iscsi: registered transport (tcp)
Oct 9 00:59:42.270400 kernel: iscsi: registered transport (qla4xxx)
Oct 9 00:59:42.270524 kernel: QLogic iSCSI HBA Driver
Oct 9 00:59:42.324919 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 00:59:42.330703 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 00:59:42.363552 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 00:59:42.363653 kernel: device-mapper: uevent: version 1.0.3
Oct 9 00:59:42.365108 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 00:59:42.412394 kernel: raid6: avx2x4 gen() 24093 MB/s
Oct 9 00:59:42.429393 kernel: raid6: avx2x2 gen() 23663 MB/s
Oct 9 00:59:42.446627 kernel: raid6: avx2x1 gen() 19683 MB/s
Oct 9 00:59:42.446751 kernel: raid6: using algorithm avx2x4 gen() 24093 MB/s
Oct 9 00:59:42.464763 kernel: raid6: .... xor() 7879 MB/s, rmw enabled
Oct 9 00:59:42.464889 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 00:59:42.488365 kernel: xor: automatically using best checksumming function avx
Oct 9 00:59:42.653376 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 00:59:42.668471 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 00:59:42.674675 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 00:59:42.700622 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Oct 9 00:59:42.707012 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 00:59:42.713560 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 00:59:42.742265 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Oct 9 00:59:42.785443 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 00:59:42.790680 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 00:59:42.867136 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 00:59:42.876925 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 00:59:42.907848 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 00:59:42.910215 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 00:59:42.911577 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 00:59:42.913090 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 00:59:42.920628 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 00:59:42.956413 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 00:59:42.991361 kernel: scsi host0: Virtio SCSI HBA
Oct 9 00:59:43.005349 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Oct 9 00:59:43.009343 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Oct 9 00:59:43.022035 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 00:59:43.022134 kernel: GPT:9289727 != 125829119
Oct 9 00:59:43.022157 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 00:59:43.022179 kernel: GPT:9289727 != 125829119
Oct 9 00:59:43.022198 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 00:59:43.022235 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:59:43.033343 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 00:59:43.043649 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 00:59:43.043745 kernel: AES CTR mode by8 optimization enabled
Oct 9 00:59:43.048143 kernel: ACPI: bus type USB registered
Oct 9 00:59:43.048247 kernel: usbcore: registered new interface driver usbfs
Oct 9 00:59:43.049779 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Oct 9 00:59:43.050549 kernel: usbcore: registered new interface driver hub
Oct 9 00:59:43.050598 kernel: virtio_blk virtio5: [vdb] 968 512-byte logical blocks (496 kB/484 KiB)
Oct 9 00:59:43.063512 kernel: usbcore: registered new device driver usb
Oct 9 00:59:43.098523 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (454)
Oct 9 00:59:43.118677 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460)
Oct 9 00:59:43.119763 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 9 00:59:43.123673 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 00:59:43.123833 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:59:43.130144 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 00:59:43.130718 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 00:59:43.130914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:59:43.131633 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:59:43.140677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:59:43.156206 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 9 00:59:43.161341 kernel: libata version 3.00 loaded.
Oct 9 00:59:43.167702 kernel: ata_piix 0000:00:01.1: version 2.13
Oct 9 00:59:43.174387 kernel: scsi host1: ata_piix
Oct 9 00:59:43.181459 kernel: scsi host2: ata_piix
Oct 9 00:59:43.181789 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Oct 9 00:59:43.179188 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 9 00:59:43.220773 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Oct 9 00:59:43.220809 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 9 00:59:43.221081 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 9 00:59:43.221204 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 9 00:59:43.221340 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Oct 9 00:59:43.221467 kernel: hub 1-0:1.0: USB hub found
Oct 9 00:59:43.221613 kernel: hub 1-0:1.0: 2 ports detected
Oct 9 00:59:43.221114 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 9 00:59:43.221873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:59:43.228528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 00:59:43.232582 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 00:59:43.239071 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 00:59:43.247129 disk-uuid[542]: Primary Header is updated.
Oct 9 00:59:43.247129 disk-uuid[542]: Secondary Entries is updated.
Oct 9 00:59:43.247129 disk-uuid[542]: Secondary Header is updated.
Oct 9 00:59:43.262761 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:59:43.265352 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:59:43.271353 kernel: GPT:disk_guids don't match.
Oct 9 00:59:43.271420 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 00:59:43.271434 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:59:43.289338 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:59:44.280413 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 9 00:59:44.280948 disk-uuid[543]: The operation has completed successfully.
Oct 9 00:59:44.327443 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 00:59:44.327594 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 00:59:44.341659 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 00:59:44.345448 sh[564]: Success
Oct 9 00:59:44.360630 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Oct 9 00:59:44.426983 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 00:59:44.444452 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 00:59:44.447193 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 00:59:44.470947 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377
Oct 9 00:59:44.471050 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 00:59:44.471071 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 00:59:44.472435 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 00:59:44.473477 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 00:59:44.484258 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 00:59:44.485618 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 00:59:44.491678 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 00:59:44.493277 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 00:59:44.512392 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 00:59:44.512515 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 00:59:44.512536 kernel: BTRFS info (device vda6): using free space tree
Oct 9 00:59:44.518452 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 00:59:44.535989 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 00:59:44.535626 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 00:59:44.545238 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 00:59:44.558490 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 00:59:44.679461 ignition[662]: Ignition 2.19.0
Oct 9 00:59:44.679473 ignition[662]: Stage: fetch-offline
Oct 9 00:59:44.679514 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Oct 9 00:59:44.679524 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 00:59:44.679636 ignition[662]: parsed url from cmdline: ""
Oct 9 00:59:44.682798 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 00:59:44.679642 ignition[662]: no config URL provided
Oct 9 00:59:44.679650 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 00:59:44.679660 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Oct 9 00:59:44.679667 ignition[662]: failed to fetch config: resource requires networking
Oct 9 00:59:44.679894 ignition[662]: Ignition finished successfully
Oct 9 00:59:44.712042 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 00:59:44.718684 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 00:59:44.757421 systemd-networkd[755]: lo: Link UP
Oct 9 00:59:44.757433 systemd-networkd[755]: lo: Gained carrier
Oct 9 00:59:44.760045 systemd-networkd[755]: Enumeration completed
Oct 9 00:59:44.760518 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 9 00:59:44.760523 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Oct 9 00:59:44.760681 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 00:59:44.761475 systemd-networkd[755]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:59:44.761480 systemd-networkd[755]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 00:59:44.761685 systemd[1]: Reached target network.target - Network.
Oct 9 00:59:44.762138 systemd-networkd[755]: eth0: Link UP
Oct 9 00:59:44.762142 systemd-networkd[755]: eth0: Gained carrier
Oct 9 00:59:44.762152 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Oct 9 00:59:44.766817 systemd-networkd[755]: eth1: Link UP
Oct 9 00:59:44.766822 systemd-networkd[755]: eth1: Gained carrier
Oct 9 00:59:44.766839 systemd-networkd[755]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 00:59:44.769611 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 9 00:59:44.782475 systemd-networkd[755]: eth0: DHCPv4 address 143.110.225.158/20, gateway 143.110.224.1 acquired from 169.254.169.253
Oct 9 00:59:44.787437 systemd-networkd[755]: eth1: DHCPv4 address 10.124.0.17/20 acquired from 169.254.169.253
Oct 9 00:59:44.801598 ignition[757]: Ignition 2.19.0
Oct 9 00:59:44.801611 ignition[757]: Stage: fetch
Oct 9 00:59:44.801832 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Oct 9 00:59:44.801843 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 00:59:44.801970 ignition[757]: parsed url from cmdline: ""
Oct 9 00:59:44.801974 ignition[757]: no config URL provided
Oct 9 00:59:44.801979 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 00:59:44.801988 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Oct 9 00:59:44.802015 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Oct 9 00:59:44.818598 ignition[757]: GET result: OK
Oct 9 00:59:44.818793 ignition[757]: parsing config with SHA512: 61a7e1afebf132136f0a2b8fddaaffd96d84bb7e542c3c89b85062a8c02993e19b78ee170551a4a31b4314b9d6cec2d0005440fdf67587581e9bc6802470a053
Oct 9 00:59:44.825640 unknown[757]: fetched base config from "system"
Oct 9 00:59:44.825668 unknown[757]: fetched base config from "system"
Oct 9 00:59:44.826400 ignition[757]: fetch: fetch complete
Oct 9 00:59:44.825679 unknown[757]: fetched user config from "digitalocean"
Oct 9 00:59:44.826415 ignition[757]: fetch: fetch passed
Oct 9 00:59:44.826491 ignition[757]: Ignition finished successfully
Oct 9 00:59:44.828621 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 9 00:59:44.834630 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 00:59:44.868504 ignition[765]: Ignition 2.19.0
Oct 9 00:59:44.868514 ignition[765]: Stage: kargs
Oct 9 00:59:44.868720 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Oct 9 00:59:44.868733 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 00:59:44.870987 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 00:59:44.869568 ignition[765]: kargs: kargs passed
Oct 9 00:59:44.869624 ignition[765]: Ignition finished successfully
Oct 9 00:59:44.877594 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 00:59:44.902629 ignition[771]: Ignition 2.19.0
Oct 9 00:59:44.902645 ignition[771]: Stage: disks
Oct 9 00:59:44.902844 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Oct 9 00:59:44.902856 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 00:59:44.905647 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 00:59:44.903953 ignition[771]: disks: disks passed
Oct 9 00:59:44.904009 ignition[771]: Ignition finished successfully
Oct 9 00:59:44.911007 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 00:59:44.911503 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 00:59:44.912325 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 00:59:44.913091 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 00:59:44.913834 systemd[1]: Reached target basic.target - Basic System.
Oct 9 00:59:44.919596 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 00:59:44.940754 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 9 00:59:44.944235 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 00:59:44.950552 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 00:59:45.053341 kernel: EXT4-fs (vda9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none.
Oct 9 00:59:45.054559 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 00:59:45.056220 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 00:59:45.068549 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 00:59:45.071470 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 00:59:45.074607 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Oct 9 00:59:45.082679 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 9 00:59:45.090809 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787)
Oct 9 00:59:45.090848 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 00:59:45.090876 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 00:59:45.090893 kernel: BTRFS info (device vda6): using free space tree
Oct 9 00:59:45.084434 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 00:59:45.084503 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 00:59:45.095359 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 00:59:45.097004 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 00:59:45.100340 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 00:59:45.103225 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 00:59:45.193521 coreos-metadata[790]: Oct 09 00:59:45.193 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 00:59:45.202802 coreos-metadata[790]: Oct 09 00:59:45.202 INFO Fetch successful
Oct 9 00:59:45.205635 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 00:59:45.213879 coreos-metadata[789]: Oct 09 00:59:45.213 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 00:59:45.216413 coreos-metadata[790]: Oct 09 00:59:45.216 INFO wrote hostname ci-4116.0.0-c-50f1e82448 to /sysroot/etc/hostname
Oct 9 00:59:45.217537 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 00:59:45.220428 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Oct 9 00:59:45.223544 coreos-metadata[789]: Oct 09 00:59:45.223 INFO Fetch successful
Oct 9 00:59:45.229207 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 00:59:45.235339 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Oct 9 00:59:45.236423 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Oct 9 00:59:45.239190 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 00:59:45.368513 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 00:59:45.375521 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 00:59:45.378605 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 00:59:45.390362 kernel: BTRFS info (device vda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 00:59:45.415070 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 00:59:45.428645 ignition[908]: INFO : Ignition 2.19.0
Oct 9 00:59:45.430467 ignition[908]: INFO : Stage: mount
Oct 9 00:59:45.430467 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 00:59:45.430467 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 00:59:45.432618 ignition[908]: INFO : mount: mount passed
Oct 9 00:59:45.433186 ignition[908]: INFO : Ignition finished successfully
Oct 9 00:59:45.434569 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 00:59:45.440517 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 00:59:45.469911 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 00:59:45.480710 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 00:59:45.490614 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (921)
Oct 9 00:59:45.492538 kernel: BTRFS info (device vda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 00:59:45.492609 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 00:59:45.493425 kernel: BTRFS info (device vda6): using free space tree
Oct 9 00:59:45.498420 kernel: BTRFS info (device vda6): auto enabling async discard
Oct 9 00:59:45.500652 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 00:59:45.525385 ignition[937]: INFO : Ignition 2.19.0
Oct 9 00:59:45.525385 ignition[937]: INFO : Stage: files
Oct 9 00:59:45.526813 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 00:59:45.526813 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 00:59:45.526813 ignition[937]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 00:59:45.528514 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 00:59:45.528514 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 00:59:45.531892 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 00:59:45.532634 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 00:59:45.532634 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 00:59:45.532370 unknown[937]: wrote ssh authorized keys file for user: core
Oct 9 00:59:45.534700 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 00:59:45.534700 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 00:59:45.695989 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 9 00:59:45.767353 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 00:59:45.767353 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 00:59:45.768846 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 00:59:45.768846 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 00:59:45.768846 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 00:59:45.768846 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 00:59:45.768846 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 00:59:45.768846 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 00:59:45.768846 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 00:59:45.768846 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 00:59:45.768846 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 00:59:45.768846 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 00:59:45.775533 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 00:59:45.775533 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 00:59:45.775533 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Oct 9 00:59:46.088547 systemd-networkd[755]: eth0: Gained IPv6LL
Oct 9 00:59:46.238698 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 9 00:59:46.588734 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Oct 9 00:59:46.588734 ignition[937]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 9 00:59:46.590623 ignition[937]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 00:59:46.590623 ignition[937]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 00:59:46.590623 ignition[937]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 9 00:59:46.590623 ignition[937]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 00:59:46.590623 ignition[937]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 00:59:46.595059 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 00:59:46.595059 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 00:59:46.595059 ignition[937]: INFO : files: files passed
Oct 9 00:59:46.595059 ignition[937]: INFO : Ignition finished successfully
Oct 9 00:59:46.592971 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 00:59:46.599603 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 00:59:46.601515 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 00:59:46.610847 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 00:59:46.611506 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 00:59:46.620336 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:59:46.620336 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:59:46.623394 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 00:59:46.625053 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 00:59:46.626705 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 00:59:46.631559 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 00:59:46.664878 systemd-networkd[755]: eth1: Gained IPv6LL
Oct 9 00:59:46.678088 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 00:59:46.678439 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 00:59:46.679750 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 00:59:46.680622 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 00:59:46.681561 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 00:59:46.686607 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 00:59:46.717647 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 00:59:46.724577 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 00:59:46.739075 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 00:59:46.740507 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 00:59:46.741819 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 00:59:46.742725 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 00:59:46.742910 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 00:59:46.743907 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 00:59:46.744693 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 00:59:46.745518 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 00:59:46.746154 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 00:59:46.747222 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 00:59:46.747973 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 00:59:46.748701 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 00:59:46.749606 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 00:59:46.750399 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 00:59:46.751120 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 00:59:46.751825 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 00:59:46.752006 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 00:59:46.752913 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 00:59:46.753673 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 00:59:46.754538 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 00:59:46.754658 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 00:59:46.755297 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 00:59:46.755472 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 00:59:46.756463 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 00:59:46.756594 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 00:59:46.757664 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 00:59:46.757824 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 00:59:46.758545 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 9 00:59:46.758647 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 00:59:46.765784 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 00:59:46.766304 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 00:59:46.766554 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 00:59:46.769504 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 00:59:46.770633 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 00:59:46.772473 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 00:59:46.775558 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 00:59:46.775720 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 00:59:46.785794 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 00:59:46.785981 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 00:59:46.797744 ignition[991]: INFO : Ignition 2.19.0
Oct 9 00:59:46.798841 ignition[991]: INFO : Stage: umount
Oct 9 00:59:46.799621 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 00:59:46.800389 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Oct 9 00:59:46.802443 ignition[991]: INFO : umount: umount passed
Oct 9 00:59:46.802987 ignition[991]: INFO : Ignition finished successfully
Oct 9 00:59:46.805093 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 00:59:46.806193 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 00:59:46.831932 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 00:59:46.832415 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 00:59:46.832469 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 00:59:46.832899 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 00:59:46.832939 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 00:59:46.833732 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 9 00:59:46.833776 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 9 00:59:46.844853 systemd[1]: Stopped target network.target - Network.
Oct 9 00:59:46.845172 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 00:59:46.845264 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 00:59:46.845843 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 00:59:46.847555 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 00:59:46.847719 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 00:59:46.848614 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 00:59:46.849389 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 00:59:46.850114 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 00:59:46.850171 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 00:59:46.850988 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 00:59:46.851034 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 00:59:46.851635 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 00:59:46.851684 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 00:59:46.852408 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 00:59:46.852451 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 00:59:46.868387 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 00:59:46.870146 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 00:59:46.872404 systemd-networkd[755]: eth1: DHCPv6 lease lost
Oct 9 00:59:46.875296 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 00:59:46.875448 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 00:59:46.887103 systemd-networkd[755]: eth0: DHCPv6 lease lost
Oct 9 00:59:46.895045 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 00:59:46.895446 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 00:59:46.896692 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 00:59:46.896840 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 00:59:46.898901 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 00:59:46.899045 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 00:59:46.900011 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 00:59:46.900074 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 00:59:46.907603 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 00:59:46.908196 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 00:59:46.908328 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 00:59:46.908930 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 00:59:46.908984 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 00:59:46.909446 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 00:59:46.909493 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 00:59:46.910758 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 00:59:46.910809 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 00:59:46.911289 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 00:59:46.936168 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 00:59:46.936382 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 00:59:46.937466 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 00:59:46.937559 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 00:59:46.939200 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 00:59:46.939293 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 00:59:46.940203 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 00:59:46.940247 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 00:59:46.941016 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 00:59:46.941085 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 00:59:46.942408 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 00:59:46.942464 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 00:59:46.943148 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 00:59:46.943197 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 00:59:46.951641 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 00:59:46.952126 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 00:59:46.952199 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 00:59:46.952706 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 00:59:46.952768 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:59:46.959521 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 00:59:46.959649 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 00:59:46.960835 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 00:59:46.972615 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 00:59:46.981092 systemd[1]: Switching root.
Oct 9 00:59:47.007691 systemd-journald[183]: Journal stopped
Oct 9 00:59:48.190366 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Oct 9 00:59:48.190440 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 00:59:48.190456 kernel: SELinux: policy capability open_perms=1
Oct 9 00:59:48.190469 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 00:59:48.190482 kernel: SELinux: policy capability always_check_network=0
Oct 9 00:59:48.190499 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 00:59:48.190520 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 00:59:48.190532 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 00:59:48.190544 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 00:59:48.190557 kernel: audit: type=1403 audit(1728435587.168:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 00:59:48.190571 systemd[1]: Successfully loaded SELinux policy in 42.125ms.
Oct 9 00:59:48.190592 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.820ms.
Oct 9 00:59:48.190606 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 00:59:48.190619 systemd[1]: Detected virtualization kvm.
Oct 9 00:59:48.190636 systemd[1]: Detected architecture x86-64.
Oct 9 00:59:48.190652 systemd[1]: Detected first boot.
Oct 9 00:59:48.190670 systemd[1]: Hostname set to .
Oct 9 00:59:48.190683 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 00:59:48.190698 zram_generator::config[1034]: No configuration found.
Oct 9 00:59:48.190713 systemd[1]: Populated /etc with preset unit settings.
Oct 9 00:59:48.190726 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 9 00:59:48.190738 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 9 00:59:48.190753 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 9 00:59:48.190768 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 00:59:48.190787 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 00:59:48.190800 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 00:59:48.190813 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 00:59:48.190827 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 00:59:48.190844 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 00:59:48.190857 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 00:59:48.190880 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 00:59:48.190893 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 00:59:48.190905 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 00:59:48.190918 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 00:59:48.190931 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 00:59:48.190944 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 00:59:48.190957 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 00:59:48.190970 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 00:59:48.190982 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 00:59:48.190999 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 9 00:59:48.191013 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 9 00:59:48.191025 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 9 00:59:48.191038 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 00:59:48.191050 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 00:59:48.191063 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 00:59:48.191078 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 00:59:48.191091 systemd[1]: Reached target swap.target - Swaps.
Oct 9 00:59:48.191103 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 00:59:48.191116 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 00:59:48.191131 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 00:59:48.191144 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 00:59:48.191156 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 00:59:48.191169 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 00:59:48.191181 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 00:59:48.191193 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 00:59:48.191209 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 00:59:48.191221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:59:48.191233 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 00:59:48.191251 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 00:59:48.191267 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 00:59:48.191282 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 9 00:59:48.191294 systemd[1]: Reached target machines.target - Containers.
Oct 9 00:59:48.191307 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 00:59:48.197749 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:59:48.197777 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 00:59:48.197790 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 9 00:59:48.197808 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 00:59:48.197821 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 00:59:48.197835 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 00:59:48.197848 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 9 00:59:48.197860 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 00:59:48.197874 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 9 00:59:48.197892 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 9 00:59:48.197911 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 9 00:59:48.197930 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 9 00:59:48.197949 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 9 00:59:48.197965 kernel: fuse: init (API version 7.39)
Oct 9 00:59:48.197984 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 00:59:48.198003 kernel: loop: module loaded
Oct 9 00:59:48.198017 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 00:59:48.198030 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 9 00:59:48.198048 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 9 00:59:48.198061 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 00:59:48.198074 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 9 00:59:48.198088 systemd[1]: Stopped verity-setup.service.
Oct 9 00:59:48.198102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:59:48.198115 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 9 00:59:48.198134 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 9 00:59:48.198147 systemd[1]: Mounted media.mount - External Media Directory.
Oct 9 00:59:48.198163 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 9 00:59:48.198246 systemd-journald[1107]: Collecting audit messages is disabled.
Oct 9 00:59:48.198291 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 9 00:59:48.198306 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 9 00:59:48.198364 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 00:59:48.198380 systemd-journald[1107]: Journal started
Oct 9 00:59:48.198406 systemd-journald[1107]: Runtime Journal (/run/log/journal/049122f36e5741ffb7e1796ffac4610c) is 4.9M, max 39.3M, 34.4M free.
Oct 9 00:59:47.886224 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 00:59:47.905643 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 9 00:59:47.906383 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 9 00:59:48.207088 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 00:59:48.204231 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 9 00:59:48.206431 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 9 00:59:48.207631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 00:59:48.207824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 00:59:48.210900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 00:59:48.211049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 00:59:48.212097 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 9 00:59:48.212255 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 9 00:59:48.212961 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 00:59:48.213114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 00:59:48.230372 kernel: ACPI: bus type drm_connector registered
Oct 9 00:59:48.231274 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 00:59:48.231463 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 00:59:48.233612 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 00:59:48.234980 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 9 00:59:48.249424 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 9 00:59:48.261641 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 9 00:59:48.267046 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 9 00:59:48.267670 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 00:59:48.277583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 00:59:48.282180 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 9 00:59:48.288430 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 9 00:59:48.298950 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 9 00:59:48.299686 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 9 00:59:48.307869 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 9 00:59:48.309416 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 00:59:48.311981 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 9 00:59:48.317595 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 9 00:59:48.326666 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 9 00:59:48.327472 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:59:48.339056 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 9 00:59:48.341686 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 9 00:59:48.342304 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 00:59:48.344813 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 9 00:59:48.352568 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 9 00:59:48.361968 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 9 00:59:48.365466 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 00:59:48.366696 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 9 00:59:48.392403 kernel: loop0: detected capacity change from 0 to 140992
Oct 9 00:59:48.412084 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 9 00:59:48.414733 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 9 00:59:48.431544 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 9 00:59:48.444483 systemd-journald[1107]: Time spent on flushing to /var/log/journal/049122f36e5741ffb7e1796ffac4610c is 65.868ms for 994 entries.
Oct 9 00:59:48.444483 systemd-journald[1107]: System Journal (/var/log/journal/049122f36e5741ffb7e1796ffac4610c) is 8.0M, max 195.6M, 187.6M free.
Oct 9 00:59:48.537193 systemd-journald[1107]: Received client request to flush runtime journal.
Oct 9 00:59:48.537305 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 9 00:59:48.537354 kernel: loop1: detected capacity change from 0 to 210664
Oct 9 00:59:48.454084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 00:59:48.468595 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 9 00:59:48.492353 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 9 00:59:48.494488 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 9 00:59:48.512269 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 9 00:59:48.521659 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 00:59:48.543208 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 9 00:59:48.552716 udevadm[1168]: systemd-udev-settle.service is deprecated.
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 9 00:59:48.572813 kernel: loop2: detected capacity change from 0 to 138192
Oct 9 00:59:48.640925 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Oct 9 00:59:48.640949 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Oct 9 00:59:48.661791 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 00:59:48.669357 kernel: loop3: detected capacity change from 0 to 8
Oct 9 00:59:48.695365 kernel: loop4: detected capacity change from 0 to 140992
Oct 9 00:59:48.732338 kernel: loop5: detected capacity change from 0 to 210664
Oct 9 00:59:48.753341 kernel: loop6: detected capacity change from 0 to 138192
Oct 9 00:59:48.774468 kernel: loop7: detected capacity change from 0 to 8
Oct 9 00:59:48.775297 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Oct 9 00:59:48.776967 (sd-merge)[1179]: Merged extensions into '/usr'.
Oct 9 00:59:48.788952 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 9 00:59:48.788974 systemd[1]: Reloading...
Oct 9 00:59:48.953426 zram_generator::config[1205]: No configuration found.
Oct 9 00:59:49.104398 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 9 00:59:49.213376 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 00:59:49.287895 systemd[1]: Reloading finished in 498 ms.
Oct 9 00:59:49.311628 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 9 00:59:49.315861 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 9 00:59:49.326616 systemd[1]: Starting ensure-sysext.service...
Oct 9 00:59:49.334783 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 00:59:49.354500 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Oct 9 00:59:49.354538 systemd[1]: Reloading...
Oct 9 00:59:49.419733 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 9 00:59:49.420241 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 9 00:59:49.424447 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 9 00:59:49.424809 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Oct 9 00:59:49.424879 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Oct 9 00:59:49.434396 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 00:59:49.434411 systemd-tmpfiles[1249]: Skipping /boot
Oct 9 00:59:49.457350 zram_generator::config[1271]: No configuration found.
Oct 9 00:59:49.475697 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Oct 9 00:59:49.475714 systemd-tmpfiles[1249]: Skipping /boot
Oct 9 00:59:49.695343 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 00:59:49.751780 systemd[1]: Reloading finished in 396 ms.
Oct 9 00:59:49.771489 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 9 00:59:49.777065 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 00:59:49.792246 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 00:59:49.796574 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 9 00:59:49.800030 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 9 00:59:49.804719 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 00:59:49.813561 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 00:59:49.818595 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 9 00:59:49.834778 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 9 00:59:49.838829 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:59:49.839026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:59:49.848806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 00:59:49.858370 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 00:59:49.863632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 00:59:49.865524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:59:49.865705 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:59:49.871084 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:59:49.871295 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:59:49.872240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:59:49.872350 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:59:49.877064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:59:49.877594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:59:49.887794 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 9 00:59:49.888653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:59:49.888819 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:59:49.892050 systemd[1]: Finished ensure-sysext.service.
Oct 9 00:59:49.903592 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 9 00:59:49.906776 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 9 00:59:49.915600 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 9 00:59:49.932977 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 00:59:49.933219 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 00:59:49.938815 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 9 00:59:49.961919 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 00:59:49.962193 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 00:59:49.963919 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 00:59:49.964412 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 00:59:49.968446 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 00:59:49.968536 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 00:59:49.969397 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 9 00:59:49.972613 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 00:59:49.975803 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Oct 9 00:59:49.978654 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 9 00:59:49.978877 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 9 00:59:49.981918 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 9 00:59:49.991390 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 9 00:59:50.024587 augenrules[1367]: No rules
Oct 9 00:59:50.026731 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 00:59:50.026941 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 00:59:50.045223 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 00:59:50.053654 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 00:59:50.098284 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 9 00:59:50.099230 systemd[1]: Reached target time-set.target - System Time Set.
Oct 9 00:59:50.108807 systemd-resolved[1326]: Positive Trust Anchors:
Oct 9 00:59:50.109154 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 00:59:50.109261 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 00:59:50.115174 systemd-resolved[1326]: Using system hostname 'ci-4116.0.0-c-50f1e82448'.
Oct 9 00:59:50.117932 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 00:59:50.118624 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 00:59:50.166475 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 9 00:59:50.169920 systemd-networkd[1375]: lo: Link UP
Oct 9 00:59:50.169930 systemd-networkd[1375]: lo: Gained carrier
Oct 9 00:59:50.171265 systemd-networkd[1375]: Enumeration completed
Oct 9 00:59:50.171403 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 00:59:50.171990 systemd[1]: Reached target network.target - Network.
Oct 9 00:59:50.180596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 9 00:59:50.202539 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Oct 9 00:59:50.203055 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:59:50.203245 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 00:59:50.214577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 9 00:59:50.217643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 9 00:59:50.221397 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1385)
Oct 9 00:59:50.230486 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 9 00:59:50.233605 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 9 00:59:50.233657 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 9 00:59:50.233677 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 00:59:50.237432 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1380)
Oct 9 00:59:50.247380 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 9 00:59:50.247547 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 9 00:59:50.252367 kernel: ISO 9660 Extensions: RRIP_1991A
Oct 9 00:59:50.258762 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Oct 9 00:59:50.259608 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 9 00:59:50.259812 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 9 00:59:50.261447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 9 00:59:50.263480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 9 00:59:50.263765 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 9 00:59:50.270119 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 9 00:59:50.278367 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1380)
Oct 9 00:59:50.300603 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 9 00:59:50.316384 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 9 00:59:50.327618 systemd-networkd[1375]: eth1: Configuring with /run/systemd/network/10-f6:d4:c5:54:ef:95.network.
Oct 9 00:59:50.328481 systemd-networkd[1375]: eth1: Link UP
Oct 9 00:59:50.328487 systemd-networkd[1375]: eth1: Gained carrier
Oct 9 00:59:50.333301 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection.
Oct 9 00:59:50.336525 systemd-networkd[1375]: eth0: Configuring with /run/systemd/network/10-0e:e4:38:fd:d9:90.network.
Oct 9 00:59:50.337689 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection.
Oct 9 00:59:50.339528 systemd-networkd[1375]: eth0: Link UP
Oct 9 00:59:50.339540 systemd-networkd[1375]: eth0: Gained carrier
Oct 9 00:59:50.343670 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection.
Oct 9 00:59:50.346654 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection.
Oct 9 00:59:50.351258 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 9 00:59:50.370421 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 9 00:59:50.380345 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 9 00:59:50.380615 kernel: ACPI: button: Power Button [PWRF]
Oct 9 00:59:50.472656 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 9 00:59:50.484664 kernel: mousedev: PS/2 mouse device common for all mice
Oct 9 00:59:50.493652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:59:50.516393 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 9 00:59:50.519340 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 9 00:59:50.533946 kernel: Console: switching to colour dummy device 80x25
Oct 9 00:59:50.534068 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 9 00:59:50.534101 kernel: [drm] features: -context_init
Oct 9 00:59:50.534910 kernel: [drm] number of scanouts: 1
Oct 9 00:59:50.537345 kernel: [drm] number of cap sets: 0
Oct 9 00:59:50.545351 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Oct 9 00:59:50.558304 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 9 00:59:50.558442 kernel: Console: switching to colour frame buffer device 128x48
Oct 9 00:59:50.577349 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 9 00:59:50.580239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 00:59:50.580520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:59:50.598817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 00:59:50.707366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 00:59:50.718479 kernel: EDAC MC: Ver: 3.0.0
Oct 9 00:59:50.754965 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 9 00:59:50.764694 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 9 00:59:50.781366 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 00:59:50.811217 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 9 00:59:50.812837 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 00:59:50.812986 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 00:59:50.813161 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 9 00:59:50.813261 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 9 00:59:50.813643 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 9 00:59:50.813881 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 9 00:59:50.813977 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 9 00:59:50.814061 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 9 00:59:50.814092 systemd[1]: Reached target paths.target - Path Units.
Oct 9 00:59:50.814144 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 00:59:50.817930 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 9 00:59:50.820633 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 9 00:59:50.829058 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 9 00:59:50.834144 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 9 00:59:50.835159 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 9 00:59:50.835871 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 00:59:50.836932 systemd[1]: Reached target basic.target - Basic System.
Oct 9 00:59:50.838917 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 9 00:59:50.838966 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 9 00:59:50.840440 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 9 00:59:50.846151 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 9 00:59:50.855339 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 9 00:59:50.857743 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 9 00:59:50.870498 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 9 00:59:50.879591 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 9 00:59:50.884119 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 9 00:59:50.890537 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 9 00:59:50.902090 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 9 00:59:50.912347 jq[1440]: false
Oct 9 00:59:50.912835 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 00:59:50.926646 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 00:59:50.945640 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 00:59:50.953666 coreos-metadata[1438]: Oct 09 00:59:50.953 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 00:59:50.955401 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 00:59:50.957125 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 00:59:50.960018 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 00:59:50.968395 coreos-metadata[1438]: Oct 09 00:59:50.968 INFO Fetch successful
Oct 9 00:59:50.973488 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 00:59:50.977867 dbus-daemon[1439]: [system] SELinux support is enabled
Oct 9 00:59:50.978417 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 9 00:59:50.980055 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found loop4
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found loop5
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found loop6
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found loop7
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found vda
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found vda1
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found vda2
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found vda3
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found usr
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found vda4
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found vda6
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found vda7
Oct 9 00:59:50.990349 extend-filesystems[1443]: Found vda9
Oct 9 00:59:50.990349 extend-filesystems[1443]: Checking size of /dev/vda9
Oct 9 00:59:50.991872 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 00:59:50.992089 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 00:59:50.993234 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 00:59:51.085443 jq[1452]: true
Oct 9 00:59:50.993460 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 00:59:51.088625 extend-filesystems[1443]: Resized partition /dev/vda9
Oct 9 00:59:51.010889 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 00:59:51.112459 jq[1473]: true
Oct 9 00:59:51.010981 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 00:59:51.112958 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024)
Oct 9 00:59:51.140778 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Oct 9 00:59:51.016463 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 00:59:51.147411 update_engine[1450]: I20241009 00:59:51.120469 1450 main.cc:92] Flatcar Update Engine starting
Oct 9 00:59:51.016598 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Oct 9 00:59:51.016642 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 00:59:51.081768 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 00:59:51.094897 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 00:59:51.157707 update_engine[1450]: I20241009 00:59:51.152916 1450 update_check_scheduler.cc:74] Next update check in 10m21s
Oct 9 00:59:51.096058 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 00:59:51.152874 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 00:59:51.160637 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 00:59:51.165581 tar[1457]: linux-amd64/helm
Oct 9 00:59:51.186721 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 9 00:59:51.189802 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 00:59:51.218356 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1378)
Oct 9 00:59:51.260204 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Oct 9 00:59:51.294553 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 9 00:59:51.294553 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 8
Oct 9 00:59:51.294553 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Oct 9 00:59:51.305592 extend-filesystems[1443]: Resized filesystem in /dev/vda9
Oct 9 00:59:51.305592 extend-filesystems[1443]: Found vdb
Oct 9 00:59:51.321574 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 00:59:51.322779 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 00:59:51.369889 systemd-logind[1449]: New seat seat0.
Oct 9 00:59:51.378519 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 9 00:59:51.378548 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 00:59:51.378963 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 00:59:51.386237 bash[1503]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 00:59:51.393101 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 00:59:51.396743 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 00:59:51.409838 systemd[1]: Starting sshkeys.service...
Oct 9 00:59:51.458903 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 00:59:51.468473 systemd-networkd[1375]: eth1: Gained IPv6LL
Oct 9 00:59:51.468823 systemd-networkd[1375]: eth0: Gained IPv6LL
Oct 9 00:59:51.469461 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection.
Oct 9 00:59:51.498682 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 9 00:59:51.518175 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 9 00:59:51.518877 systemd[1]: Reached target network-online.target - Network is Online.
Oct 9 00:59:51.532016 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 9 00:59:51.546757 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 00:59:51.557325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 00:59:51.563756 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 9 00:59:51.574560 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 00:59:51.576370 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 00:59:51.598416 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 00:59:51.621600 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 00:59:51.633138 coreos-metadata[1520]: Oct 09 00:59:51.632 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Oct 9 00:59:51.645844 coreos-metadata[1520]: Oct 09 00:59:51.644 INFO Fetch successful
Oct 9 00:59:51.646957 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 00:59:51.657845 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 00:59:51.672427 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 9 00:59:51.674546 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 00:59:51.694481 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 00:59:51.710027 unknown[1520]: wrote ssh authorized keys file for user: core
Oct 9 00:59:51.760894 update-ssh-keys[1547]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 00:59:51.762310 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 9 00:59:51.766121 systemd[1]: Finished sshkeys.service.
Oct 9 00:59:51.835448 containerd[1468]: time="2024-10-09T00:59:51.835104563Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22
Oct 9 00:59:51.880402 containerd[1468]: time="2024-10-09T00:59:51.879028027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 00:59:51.881292 containerd[1468]: time="2024-10-09T00:59:51.881249336Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 00:59:51.881598 containerd[1468]: time="2024-10-09T00:59:51.881579220Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 00:59:51.881684 containerd[1468]: time="2024-10-09T00:59:51.881671054Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 00:59:51.881873 containerd[1468]: time="2024-10-09T00:59:51.881859127Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 00:59:51.881933 containerd[1468]: time="2024-10-09T00:59:51.881924022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 00:59:51.882042 containerd[1468]: time="2024-10-09T00:59:51.882026854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 00:59:51.882104 containerd[1468]: time="2024-10-09T00:59:51.882094629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 00:59:51.882417 containerd[1468]: time="2024-10-09T00:59:51.882394590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 00:59:51.882500 containerd[1468]: time="2024-10-09T00:59:51.882473076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 00:59:51.882563 containerd[1468]: time="2024-10-09T00:59:51.882551160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 00:59:51.882621 containerd[1468]: time="2024-10-09T00:59:51.882610541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 00:59:51.882748 containerd[1468]: time="2024-10-09T00:59:51.882735340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 00:59:51.883005 containerd[1468]: time="2024-10-09T00:59:51.882989199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 00:59:51.883215 containerd[1468]: time="2024-10-09T00:59:51.883198996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 00:59:51.883280 containerd[1468]: time="2024-10-09T00:59:51.883269595Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 00:59:51.883443 containerd[1468]: time="2024-10-09T00:59:51.883428526Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 00:59:51.883566 containerd[1468]: time="2024-10-09T00:59:51.883548976Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 00:59:51.893083 containerd[1468]: time="2024-10-09T00:59:51.893000733Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 00:59:51.893550 containerd[1468]: time="2024-10-09T00:59:51.893447862Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 00:59:51.894415 containerd[1468]: time="2024-10-09T00:59:51.893603059Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 00:59:51.894415 containerd[1468]: time="2024-10-09T00:59:51.893621860Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 00:59:51.894415 containerd[1468]: time="2024-10-09T00:59:51.893637955Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 00:59:51.894415 containerd[1468]: time="2024-10-09T00:59:51.893825001Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 00:59:51.894415 containerd[1468]: time="2024-10-09T00:59:51.894089220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 00:59:51.894744 containerd[1468]: time="2024-10-09T00:59:51.894713949Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 00:59:51.894831 containerd[1468]: time="2024-10-09T00:59:51.894819729Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 00:59:51.895058 containerd[1468]: time="2024-10-09T00:59:51.895041184Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 00:59:51.895139 containerd[1468]: time="2024-10-09T00:59:51.895127865Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 00:59:51.895199 containerd[1468]: time="2024-10-09T00:59:51.895179468Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 00:59:51.895245 containerd[1468]: time="2024-10-09T00:59:51.895236588Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 00:59:51.895354 containerd[1468]: time="2024-10-09T00:59:51.895341218Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 00:59:51.895432 containerd[1468]: time="2024-10-09T00:59:51.895420425Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 00:59:51.895511 containerd[1468]: time="2024-10-09T00:59:51.895498755Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 00:59:51.895719 containerd[1468]: time="2024-10-09T00:59:51.895699995Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 00:59:51.895795 containerd[1468]: time="2024-10-09T00:59:51.895784553Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 00:59:51.895902 containerd[1468]: time="2024-10-09T00:59:51.895889823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896125 containerd[1468]: time="2024-10-09T00:59:51.896104672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896248 containerd[1468]: time="2024-10-09T00:59:51.896230418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896602 containerd[1468]: time="2024-10-09T00:59:51.896488907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896602 containerd[1468]: time="2024-10-09T00:59:51.896508524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896602 containerd[1468]: time="2024-10-09T00:59:51.896522388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896954 containerd[1468]: time="2024-10-09T00:59:51.896806057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896954 containerd[1468]: time="2024-10-09T00:59:51.896832997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896954 containerd[1468]: time="2024-10-09T00:59:51.896846306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896954 containerd[1468]: time="2024-10-09T00:59:51.896861452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896954 containerd[1468]: time="2024-10-09T00:59:51.896884240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896954 containerd[1468]: time="2024-10-09T00:59:51.896895611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896954 containerd[1468]: time="2024-10-09T00:59:51.896908921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.896954 containerd[1468]: time="2024-10-09T00:59:51.896924566Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 00:59:51.897465 containerd[1468]: time="2024-10-09T00:59:51.897226047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.897465 containerd[1468]: time="2024-10-09T00:59:51.897258103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 00:59:51.897465 containerd[1468]: time="2024-10-09T00:59:51.897273818Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 00:59:51.897465 containerd[1468]: time="2024-10-09T00:59:51.897370278Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 00:59:51.897779 containerd[1468]: time="2024-10-09T00:59:51.897389356Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 00:59:51.897779 containerd[1468]: time="2024-10-09T00:59:51.897648835Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..."
type=io.containerd.internal.v1 Oct 9 00:59:51.897779 containerd[1468]: time="2024-10-09T00:59:51.897667772Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 00:59:51.897779 containerd[1468]: time="2024-10-09T00:59:51.897677373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 00:59:51.897779 containerd[1468]: time="2024-10-09T00:59:51.897688989Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 00:59:51.897779 containerd[1468]: time="2024-10-09T00:59:51.897709677Z" level=info msg="NRI interface is disabled by configuration." Oct 9 00:59:51.897779 containerd[1468]: time="2024-10-09T00:59:51.897720511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 9 00:59:51.898718 containerd[1468]: time="2024-10-09T00:59:51.898486132Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 00:59:51.899327 containerd[1468]: time="2024-10-09T00:59:51.899044444Z" level=info msg="Connect containerd service" Oct 9 00:59:51.899327 containerd[1468]: time="2024-10-09T00:59:51.899121159Z" level=info msg="using legacy CRI server" Oct 9 00:59:51.899327 containerd[1468]: time="2024-10-09T00:59:51.899135900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 00:59:51.899578 containerd[1468]: time="2024-10-09T00:59:51.899443068Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 00:59:51.901326 containerd[1468]: time="2024-10-09T00:59:51.901292906Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 00:59:51.901797 containerd[1468]: time="2024-10-09T00:59:51.901771146Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 00:59:51.902356 containerd[1468]: time="2024-10-09T00:59:51.901906407Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 00:59:51.902356 containerd[1468]: time="2024-10-09T00:59:51.901957828Z" level=info msg="Start subscribing containerd event" Oct 9 00:59:51.902356 containerd[1468]: time="2024-10-09T00:59:51.902010088Z" level=info msg="Start recovering state" Oct 9 00:59:51.902356 containerd[1468]: time="2024-10-09T00:59:51.902079811Z" level=info msg="Start event monitor" Oct 9 00:59:51.902356 containerd[1468]: time="2024-10-09T00:59:51.902094592Z" level=info msg="Start snapshots syncer" Oct 9 00:59:51.902356 containerd[1468]: time="2024-10-09T00:59:51.902107602Z" level=info msg="Start cni network conf syncer for default" Oct 9 00:59:51.902356 containerd[1468]: time="2024-10-09T00:59:51.902117575Z" level=info msg="Start streaming server" Oct 9 00:59:51.902356 containerd[1468]: time="2024-10-09T00:59:51.902291550Z" level=info msg="containerd successfully booted in 0.070921s" Oct 9 00:59:51.902480 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 00:59:52.202822 tar[1457]: linux-amd64/LICENSE Oct 9 00:59:52.203740 tar[1457]: linux-amd64/README.md Oct 9 00:59:52.218767 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 00:59:52.909675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
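The `failed to load cni during init` error above is benign at this stage: the CRI plugin looks for a network config in /etc/cni/net.d (the NetworkPluginConfDir in the config dump above), and no network add-on has installed one yet. As a sketch of what eventually lands there, a minimal CNI bridge conflist might look like the following; the network name, bridge name, and subnet are illustrative assumptions, not values taken from this system, and the file is written to the current directory rather than /etc/cni/net.d:

```shell
# Illustrative only: a minimal CNI conflist of the shape the CRI plugin
# scans for. Names and the subnet are assumptions for demonstration.
cat > 10-example.conflist <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24"
      }
    }
  ]
}
EOF
echo "wrote $(wc -c < 10-example.conflist) bytes"
```

Once a real add-on drops its conflist into /etc/cni/net.d, the `Start cni network conf syncer for default` loop logged above picks it up without a containerd restart.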
Oct 9 00:59:52.912380 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 9 00:59:52.914091 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 00:59:52.918491 systemd[1]: Startup finished in 1.118s (kernel) + 5.441s (initrd) + 5.791s (userspace) = 12.351s.
Oct 9 00:59:53.694956 kubelet[1562]: E1009 00:59:53.694850 1562 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 00:59:53.698584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 00:59:53.698808 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 00:59:53.699241 systemd[1]: kubelet.service: Consumed 1.358s CPU time.
Oct 9 00:59:55.633801 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 9 00:59:55.638728 systemd[1]: Started sshd@0-143.110.225.158:22-139.178.68.195:48950.service - OpenSSH per-connection server daemon (139.178.68.195:48950).
Oct 9 00:59:55.727504 sshd[1575]: Accepted publickey for core from 139.178.68.195 port 48950 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 00:59:55.730834 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:55.747804 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 9 00:59:55.748042 systemd-logind[1449]: New session 1 of user core.
Oct 9 00:59:55.753846 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 9 00:59:55.774073 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
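The kubelet exit above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is normally written later by `kubeadm init` or `kubeadm join`, so until then the unit fails and systemd retries it. A quick way to confirm which file the error names is to parse the log entry itself; in this sketch the `line=` value is an abbreviated copy of the entry above, not live journal output:

```shell
# Extract the missing-config path from the kubelet "command failed" entry.
# The line below is an abbreviated copy of the log entry for demonstration.
line='run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: ..."'
path=$(printf '%s\n' "$line" | sed -n 's/.*path: \([^,]*\),.*/\1/p')
echo "$path"
# → /var/lib/kubelet/config.yaml
```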
Oct 9 00:59:55.783850 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 9 00:59:55.801136 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 9 00:59:55.932424 systemd[1579]: Queued start job for default target default.target.
Oct 9 00:59:55.949017 systemd[1579]: Created slice app.slice - User Application Slice.
Oct 9 00:59:55.949083 systemd[1579]: Reached target paths.target - Paths.
Oct 9 00:59:55.949108 systemd[1579]: Reached target timers.target - Timers.
Oct 9 00:59:55.951491 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 9 00:59:55.967634 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 9 00:59:55.967802 systemd[1579]: Reached target sockets.target - Sockets.
Oct 9 00:59:55.967821 systemd[1579]: Reached target basic.target - Basic System.
Oct 9 00:59:55.967874 systemd[1579]: Reached target default.target - Main User Target.
Oct 9 00:59:55.967911 systemd[1579]: Startup finished in 156ms.
Oct 9 00:59:55.968700 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 9 00:59:55.977874 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 9 00:59:56.049814 systemd[1]: Started sshd@1-143.110.225.158:22-139.178.68.195:48966.service - OpenSSH per-connection server daemon (139.178.68.195:48966).
Oct 9 00:59:56.095156 sshd[1590]: Accepted publickey for core from 139.178.68.195 port 48966 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 00:59:56.097502 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:56.103371 systemd-logind[1449]: New session 2 of user core.
Oct 9 00:59:56.115628 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 9 00:59:56.179040 sshd[1590]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:56.189122 systemd[1]: sshd@1-143.110.225.158:22-139.178.68.195:48966.service: Deactivated successfully.
Oct 9 00:59:56.191558 systemd[1]: session-2.scope: Deactivated successfully.
Oct 9 00:59:56.194631 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit.
Oct 9 00:59:56.200933 systemd[1]: Started sshd@2-143.110.225.158:22-139.178.68.195:48972.service - OpenSSH per-connection server daemon (139.178.68.195:48972).
Oct 9 00:59:56.203382 systemd-logind[1449]: Removed session 2.
Oct 9 00:59:56.245074 sshd[1597]: Accepted publickey for core from 139.178.68.195 port 48972 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 00:59:56.246911 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:56.253608 systemd-logind[1449]: New session 3 of user core.
Oct 9 00:59:56.260731 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 9 00:59:56.317777 sshd[1597]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:56.330553 systemd[1]: sshd@2-143.110.225.158:22-139.178.68.195:48972.service: Deactivated successfully.
Oct 9 00:59:56.333222 systemd[1]: session-3.scope: Deactivated successfully.
Oct 9 00:59:56.334430 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit.
Oct 9 00:59:56.342932 systemd[1]: Started sshd@3-143.110.225.158:22-139.178.68.195:48974.service - OpenSSH per-connection server daemon (139.178.68.195:48974).
Oct 9 00:59:56.344750 systemd-logind[1449]: Removed session 3.
Oct 9 00:59:56.396229 sshd[1604]: Accepted publickey for core from 139.178.68.195 port 48974 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 00:59:56.398688 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:56.406607 systemd-logind[1449]: New session 4 of user core.
Oct 9 00:59:56.414607 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 9 00:59:56.485209 sshd[1604]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:56.496633 systemd[1]: sshd@3-143.110.225.158:22-139.178.68.195:48974.service: Deactivated successfully.
Oct 9 00:59:56.499728 systemd[1]: session-4.scope: Deactivated successfully.
Oct 9 00:59:56.502510 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit.
Oct 9 00:59:56.506817 systemd[1]: Started sshd@4-143.110.225.158:22-139.178.68.195:48976.service - OpenSSH per-connection server daemon (139.178.68.195:48976).
Oct 9 00:59:56.508978 systemd-logind[1449]: Removed session 4.
Oct 9 00:59:56.561846 sshd[1611]: Accepted publickey for core from 139.178.68.195 port 48976 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 00:59:56.563587 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:56.570639 systemd-logind[1449]: New session 5 of user core.
Oct 9 00:59:56.577688 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 9 00:59:56.655333 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 9 00:59:56.656503 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:59:56.674571 sudo[1614]: pam_unix(sudo:session): session closed for user root
Oct 9 00:59:56.679142 sshd[1611]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:56.694631 systemd[1]: sshd@4-143.110.225.158:22-139.178.68.195:48976.service: Deactivated successfully.
Oct 9 00:59:56.697622 systemd[1]: session-5.scope: Deactivated successfully.
Oct 9 00:59:56.699492 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit.
Oct 9 00:59:56.711870 systemd[1]: Started sshd@5-143.110.225.158:22-139.178.68.195:48986.service - OpenSSH per-connection server daemon (139.178.68.195:48986).
Oct 9 00:59:56.714039 systemd-logind[1449]: Removed session 5.
Oct 9 00:59:56.762665 sshd[1619]: Accepted publickey for core from 139.178.68.195 port 48986 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 00:59:56.765206 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:56.771200 systemd-logind[1449]: New session 6 of user core.
Oct 9 00:59:56.781735 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 9 00:59:56.845808 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 9 00:59:56.846206 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:59:56.852122 sudo[1623]: pam_unix(sudo:session): session closed for user root
Oct 9 00:59:56.859620 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 9 00:59:56.859986 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:59:56.883007 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 00:59:56.924039 augenrules[1645]: No rules
Oct 9 00:59:56.925796 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 00:59:56.926160 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 00:59:56.928182 sudo[1622]: pam_unix(sudo:session): session closed for user root
Oct 9 00:59:56.932895 sshd[1619]: pam_unix(sshd:session): session closed for user core
Oct 9 00:59:56.955495 systemd[1]: sshd@5-143.110.225.158:22-139.178.68.195:48986.service: Deactivated successfully.
Oct 9 00:59:56.957817 systemd[1]: session-6.scope: Deactivated successfully.
Oct 9 00:59:56.960633 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit.
Oct 9 00:59:56.968893 systemd[1]: Started sshd@6-143.110.225.158:22-139.178.68.195:48996.service - OpenSSH per-connection server daemon (139.178.68.195:48996).
Oct 9 00:59:56.975915 systemd-logind[1449]: Removed session 6.
Oct 9 00:59:57.028966 sshd[1653]: Accepted publickey for core from 139.178.68.195 port 48996 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 00:59:57.030681 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 00:59:57.041985 systemd-logind[1449]: New session 7 of user core.
Oct 9 00:59:57.048620 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 9 00:59:57.110149 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 9 00:59:57.110526 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 00:59:57.606927 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 9 00:59:57.607018 (dockerd)[1674]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 9 00:59:58.064345 dockerd[1674]: time="2024-10-09T00:59:58.064088367Z" level=info msg="Starting up"
Oct 9 00:59:58.208665 systemd[1]: var-lib-docker-metacopy\x2dcheck508012885-merged.mount: Deactivated successfully.
Oct 9 00:59:58.232506 dockerd[1674]: time="2024-10-09T00:59:58.232432141Z" level=info msg="Loading containers: start."
Oct 9 00:59:58.452344 kernel: Initializing XFRM netlink socket
Oct 9 00:59:58.490407 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection.
Oct 9 00:59:58.565701 systemd-networkd[1375]: docker0: Link UP
Oct 9 00:59:58.566434 systemd-timesyncd[1339]: Network configuration changed, trying to establish connection.
Oct 9 00:59:58.612252 dockerd[1674]: time="2024-10-09T00:59:58.612196345Z" level=info msg="Loading containers: done."
Oct 9 00:59:58.631403 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1780950240-merged.mount: Deactivated successfully.
Oct 9 00:59:58.634832 dockerd[1674]: time="2024-10-09T00:59:58.634544270Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 9 00:59:58.634832 dockerd[1674]: time="2024-10-09T00:59:58.634669323Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Oct 9 00:59:58.634832 dockerd[1674]: time="2024-10-09T00:59:58.634801554Z" level=info msg="Daemon has completed initialization"
Oct 9 00:59:58.701264 dockerd[1674]: time="2024-10-09T00:59:58.701184902Z" level=info msg="API listen on /run/docker.sock"
Oct 9 00:59:58.701642 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 9 00:59:59.862330 containerd[1468]: time="2024-10-09T00:59:59.861781073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\""
Oct 9 01:00:01.073271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634491781.mount: Deactivated successfully.
Oct 9 01:00:03.708183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 9 01:00:03.722264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:00:04.008068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:00:04.020090 (kubelet)[1937]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:00:04.140441 kubelet[1937]: E1009 01:00:04.139564 1937 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:00:04.155298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:00:04.155841 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:00:04.877204 containerd[1468]: time="2024-10-09T01:00:04.877109923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:04.880887 containerd[1468]: time="2024-10-09T01:00:04.880729884Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32754097"
Oct 9 01:00:04.883979 containerd[1468]: time="2024-10-09T01:00:04.883900657Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:04.890547 containerd[1468]: time="2024-10-09T01:00:04.890459327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:04.892730 containerd[1468]: time="2024-10-09T01:00:04.892659526Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 5.030791929s"
Oct 9 01:00:04.893155 containerd[1468]: time="2024-10-09T01:00:04.893102993Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\""
Oct 9 01:00:04.959881 containerd[1468]: time="2024-10-09T01:00:04.959801217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\""
Oct 9 01:00:07.577347 containerd[1468]: time="2024-10-09T01:00:07.574883581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:07.581786 containerd[1468]: time="2024-10-09T01:00:07.581653039Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29591652"
Oct 9 01:00:07.584754 containerd[1468]: time="2024-10-09T01:00:07.584647301Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:07.592502 containerd[1468]: time="2024-10-09T01:00:07.591435914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:07.593825 containerd[1468]: time="2024-10-09T01:00:07.593755423Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 2.633885595s"
Oct 9 01:00:07.593825 containerd[1468]: time="2024-10-09T01:00:07.593821692Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\""
Oct 9 01:00:07.639684 containerd[1468]: time="2024-10-09T01:00:07.639568183Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\""
Oct 9 01:00:07.643060 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Oct 9 01:00:09.600988 containerd[1468]: time="2024-10-09T01:00:09.600915541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:09.604199 containerd[1468]: time="2024-10-09T01:00:09.604115712Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17779987"
Oct 9 01:00:09.612052 containerd[1468]: time="2024-10-09T01:00:09.611929151Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:09.626363 containerd[1468]: time="2024-10-09T01:00:09.625498597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:09.627536 containerd[1468]: time="2024-10-09T01:00:09.627467972Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 1.9878234s"
Oct 9 01:00:09.627775 containerd[1468]: time="2024-10-09T01:00:09.627742067Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\""
Oct 9 01:00:09.700841 containerd[1468]: time="2024-10-09T01:00:09.700726013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\""
Oct 9 01:00:10.729445 systemd-resolved[1326]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Oct 9 01:00:11.776672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769912839.mount: Deactivated successfully.
Oct 9 01:00:12.751358 containerd[1468]: time="2024-10-09T01:00:12.749471253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:12.751989 containerd[1468]: time="2024-10-09T01:00:12.751300296Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29039362"
Oct 9 01:00:12.753203 containerd[1468]: time="2024-10-09T01:00:12.753132889Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:12.761372 containerd[1468]: time="2024-10-09T01:00:12.759556889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:12.761372 containerd[1468]: time="2024-10-09T01:00:12.760594920Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 3.059487225s"
Oct 9 01:00:12.761372 containerd[1468]: time="2024-10-09T01:00:12.760640155Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\""
Oct 9 01:00:12.824343 containerd[1468]: time="2024-10-09T01:00:12.823773755Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 01:00:13.474915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819358045.mount: Deactivated successfully.
Oct 9 01:00:14.207657 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 01:00:14.216885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:00:14.497184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:00:14.511532 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:00:14.642670 kubelet[2033]: E1009 01:00:14.642573 2033 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:00:14.648975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:00:14.649256 systemd[1]: kubelet.service: Failed with result 'exit-code'.
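The `Scheduled restart job, restart counter is at 2` entry shows systemd re-launching kubelet roughly every ten seconds after each config-file failure (failed at 00:59:53.699, restarted at 01:00:03.708; failed at 01:00:04.155, restarted at 01:00:14.207). That cadence is consistent with `[Service]` settings along these lines; this is an illustrative sketch written to the current directory, and the values are assumptions, not read from the actual kubelet.service unit on this host:

```shell
# Illustrative sketch of the kind of systemd drop-in that produces a
# ~10-second restart loop (Restart=/RestartSec= values are assumptions).
cat > 10-kubelet-restart.conf <<'EOF'
[Service]
Restart=always
RestartSec=10
EOF
cat 10-kubelet-restart.conf
```

On a live system the effective values can be read with `systemctl show kubelet.service -p Restart -p RestartUSec` rather than guessed from timestamps.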
Oct 9 01:00:15.139179 containerd[1468]: time="2024-10-09T01:00:15.137689587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:15.141488 containerd[1468]: time="2024-10-09T01:00:15.141395600Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Oct 9 01:00:15.144012 containerd[1468]: time="2024-10-09T01:00:15.143936116Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:15.149240 containerd[1468]: time="2024-10-09T01:00:15.149171030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:15.150699 containerd[1468]: time="2024-10-09T01:00:15.150619601Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.326775813s"
Oct 9 01:00:15.150699 containerd[1468]: time="2024-10-09T01:00:15.150695944Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 9 01:00:15.189231 containerd[1468]: time="2024-10-09T01:00:15.189135164Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 9 01:00:15.191631 systemd-resolved[1326]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2.
Oct 9 01:00:15.685328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1959648802.mount: Deactivated successfully.
Oct 9 01:00:15.695745 containerd[1468]: time="2024-10-09T01:00:15.694526607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:15.696113 containerd[1468]: time="2024-10-09T01:00:15.695978423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Oct 9 01:00:15.696943 containerd[1468]: time="2024-10-09T01:00:15.696914149Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:15.701313 containerd[1468]: time="2024-10-09T01:00:15.701236728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:15.703565 containerd[1468]: time="2024-10-09T01:00:15.703357862Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 514.114699ms"
Oct 9 01:00:15.703565 containerd[1468]: time="2024-10-09T01:00:15.703419970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 9 01:00:15.743651 containerd[1468]: time="2024-10-09T01:00:15.743605845Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Oct 9 01:00:16.368009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446648362.mount: Deactivated successfully.
Oct 9 01:00:18.444228 containerd[1468]: time="2024-10-09T01:00:18.444122496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:18.446477 containerd[1468]: time="2024-10-09T01:00:18.446395277Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Oct 9 01:00:18.449969 containerd[1468]: time="2024-10-09T01:00:18.449903302Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:18.455343 containerd[1468]: time="2024-10-09T01:00:18.455254663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:00:18.458638 containerd[1468]: time="2024-10-09T01:00:18.458556307Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.714904879s"
Oct 9 01:00:18.458638 containerd[1468]: time="2024-10-09T01:00:18.458631949Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Oct 9 01:00:21.821329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:00:21.833848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:00:21.870876 systemd[1]: Reloading requested from client PID 2161 ('systemctl') (unit session-7.scope)...
Oct 9 01:00:21.870896 systemd[1]: Reloading...
Oct 9 01:00:22.042401 zram_generator::config[2200]: No configuration found.
Oct 9 01:00:22.202920 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:00:22.337717 systemd[1]: Reloading finished in 466 ms.
Oct 9 01:00:22.416646 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 9 01:00:22.416746 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 9 01:00:22.417100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:00:22.424950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:00:22.595668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:00:22.600457 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 01:00:22.673940 kubelet[2255]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:00:22.673940 kubelet[2255]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 01:00:22.673940 kubelet[2255]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:00:22.674890 kubelet[2255]: I1009 01:00:22.674790 2255 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 01:00:23.500509 kubelet[2255]: I1009 01:00:23.500445 2255 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Oct 9 01:00:23.500509 kubelet[2255]: I1009 01:00:23.500500 2255 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 01:00:23.500823 kubelet[2255]: I1009 01:00:23.500797 2255 server.go:927] "Client rotation is on, will bootstrap in background"
Oct 9 01:00:23.525369 kubelet[2255]: I1009 01:00:23.525131 2255 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 01:00:23.526296 kubelet[2255]: E1009 01:00:23.526250 2255 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.110.225.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.110.225.158:6443: connect: connection refused
Oct 9 01:00:23.548584 kubelet[2255]: I1009 01:00:23.548403 2255 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 01:00:23.554716 kubelet[2255]: I1009 01:00:23.554261 2255 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 01:00:23.554716 kubelet[2255]: I1009 01:00:23.554382 2255 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4116.0.0-c-50f1e82448","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 01:00:23.555376 kubelet[2255]: I1009 01:00:23.555345 2255 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 01:00:23.555514 kubelet[2255]: I1009 01:00:23.555503 2255 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 01:00:23.555737 kubelet[2255]: I1009 01:00:23.555725 2255 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:00:23.559087 kubelet[2255]: I1009 01:00:23.559038 2255 kubelet.go:400] "Attempting to sync node with API server"
Oct 9 01:00:23.559417 kubelet[2255]: I1009 01:00:23.559246 2255 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 01:00:23.559417 kubelet[2255]: I1009 01:00:23.559285 2255 kubelet.go:312] "Adding apiserver pod source"
Oct 9 01:00:23.559417 kubelet[2255]: I1009 01:00:23.559305 2255 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 01:00:23.561527 kubelet[2255]: W1009 01:00:23.561439 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.225.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116.0.0-c-50f1e82448&limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused
Oct 9 01:00:23.561527 kubelet[2255]: E1009 01:00:23.561521 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.110.225.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116.0.0-c-50f1e82448&limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused
Oct 9 01:00:23.566210 kubelet[2255]: W1009 01:00:23.565976 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.225.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused
Oct 9 01:00:23.566210 kubelet[2255]: E1009 01:00:23.566058 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.110.225.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused
Oct 9 01:00:23.567327 kubelet[2255]: I1009 01:00:23.567249 2255 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 01:00:23.571346 kubelet[2255]: I1009 01:00:23.569298 2255 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 01:00:23.571346 kubelet[2255]: W1009 01:00:23.569424 2255 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 9 01:00:23.571346 kubelet[2255]: I1009 01:00:23.570464 2255 server.go:1264] "Started kubelet"
Oct 9 01:00:23.573419 kubelet[2255]: I1009 01:00:23.573365 2255 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 01:00:23.575416 kubelet[2255]: I1009 01:00:23.575385 2255 server.go:455] "Adding debug handlers to kubelet server"
Oct 9 01:00:23.578261 kubelet[2255]: I1009 01:00:23.577924 2255 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 01:00:23.578439 kubelet[2255]: I1009 01:00:23.578371 2255 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 01:00:23.579165 kubelet[2255]: E1009 01:00:23.578612 2255 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.110.225.158:6443/api/v1/namespaces/default/events\": dial tcp 143.110.225.158:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4116.0.0-c-50f1e82448.17fca30ac00bdf4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4116.0.0-c-50f1e82448,UID:ci-4116.0.0-c-50f1e82448,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4116.0.0-c-50f1e82448,},FirstTimestamp:2024-10-09 01:00:23.570431819 +0000 UTC m=+0.962735835,LastTimestamp:2024-10-09 01:00:23.570431819 +0000 UTC m=+0.962735835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4116.0.0-c-50f1e82448,}"
Oct 9 01:00:23.586990 kubelet[2255]: I1009 01:00:23.586740 2255 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 01:00:23.591131 kubelet[2255]: E1009 01:00:23.590710 2255 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116.0.0-c-50f1e82448\" not found"
Oct 9 01:00:23.591131 kubelet[2255]: I1009 01:00:23.590786 2255 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 01:00:23.591131 kubelet[2255]: I1009 01:00:23.590911 2255 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Oct 9 01:00:23.591131 kubelet[2255]: I1009 01:00:23.590988 2255 reconciler.go:26] "Reconciler: start to sync state"
Oct 9 01:00:23.592418 kubelet[2255]: W1009 01:00:23.591539 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.225.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused
Oct 9 01:00:23.592418 kubelet[2255]: E1009 01:00:23.591641 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.110.225.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused
Oct 9 01:00:23.592418 kubelet[2255]: E1009 01:00:23.591746 2255 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 9 01:00:23.592418 kubelet[2255]: E1009 01:00:23.592276 2255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.225.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116.0.0-c-50f1e82448?timeout=10s\": dial tcp 143.110.225.158:6443: connect: connection refused" interval="200ms"
Oct 9 01:00:23.596355 kubelet[2255]: I1009 01:00:23.596235 2255 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 01:00:23.597961 kubelet[2255]: I1009 01:00:23.597919 2255 factory.go:221] Registration of the containerd container factory successfully
Oct 9 01:00:23.597961 kubelet[2255]: I1009 01:00:23.597946 2255 factory.go:221] Registration of the systemd container factory successfully
Oct 9 01:00:23.618925 kubelet[2255]: I1009 01:00:23.618841 2255 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 9 01:00:23.621567 kubelet[2255]: I1009 01:00:23.621520 2255 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 9 01:00:23.622370 kubelet[2255]: I1009 01:00:23.621977 2255 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 9 01:00:23.622370 kubelet[2255]: I1009 01:00:23.622024 2255 kubelet.go:2337] "Starting kubelet main sync loop"
Oct 9 01:00:23.622370 kubelet[2255]: E1009 01:00:23.622121 2255 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 9 01:00:23.635542 kubelet[2255]: W1009 01:00:23.635299 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.225.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused
Oct 9 01:00:23.638500 kubelet[2255]: E1009 01:00:23.637417 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.110.225.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused
Oct 9 01:00:23.640759 kubelet[2255]: I1009 01:00:23.640729 2255 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 9 01:00:23.641001 kubelet[2255]: I1009 01:00:23.640984 2255 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 9 01:00:23.641108 kubelet[2255]: I1009 01:00:23.641095 2255 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:00:23.648994 kubelet[2255]: I1009 01:00:23.648944 2255 policy_none.go:49] "None policy: Start"
Oct 9 01:00:23.651307 kubelet[2255]: I1009 01:00:23.651267 2255 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 9 01:00:23.651480 kubelet[2255]: I1009 01:00:23.651340 2255 state_mem.go:35] "Initializing new in-memory state store"
Oct 9 01:00:23.669007 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 9 01:00:23.685346 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 9 01:00:23.692257 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 9 01:00:23.693052 kubelet[2255]: I1009 01:00:23.692260 2255 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.693605 kubelet[2255]: E1009 01:00:23.693404 2255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.225.158:6443/api/v1/nodes\": dial tcp 143.110.225.158:6443: connect: connection refused" node="ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.699592 kubelet[2255]: I1009 01:00:23.699492 2255 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 9 01:00:23.700301 kubelet[2255]: I1009 01:00:23.699843 2255 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 9 01:00:23.700301 kubelet[2255]: I1009 01:00:23.700029 2255 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 9 01:00:23.703007 kubelet[2255]: E1009 01:00:23.702573 2255 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4116.0.0-c-50f1e82448\" not found"
Oct 9 01:00:23.722447 kubelet[2255]: I1009 01:00:23.722347 2255 topology_manager.go:215] "Topology Admit Handler" podUID="d6c8b8c1bd8ebf6a25e0b5991b11bf3e" podNamespace="kube-system" podName="kube-apiserver-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.724281 kubelet[2255]: I1009 01:00:23.723580 2255 topology_manager.go:215] "Topology Admit Handler" podUID="d9a1330bc60aeeafad1dcb30b210eaec" podNamespace="kube-system" podName="kube-controller-manager-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.726278 kubelet[2255]: I1009 01:00:23.726237 2255 topology_manager.go:215] "Topology Admit Handler" podUID="24bc39273b76e3989973a01638d49ca1" podNamespace="kube-system" podName="kube-scheduler-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.738116 systemd[1]: Created slice kubepods-burstable-podd6c8b8c1bd8ebf6a25e0b5991b11bf3e.slice - libcontainer container kubepods-burstable-podd6c8b8c1bd8ebf6a25e0b5991b11bf3e.slice.
Oct 9 01:00:23.763163 systemd[1]: Created slice kubepods-burstable-podd9a1330bc60aeeafad1dcb30b210eaec.slice - libcontainer container kubepods-burstable-podd9a1330bc60aeeafad1dcb30b210eaec.slice.
Oct 9 01:00:23.772126 systemd[1]: Created slice kubepods-burstable-pod24bc39273b76e3989973a01638d49ca1.slice - libcontainer container kubepods-burstable-pod24bc39273b76e3989973a01638d49ca1.slice.
Oct 9 01:00:23.793206 kubelet[2255]: E1009 01:00:23.793131 2255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.225.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116.0.0-c-50f1e82448?timeout=10s\": dial tcp 143.110.225.158:6443: connect: connection refused" interval="400ms"
Oct 9 01:00:23.892166 kubelet[2255]: I1009 01:00:23.891794 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d9a1330bc60aeeafad1dcb30b210eaec-flexvolume-dir\") pod \"kube-controller-manager-ci-4116.0.0-c-50f1e82448\" (UID: \"d9a1330bc60aeeafad1dcb30b210eaec\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.892166 kubelet[2255]: I1009 01:00:23.891865 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9a1330bc60aeeafad1dcb30b210eaec-kubeconfig\") pod \"kube-controller-manager-ci-4116.0.0-c-50f1e82448\" (UID: \"d9a1330bc60aeeafad1dcb30b210eaec\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.892166 kubelet[2255]: I1009 01:00:23.891894 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9a1330bc60aeeafad1dcb30b210eaec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116.0.0-c-50f1e82448\" (UID: \"d9a1330bc60aeeafad1dcb30b210eaec\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.892166 kubelet[2255]: I1009 01:00:23.891918 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24bc39273b76e3989973a01638d49ca1-kubeconfig\") pod \"kube-scheduler-ci-4116.0.0-c-50f1e82448\" (UID: \"24bc39273b76e3989973a01638d49ca1\") " pod="kube-system/kube-scheduler-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.892166 kubelet[2255]: I1009 01:00:23.891942 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6c8b8c1bd8ebf6a25e0b5991b11bf3e-ca-certs\") pod \"kube-apiserver-ci-4116.0.0-c-50f1e82448\" (UID: \"d6c8b8c1bd8ebf6a25e0b5991b11bf3e\") " pod="kube-system/kube-apiserver-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.892608 kubelet[2255]: I1009 01:00:23.891967 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6c8b8c1bd8ebf6a25e0b5991b11bf3e-k8s-certs\") pod \"kube-apiserver-ci-4116.0.0-c-50f1e82448\" (UID: \"d6c8b8c1bd8ebf6a25e0b5991b11bf3e\") " pod="kube-system/kube-apiserver-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.892608 kubelet[2255]: I1009 01:00:23.891990 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9a1330bc60aeeafad1dcb30b210eaec-ca-certs\") pod \"kube-controller-manager-ci-4116.0.0-c-50f1e82448\" (UID: \"d9a1330bc60aeeafad1dcb30b210eaec\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.892608 kubelet[2255]: I1009 01:00:23.892013 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9a1330bc60aeeafad1dcb30b210eaec-k8s-certs\") pod \"kube-controller-manager-ci-4116.0.0-c-50f1e82448\" (UID: \"d9a1330bc60aeeafad1dcb30b210eaec\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.892608 kubelet[2255]: I1009 01:00:23.892039 2255 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6c8b8c1bd8ebf6a25e0b5991b11bf3e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116.0.0-c-50f1e82448\" (UID: \"d6c8b8c1bd8ebf6a25e0b5991b11bf3e\") " pod="kube-system/kube-apiserver-ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.895079 kubelet[2255]: I1009 01:00:23.895023 2255 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:23.895669 kubelet[2255]: E1009 01:00:23.895530 2255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.225.158:6443/api/v1/nodes\": dial tcp 143.110.225.158:6443: connect: connection refused" node="ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:24.058480 kubelet[2255]: E1009 01:00:24.057489 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:00:24.058714 containerd[1468]: time="2024-10-09T01:00:24.058667403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4116.0.0-c-50f1e82448,Uid:d6c8b8c1bd8ebf6a25e0b5991b11bf3e,Namespace:kube-system,Attempt:0,}"
Oct 9 01:00:24.070176 kubelet[2255]: E1009 01:00:24.069794 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:00:24.070480 containerd[1468]: time="2024-10-09T01:00:24.070436424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116.0.0-c-50f1e82448,Uid:d9a1330bc60aeeafad1dcb30b210eaec,Namespace:kube-system,Attempt:0,}"
Oct 9 01:00:24.076112 kubelet[2255]: E1009 01:00:24.076069 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:00:24.077378 containerd[1468]: time="2024-10-09T01:00:24.076961738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116.0.0-c-50f1e82448,Uid:24bc39273b76e3989973a01638d49ca1,Namespace:kube-system,Attempt:0,}"
Oct 9 01:00:24.194223 kubelet[2255]: E1009 01:00:24.194170 2255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.225.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116.0.0-c-50f1e82448?timeout=10s\": dial tcp 143.110.225.158:6443: connect: connection refused" interval="800ms"
Oct 9 01:00:24.297231 kubelet[2255]: I1009 01:00:24.296736 2255 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:24.297231 kubelet[2255]: E1009 01:00:24.297170 2255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.225.158:6443/api/v1/nodes\": dial tcp 143.110.225.158:6443: connect: connection refused" node="ci-4116.0.0-c-50f1e82448"
Oct 9 01:00:24.653835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667458870.mount: Deactivated successfully.
Oct 9 01:00:24.663223 containerd[1468]: time="2024-10-09T01:00:24.662311995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 01:00:24.664192 containerd[1468]: time="2024-10-09T01:00:24.664146094Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Oct 9 01:00:24.670808 containerd[1468]: time="2024-10-09T01:00:24.670738912Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 01:00:24.673036 containerd[1468]: time="2024-10-09T01:00:24.672963407Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 01:00:24.673443 containerd[1468]: time="2024-10-09T01:00:24.673391748Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 01:00:24.676198 containerd[1468]: time="2024-10-09T01:00:24.676145082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 01:00:24.677507 containerd[1468]: time="2024-10-09T01:00:24.677360129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 9 01:00:24.678097 containerd[1468]: time="2024-10-09T01:00:24.677978804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 9 01:00:24.682434 containerd[1468]: time="2024-10-09T01:00:24.682307611Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 623.519344ms"
Oct 9 01:00:24.687353 containerd[1468]: time="2024-10-09T01:00:24.686871830Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.777462ms"
Oct 9 01:00:24.699081 containerd[1468]: time="2024-10-09T01:00:24.699017251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 628.456857ms"
Oct 9 01:00:24.823560 kubelet[2255]: W1009 01:00:24.823146 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.225.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused
Oct 9 01:00:24.823560 kubelet[2255]: E1009 01:00:24.823242 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.110.225.158:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused
Oct 9 01:00:24.928906 containerd[1468]: time="2024-10-09T01:00:24.927915847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:00:24.928906 containerd[1468]: time="2024-10-09T01:00:24.928065145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:00:24.928906 containerd[1468]: time="2024-10-09T01:00:24.928102334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:00:24.928906 containerd[1468]: time="2024-10-09T01:00:24.928386732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:00:24.933437 containerd[1468]: time="2024-10-09T01:00:24.933209162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:00:24.933437 containerd[1468]: time="2024-10-09T01:00:24.933295864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:00:24.933437 containerd[1468]: time="2024-10-09T01:00:24.933335273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:00:24.933989 containerd[1468]: time="2024-10-09T01:00:24.933465409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:00:24.943014 containerd[1468]: time="2024-10-09T01:00:24.941220030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:00:24.943014 containerd[1468]: time="2024-10-09T01:00:24.941333089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:00:24.943014 containerd[1468]: time="2024-10-09T01:00:24.941372364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:00:24.943014 containerd[1468]: time="2024-10-09T01:00:24.942751081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:00:24.982235 systemd[1]: Started cri-containerd-c2e8a948948438d04e1c2185d04017916bdee356ef82c398a44177d315a01834.scope - libcontainer container c2e8a948948438d04e1c2185d04017916bdee356ef82c398a44177d315a01834.
Oct 9 01:00:24.995010 systemd[1]: Started cri-containerd-05dc7033f3d452bae9715e5ba1884d4c64b6264a9894e6778cab69eff54c8cf2.scope - libcontainer container 05dc7033f3d452bae9715e5ba1884d4c64b6264a9894e6778cab69eff54c8cf2.
Oct 9 01:00:24.998886 kubelet[2255]: E1009 01:00:24.995118 2255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.225.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116.0.0-c-50f1e82448?timeout=10s\": dial tcp 143.110.225.158:6443: connect: connection refused" interval="1.6s"
Oct 9 01:00:25.005219 systemd[1]: Started cri-containerd-9931cba4c8504d88cca09b38445ff501939d01dae0c8e331a69fc05723607a8c.scope - libcontainer container 9931cba4c8504d88cca09b38445ff501939d01dae0c8e331a69fc05723607a8c.
Oct 9 01:00:25.007388 kubelet[2255]: W1009 01:00:25.006908 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.225.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused Oct 9 01:00:25.007388 kubelet[2255]: E1009 01:00:25.006983 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.110.225.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused Oct 9 01:00:25.084502 kubelet[2255]: W1009 01:00:25.084058 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.225.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused Oct 9 01:00:25.084502 kubelet[2255]: E1009 01:00:25.084264 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.110.225.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused Oct 9 01:00:25.101270 kubelet[2255]: I1009 01:00:25.100529 2255 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116.0.0-c-50f1e82448" Oct 9 01:00:25.101270 kubelet[2255]: E1009 01:00:25.101202 2255 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.225.158:6443/api/v1/nodes\": dial tcp 143.110.225.158:6443: connect: connection refused" node="ci-4116.0.0-c-50f1e82448" Oct 9 01:00:25.108689 containerd[1468]: time="2024-10-09T01:00:25.108638143Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4116.0.0-c-50f1e82448,Uid:d6c8b8c1bd8ebf6a25e0b5991b11bf3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2e8a948948438d04e1c2185d04017916bdee356ef82c398a44177d315a01834\"" Oct 9 01:00:25.112672 kubelet[2255]: E1009 01:00:25.112243 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:25.126635 containerd[1468]: time="2024-10-09T01:00:25.126457419Z" level=info msg="CreateContainer within sandbox \"c2e8a948948438d04e1c2185d04017916bdee356ef82c398a44177d315a01834\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 01:00:25.128748 containerd[1468]: time="2024-10-09T01:00:25.128617861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116.0.0-c-50f1e82448,Uid:d9a1330bc60aeeafad1dcb30b210eaec,Namespace:kube-system,Attempt:0,} returns sandbox id \"05dc7033f3d452bae9715e5ba1884d4c64b6264a9894e6778cab69eff54c8cf2\"" Oct 9 01:00:25.130878 kubelet[2255]: E1009 01:00:25.130602 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:25.134830 containerd[1468]: time="2024-10-09T01:00:25.134534879Z" level=info msg="CreateContainer within sandbox \"05dc7033f3d452bae9715e5ba1884d4c64b6264a9894e6778cab69eff54c8cf2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 01:00:25.151572 containerd[1468]: time="2024-10-09T01:00:25.149854858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116.0.0-c-50f1e82448,Uid:24bc39273b76e3989973a01638d49ca1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9931cba4c8504d88cca09b38445ff501939d01dae0c8e331a69fc05723607a8c\"" Oct 9 01:00:25.153535 kubelet[2255]: E1009 01:00:25.152381 2255 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:25.154683 kubelet[2255]: W1009 01:00:25.154594 2255 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.225.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116.0.0-c-50f1e82448&limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused Oct 9 01:00:25.155648 kubelet[2255]: E1009 01:00:25.155470 2255 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.110.225.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116.0.0-c-50f1e82448&limit=500&resourceVersion=0": dial tcp 143.110.225.158:6443: connect: connection refused Oct 9 01:00:25.158685 containerd[1468]: time="2024-10-09T01:00:25.158446265Z" level=info msg="CreateContainer within sandbox \"9931cba4c8504d88cca09b38445ff501939d01dae0c8e331a69fc05723607a8c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 01:00:25.174760 containerd[1468]: time="2024-10-09T01:00:25.174687199Z" level=info msg="CreateContainer within sandbox \"c2e8a948948438d04e1c2185d04017916bdee356ef82c398a44177d315a01834\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f67d28e8158379577e394a92361d9fa8dd276ba3c66e8c0e6402bf3002db18b2\"" Oct 9 01:00:25.176478 containerd[1468]: time="2024-10-09T01:00:25.176431337Z" level=info msg="StartContainer for \"f67d28e8158379577e394a92361d9fa8dd276ba3c66e8c0e6402bf3002db18b2\"" Oct 9 01:00:25.184294 containerd[1468]: time="2024-10-09T01:00:25.182627557Z" level=info msg="CreateContainer within sandbox \"05dc7033f3d452bae9715e5ba1884d4c64b6264a9894e6778cab69eff54c8cf2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"5909ad1b8f9c63634e9a5a4e38e7c7aab6c543ff61aafc9262076a37bb996063\"" Oct 9 01:00:25.185923 containerd[1468]: time="2024-10-09T01:00:25.185869133Z" level=info msg="StartContainer for \"5909ad1b8f9c63634e9a5a4e38e7c7aab6c543ff61aafc9262076a37bb996063\"" Oct 9 01:00:25.203385 containerd[1468]: time="2024-10-09T01:00:25.203201958Z" level=info msg="CreateContainer within sandbox \"9931cba4c8504d88cca09b38445ff501939d01dae0c8e331a69fc05723607a8c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"82471394da358e5183ee26252fe9124b7219758fcf5ad0ef6ef704d245127bda\"" Oct 9 01:00:25.204591 containerd[1468]: time="2024-10-09T01:00:25.204545204Z" level=info msg="StartContainer for \"82471394da358e5183ee26252fe9124b7219758fcf5ad0ef6ef704d245127bda\"" Oct 9 01:00:25.258653 systemd[1]: Started cri-containerd-5909ad1b8f9c63634e9a5a4e38e7c7aab6c543ff61aafc9262076a37bb996063.scope - libcontainer container 5909ad1b8f9c63634e9a5a4e38e7c7aab6c543ff61aafc9262076a37bb996063. Oct 9 01:00:25.260923 systemd[1]: Started cri-containerd-f67d28e8158379577e394a92361d9fa8dd276ba3c66e8c0e6402bf3002db18b2.scope - libcontainer container f67d28e8158379577e394a92361d9fa8dd276ba3c66e8c0e6402bf3002db18b2. Oct 9 01:00:25.269818 systemd[1]: Started cri-containerd-82471394da358e5183ee26252fe9124b7219758fcf5ad0ef6ef704d245127bda.scope - libcontainer container 82471394da358e5183ee26252fe9124b7219758fcf5ad0ef6ef704d245127bda. 
Oct 9 01:00:25.399794 containerd[1468]: time="2024-10-09T01:00:25.399698540Z" level=info msg="StartContainer for \"5909ad1b8f9c63634e9a5a4e38e7c7aab6c543ff61aafc9262076a37bb996063\" returns successfully" Oct 9 01:00:25.413418 containerd[1468]: time="2024-10-09T01:00:25.413347124Z" level=info msg="StartContainer for \"82471394da358e5183ee26252fe9124b7219758fcf5ad0ef6ef704d245127bda\" returns successfully" Oct 9 01:00:25.423464 containerd[1468]: time="2024-10-09T01:00:25.423285113Z" level=info msg="StartContainer for \"f67d28e8158379577e394a92361d9fa8dd276ba3c66e8c0e6402bf3002db18b2\" returns successfully" Oct 9 01:00:25.661361 kubelet[2255]: E1009 01:00:25.654688 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:25.661361 kubelet[2255]: E1009 01:00:25.658045 2255 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.110.225.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.110.225.158:6443: connect: connection refused Oct 9 01:00:25.663335 kubelet[2255]: E1009 01:00:25.663275 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:25.669187 kubelet[2255]: E1009 01:00:25.669145 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:26.678266 kubelet[2255]: E1009 01:00:26.678204 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 
67.207.67.2" Oct 9 01:00:26.703697 kubelet[2255]: I1009 01:00:26.703002 2255 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116.0.0-c-50f1e82448" Oct 9 01:00:27.872694 kubelet[2255]: E1009 01:00:27.872635 2255 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4116.0.0-c-50f1e82448\" not found" node="ci-4116.0.0-c-50f1e82448" Oct 9 01:00:28.013353 kubelet[2255]: I1009 01:00:28.010761 2255 kubelet_node_status.go:76] "Successfully registered node" node="ci-4116.0.0-c-50f1e82448" Oct 9 01:00:28.038114 kubelet[2255]: E1009 01:00:28.038053 2255 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116.0.0-c-50f1e82448\" not found" Oct 9 01:00:28.139283 kubelet[2255]: E1009 01:00:28.139121 2255 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116.0.0-c-50f1e82448\" not found" Oct 9 01:00:28.240053 kubelet[2255]: E1009 01:00:28.239973 2255 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116.0.0-c-50f1e82448\" not found" Oct 9 01:00:28.340266 kubelet[2255]: E1009 01:00:28.340201 2255 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116.0.0-c-50f1e82448\" not found" Oct 9 01:00:28.565106 kubelet[2255]: I1009 01:00:28.565015 2255 apiserver.go:52] "Watching apiserver" Oct 9 01:00:28.591571 kubelet[2255]: I1009 01:00:28.591464 2255 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 01:00:29.724699 systemd-resolved[1326]: Clock change detected. Flushing caches. Oct 9 01:00:29.724900 systemd-timesyncd[1339]: Contacted time server 74.208.25.46:123 (2.flatcar.pool.ntp.org). Oct 9 01:00:29.724980 systemd-timesyncd[1339]: Initial clock synchronization to Wed 2024-10-09 01:00:29.724535 UTC. 
Oct 9 01:00:30.236805 kubelet[2255]: W1009 01:00:30.235154 2255 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 01:00:30.236805 kubelet[2255]: E1009 01:00:30.236113 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:30.624296 kubelet[2255]: E1009 01:00:30.624053 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:31.530616 systemd[1]: Reloading requested from client PID 2527 ('systemctl') (unit session-7.scope)... Oct 9 01:00:31.530676 systemd[1]: Reloading... Oct 9 01:00:31.646680 zram_generator::config[2564]: No configuration found. Oct 9 01:00:31.838669 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:00:31.953851 systemd[1]: Reloading finished in 422 ms. Oct 9 01:00:32.014820 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:00:32.017888 kubelet[2255]: I1009 01:00:32.016067 2255 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:00:32.035519 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 01:00:32.035830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:00:32.035914 systemd[1]: kubelet.service: Consumed 1.449s CPU time, 110.6M memory peak, 0B memory swap peak. Oct 9 01:00:32.047679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 9 01:00:32.243090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:00:32.261235 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:00:32.354618 kubelet[2617]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:00:32.355109 kubelet[2617]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:00:32.355197 kubelet[2617]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:00:32.355355 kubelet[2617]: I1009 01:00:32.355321 2617 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:00:32.364166 kubelet[2617]: I1009 01:00:32.364124 2617 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 9 01:00:32.364396 kubelet[2617]: I1009 01:00:32.364379 2617 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:00:32.364796 kubelet[2617]: I1009 01:00:32.364780 2617 server.go:927] "Client rotation is on, will bootstrap in background" Oct 9 01:00:32.369149 kubelet[2617]: I1009 01:00:32.369115 2617 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 9 01:00:32.372580 kubelet[2617]: I1009 01:00:32.372128 2617 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:00:32.394245 kubelet[2617]: I1009 01:00:32.394203 2617 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 01:00:32.396670 kubelet[2617]: I1009 01:00:32.395075 2617 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:00:32.396670 kubelet[2617]: I1009 01:00:32.395141 2617 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4116.0.0-c-50f1e82448","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMe
moryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:00:32.396670 kubelet[2617]: I1009 01:00:32.395535 2617 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:00:32.396670 kubelet[2617]: I1009 01:00:32.395549 2617 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:00:32.397055 kubelet[2617]: I1009 01:00:32.395608 2617 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:00:32.397055 kubelet[2617]: I1009 01:00:32.395740 2617 kubelet.go:400] "Attempting to sync node with API server" Oct 9 01:00:32.397055 kubelet[2617]: I1009 01:00:32.395753 2617 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:00:32.397055 kubelet[2617]: I1009 01:00:32.395778 2617 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:00:32.397055 kubelet[2617]: I1009 01:00:32.395797 2617 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:00:32.401150 kubelet[2617]: I1009 01:00:32.401117 2617 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:00:32.401619 kubelet[2617]: I1009 01:00:32.401598 2617 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:00:32.402292 kubelet[2617]: I1009 01:00:32.402271 2617 server.go:1264] "Started kubelet" Oct 9 01:00:32.407333 kubelet[2617]: I1009 01:00:32.407301 2617 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:00:32.421254 kubelet[2617]: I1009 01:00:32.421179 2617 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:00:32.424255 kubelet[2617]: I1009 01:00:32.424225 2617 server.go:455] "Adding debug handlers to kubelet server" Oct 9 01:00:32.432008 kubelet[2617]: I1009 01:00:32.425706 2617 volume_manager.go:291] "Starting Kubelet Volume 
Manager" Oct 9 01:00:32.432008 kubelet[2617]: I1009 01:00:32.430975 2617 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:00:32.432008 kubelet[2617]: I1009 01:00:32.431228 2617 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:00:32.432008 kubelet[2617]: I1009 01:00:32.425730 2617 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 9 01:00:32.432008 kubelet[2617]: I1009 01:00:32.431595 2617 reconciler.go:26] "Reconciler: start to sync state" Oct 9 01:00:32.436726 kubelet[2617]: I1009 01:00:32.436696 2617 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:00:32.436989 kubelet[2617]: I1009 01:00:32.436969 2617 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:00:32.438062 kubelet[2617]: E1009 01:00:32.438038 2617 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:00:32.442139 kubelet[2617]: I1009 01:00:32.442111 2617 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:00:32.450503 kubelet[2617]: I1009 01:00:32.450458 2617 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:00:32.455080 kubelet[2617]: I1009 01:00:32.454123 2617 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 9 01:00:32.455296 kubelet[2617]: I1009 01:00:32.455279 2617 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:00:32.455390 kubelet[2617]: I1009 01:00:32.455381 2617 kubelet.go:2337] "Starting kubelet main sync loop" Oct 9 01:00:32.455518 kubelet[2617]: E1009 01:00:32.455499 2617 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:00:32.531542 kubelet[2617]: I1009 01:00:32.529950 2617 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.544678 kubelet[2617]: I1009 01:00:32.544604 2617 kubelet_node_status.go:112] "Node was previously registered" node="ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.546198 kubelet[2617]: I1009 01:00:32.546099 2617 kubelet_node_status.go:76] "Successfully registered node" node="ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.556260 kubelet[2617]: E1009 01:00:32.555750 2617 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 01:00:32.576499 kubelet[2617]: I1009 01:00:32.576468 2617 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:00:32.577201 kubelet[2617]: I1009 01:00:32.576705 2617 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:00:32.577201 kubelet[2617]: I1009 01:00:32.576741 2617 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:00:32.577201 kubelet[2617]: I1009 01:00:32.576918 2617 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 01:00:32.577201 kubelet[2617]: I1009 01:00:32.576929 2617 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 01:00:32.577201 kubelet[2617]: I1009 01:00:32.576949 2617 policy_none.go:49] "None policy: Start" Oct 9 01:00:32.578697 kubelet[2617]: I1009 01:00:32.578181 2617 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:00:32.578697 
kubelet[2617]: I1009 01:00:32.578218 2617 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:00:32.578697 kubelet[2617]: I1009 01:00:32.578511 2617 state_mem.go:75] "Updated machine memory state" Oct 9 01:00:32.595707 kubelet[2617]: I1009 01:00:32.592393 2617 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:00:32.595707 kubelet[2617]: I1009 01:00:32.592692 2617 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 9 01:00:32.595707 kubelet[2617]: I1009 01:00:32.593535 2617 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:00:32.756071 kubelet[2617]: I1009 01:00:32.755993 2617 topology_manager.go:215] "Topology Admit Handler" podUID="d6c8b8c1bd8ebf6a25e0b5991b11bf3e" podNamespace="kube-system" podName="kube-apiserver-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.756938 kubelet[2617]: I1009 01:00:32.756905 2617 topology_manager.go:215] "Topology Admit Handler" podUID="d9a1330bc60aeeafad1dcb30b210eaec" podNamespace="kube-system" podName="kube-controller-manager-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.757190 kubelet[2617]: I1009 01:00:32.757169 2617 topology_manager.go:215] "Topology Admit Handler" podUID="24bc39273b76e3989973a01638d49ca1" podNamespace="kube-system" podName="kube-scheduler-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.781243 kubelet[2617]: W1009 01:00:32.780779 2617 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 01:00:32.781243 kubelet[2617]: E1009 01:00:32.780882 2617 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4116.0.0-c-50f1e82448\" already exists" pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.783906 kubelet[2617]: W1009 01:00:32.782209 2617 warnings.go:70] metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 01:00:32.783906 kubelet[2617]: E1009 01:00:32.782282 2617 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4116.0.0-c-50f1e82448\" already exists" pod="kube-system/kube-scheduler-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.783906 kubelet[2617]: W1009 01:00:32.783234 2617 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 01:00:32.835781 kubelet[2617]: I1009 01:00:32.835569 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/24bc39273b76e3989973a01638d49ca1-kubeconfig\") pod \"kube-scheduler-ci-4116.0.0-c-50f1e82448\" (UID: \"24bc39273b76e3989973a01638d49ca1\") " pod="kube-system/kube-scheduler-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.836327 kubelet[2617]: I1009 01:00:32.836032 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6c8b8c1bd8ebf6a25e0b5991b11bf3e-ca-certs\") pod \"kube-apiserver-ci-4116.0.0-c-50f1e82448\" (UID: \"d6c8b8c1bd8ebf6a25e0b5991b11bf3e\") " pod="kube-system/kube-apiserver-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.836327 kubelet[2617]: I1009 01:00:32.836206 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6c8b8c1bd8ebf6a25e0b5991b11bf3e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116.0.0-c-50f1e82448\" (UID: \"d6c8b8c1bd8ebf6a25e0b5991b11bf3e\") " pod="kube-system/kube-apiserver-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.836689 kubelet[2617]: I1009 01:00:32.836291 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9a1330bc60aeeafad1dcb30b210eaec-ca-certs\") pod \"kube-controller-manager-ci-4116.0.0-c-50f1e82448\" (UID: \"d9a1330bc60aeeafad1dcb30b210eaec\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.836689 kubelet[2617]: I1009 01:00:32.836511 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d9a1330bc60aeeafad1dcb30b210eaec-flexvolume-dir\") pod \"kube-controller-manager-ci-4116.0.0-c-50f1e82448\" (UID: \"d9a1330bc60aeeafad1dcb30b210eaec\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.837207 kubelet[2617]: I1009 01:00:32.836884 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6c8b8c1bd8ebf6a25e0b5991b11bf3e-k8s-certs\") pod \"kube-apiserver-ci-4116.0.0-c-50f1e82448\" (UID: \"d6c8b8c1bd8ebf6a25e0b5991b11bf3e\") " pod="kube-system/kube-apiserver-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.837207 kubelet[2617]: I1009 01:00:32.837013 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9a1330bc60aeeafad1dcb30b210eaec-k8s-certs\") pod \"kube-controller-manager-ci-4116.0.0-c-50f1e82448\" (UID: \"d9a1330bc60aeeafad1dcb30b210eaec\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.837207 kubelet[2617]: I1009 01:00:32.837103 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9a1330bc60aeeafad1dcb30b210eaec-kubeconfig\") pod \"kube-controller-manager-ci-4116.0.0-c-50f1e82448\" (UID: \"d9a1330bc60aeeafad1dcb30b210eaec\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:32.837207 
kubelet[2617]: I1009 01:00:32.837168 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9a1330bc60aeeafad1dcb30b210eaec-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116.0.0-c-50f1e82448\" (UID: \"d9a1330bc60aeeafad1dcb30b210eaec\") " pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:33.083163 kubelet[2617]: E1009 01:00:33.082282 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:33.083163 kubelet[2617]: E1009 01:00:33.083163 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:33.084977 kubelet[2617]: E1009 01:00:33.084900 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:33.402060 kubelet[2617]: I1009 01:00:33.400856 2617 apiserver.go:52] "Watching apiserver" Oct 9 01:00:33.432102 kubelet[2617]: I1009 01:00:33.431983 2617 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 9 01:00:33.525665 kubelet[2617]: E1009 01:00:33.524406 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:33.526467 kubelet[2617]: E1009 01:00:33.525935 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:33.546383 kubelet[2617]: W1009 01:00:33.546334 
2617 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 9 01:00:33.546563 kubelet[2617]: E1009 01:00:33.546426 2617 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4116.0.0-c-50f1e82448\" already exists" pod="kube-system/kube-apiserver-ci-4116.0.0-c-50f1e82448" Oct 9 01:00:33.547690 kubelet[2617]: E1009 01:00:33.547304 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:33.719721 kubelet[2617]: I1009 01:00:33.719449 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4116.0.0-c-50f1e82448" podStartSLOduration=1.71942071 podStartE2EDuration="1.71942071s" podCreationTimestamp="2024-10-09 01:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:00:33.645780748 +0000 UTC m=+1.378851122" watchObservedRunningTime="2024-10-09 01:00:33.71942071 +0000 UTC m=+1.452491084" Oct 9 01:00:33.805125 kubelet[2617]: I1009 01:00:33.804928 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4116.0.0-c-50f1e82448" podStartSLOduration=1.804901292 podStartE2EDuration="1.804901292s" podCreationTimestamp="2024-10-09 01:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:00:33.720758609 +0000 UTC m=+1.453828989" watchObservedRunningTime="2024-10-09 01:00:33.804901292 +0000 UTC m=+1.537971668" Oct 9 01:00:34.527190 kubelet[2617]: E1009 01:00:34.527121 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:34.701011 kubelet[2617]: E1009 01:00:34.700885 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:37.517186 update_engine[1450]: I20241009 01:00:37.516992 1450 update_attempter.cc:509] Updating boot flags... Oct 9 01:00:37.556777 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2686) Oct 9 01:00:37.641990 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2688) Oct 9 01:00:37.715056 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2688) Oct 9 01:00:38.310177 sudo[1656]: pam_unix(sudo:session): session closed for user root Oct 9 01:00:38.315469 sshd[1653]: pam_unix(sshd:session): session closed for user core Oct 9 01:00:38.323991 systemd[1]: sshd@6-143.110.225.158:22-139.178.68.195:48996.service: Deactivated successfully. Oct 9 01:00:38.327192 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 01:00:38.327901 systemd[1]: session-7.scope: Consumed 6.249s CPU time, 181.5M memory peak, 0B memory swap peak. Oct 9 01:00:38.328646 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Oct 9 01:00:38.329948 systemd-logind[1449]: Removed session 7. 
Oct 9 01:00:38.895953 kubelet[2617]: E1009 01:00:38.895912 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:38.920288 kubelet[2617]: I1009 01:00:38.920189 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4116.0.0-c-50f1e82448" podStartSLOduration=8.920167887 podStartE2EDuration="8.920167887s" podCreationTimestamp="2024-10-09 01:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:00:33.806762214 +0000 UTC m=+1.539832590" watchObservedRunningTime="2024-10-09 01:00:38.920167887 +0000 UTC m=+6.653238259" Oct 9 01:00:39.543369 kubelet[2617]: E1009 01:00:39.543186 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:40.073469 kubelet[2617]: E1009 01:00:40.073433 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:40.544771 kubelet[2617]: E1009 01:00:40.544731 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:44.707965 kubelet[2617]: E1009 01:00:44.707461 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:45.648585 kubelet[2617]: I1009 01:00:45.648506 2617 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 
01:00:45.649755 containerd[1468]: time="2024-10-09T01:00:45.649690437Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 01:00:45.652073 kubelet[2617]: I1009 01:00:45.650042 2617 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 01:00:46.375710 kubelet[2617]: I1009 01:00:46.375653 2617 topology_manager.go:215] "Topology Admit Handler" podUID="1d28a9da-0652-4595-8ac5-87578605634c" podNamespace="kube-system" podName="kube-proxy-cbm5d" Oct 9 01:00:46.385357 systemd[1]: Created slice kubepods-besteffort-pod1d28a9da_0652_4595_8ac5_87578605634c.slice - libcontainer container kubepods-besteffort-pod1d28a9da_0652_4595_8ac5_87578605634c.slice. Oct 9 01:00:46.438674 kubelet[2617]: I1009 01:00:46.438136 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1d28a9da-0652-4595-8ac5-87578605634c-kube-proxy\") pod \"kube-proxy-cbm5d\" (UID: \"1d28a9da-0652-4595-8ac5-87578605634c\") " pod="kube-system/kube-proxy-cbm5d" Oct 9 01:00:46.438674 kubelet[2617]: I1009 01:00:46.438191 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d28a9da-0652-4595-8ac5-87578605634c-xtables-lock\") pod \"kube-proxy-cbm5d\" (UID: \"1d28a9da-0652-4595-8ac5-87578605634c\") " pod="kube-system/kube-proxy-cbm5d" Oct 9 01:00:46.438674 kubelet[2617]: I1009 01:00:46.438215 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d28a9da-0652-4595-8ac5-87578605634c-lib-modules\") pod \"kube-proxy-cbm5d\" (UID: \"1d28a9da-0652-4595-8ac5-87578605634c\") " pod="kube-system/kube-proxy-cbm5d" Oct 9 01:00:46.438674 kubelet[2617]: I1009 01:00:46.438236 2617 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7r7g\" (UniqueName: \"kubernetes.io/projected/1d28a9da-0652-4595-8ac5-87578605634c-kube-api-access-p7r7g\") pod \"kube-proxy-cbm5d\" (UID: \"1d28a9da-0652-4595-8ac5-87578605634c\") " pod="kube-system/kube-proxy-cbm5d" Oct 9 01:00:46.695059 kubelet[2617]: E1009 01:00:46.694593 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:46.696793 containerd[1468]: time="2024-10-09T01:00:46.696725061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cbm5d,Uid:1d28a9da-0652-4595-8ac5-87578605634c,Namespace:kube-system,Attempt:0,}" Oct 9 01:00:46.752294 containerd[1468]: time="2024-10-09T01:00:46.752035259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:00:46.752294 containerd[1468]: time="2024-10-09T01:00:46.752157787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:00:46.752294 containerd[1468]: time="2024-10-09T01:00:46.752176088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:00:46.754968 containerd[1468]: time="2024-10-09T01:00:46.752977772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:00:46.808948 systemd[1]: Started cri-containerd-65f6e8cf4924920b3cda862764ca052f986c53f07bfaa77b912e01c0e8cb868b.scope - libcontainer container 65f6e8cf4924920b3cda862764ca052f986c53f07bfaa77b912e01c0e8cb868b. 
Oct 9 01:00:46.867704 containerd[1468]: time="2024-10-09T01:00:46.867548137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cbm5d,Uid:1d28a9da-0652-4595-8ac5-87578605634c,Namespace:kube-system,Attempt:0,} returns sandbox id \"65f6e8cf4924920b3cda862764ca052f986c53f07bfaa77b912e01c0e8cb868b\"" Oct 9 01:00:46.870009 kubelet[2617]: I1009 01:00:46.869796 2617 topology_manager.go:215] "Topology Admit Handler" podUID="20aab211-f5be-45dd-9c5a-5c1c8d1c290b" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-j77nq" Oct 9 01:00:46.871484 kubelet[2617]: E1009 01:00:46.871415 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:46.883661 containerd[1468]: time="2024-10-09T01:00:46.883246243Z" level=info msg="CreateContainer within sandbox \"65f6e8cf4924920b3cda862764ca052f986c53f07bfaa77b912e01c0e8cb868b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 01:00:46.892714 systemd[1]: Created slice kubepods-besteffort-pod20aab211_f5be_45dd_9c5a_5c1c8d1c290b.slice - libcontainer container kubepods-besteffort-pod20aab211_f5be_45dd_9c5a_5c1c8d1c290b.slice. 
Oct 9 01:00:46.942263 kubelet[2617]: I1009 01:00:46.942210 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4phpz\" (UniqueName: \"kubernetes.io/projected/20aab211-f5be-45dd-9c5a-5c1c8d1c290b-kube-api-access-4phpz\") pod \"tigera-operator-77f994b5bb-j77nq\" (UID: \"20aab211-f5be-45dd-9c5a-5c1c8d1c290b\") " pod="tigera-operator/tigera-operator-77f994b5bb-j77nq" Oct 9 01:00:46.942446 kubelet[2617]: I1009 01:00:46.942313 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/20aab211-f5be-45dd-9c5a-5c1c8d1c290b-var-lib-calico\") pod \"tigera-operator-77f994b5bb-j77nq\" (UID: \"20aab211-f5be-45dd-9c5a-5c1c8d1c290b\") " pod="tigera-operator/tigera-operator-77f994b5bb-j77nq" Oct 9 01:00:46.952916 containerd[1468]: time="2024-10-09T01:00:46.948893815Z" level=info msg="CreateContainer within sandbox \"65f6e8cf4924920b3cda862764ca052f986c53f07bfaa77b912e01c0e8cb868b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"888ebe59e950e1639114809da2d4ec8c8fa399fc7d57356bc3c96ca170eeb7fc\"" Oct 9 01:00:46.952916 containerd[1468]: time="2024-10-09T01:00:46.950176798Z" level=info msg="StartContainer for \"888ebe59e950e1639114809da2d4ec8c8fa399fc7d57356bc3c96ca170eeb7fc\"" Oct 9 01:00:46.991996 systemd[1]: Started cri-containerd-888ebe59e950e1639114809da2d4ec8c8fa399fc7d57356bc3c96ca170eeb7fc.scope - libcontainer container 888ebe59e950e1639114809da2d4ec8c8fa399fc7d57356bc3c96ca170eeb7fc. 
Oct 9 01:00:47.042920 containerd[1468]: time="2024-10-09T01:00:47.041702425Z" level=info msg="StartContainer for \"888ebe59e950e1639114809da2d4ec8c8fa399fc7d57356bc3c96ca170eeb7fc\" returns successfully" Oct 9 01:00:47.202386 containerd[1468]: time="2024-10-09T01:00:47.202311081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-j77nq,Uid:20aab211-f5be-45dd-9c5a-5c1c8d1c290b,Namespace:tigera-operator,Attempt:0,}" Oct 9 01:00:47.246690 containerd[1468]: time="2024-10-09T01:00:47.245622190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:00:47.246690 containerd[1468]: time="2024-10-09T01:00:47.245747479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:00:47.246690 containerd[1468]: time="2024-10-09T01:00:47.245760406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:00:47.247364 containerd[1468]: time="2024-10-09T01:00:47.246983771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:00:47.284991 systemd[1]: Started cri-containerd-565b353a25b276faf3e9772218daa399421bd18f8094764e1652a462c320aaf8.scope - libcontainer container 565b353a25b276faf3e9772218daa399421bd18f8094764e1652a462c320aaf8. 
Oct 9 01:00:47.349371 containerd[1468]: time="2024-10-09T01:00:47.349325256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-j77nq,Uid:20aab211-f5be-45dd-9c5a-5c1c8d1c290b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"565b353a25b276faf3e9772218daa399421bd18f8094764e1652a462c320aaf8\"" Oct 9 01:00:47.352054 containerd[1468]: time="2024-10-09T01:00:47.351766801Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 01:00:47.567748 kubelet[2617]: E1009 01:00:47.566939 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:00:47.591576 kubelet[2617]: I1009 01:00:47.591505 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cbm5d" podStartSLOduration=1.591483091 podStartE2EDuration="1.591483091s" podCreationTimestamp="2024-10-09 01:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:00:47.590158744 +0000 UTC m=+15.323229129" watchObservedRunningTime="2024-10-09 01:00:47.591483091 +0000 UTC m=+15.324553464" Oct 9 01:01:07.480317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1244387136.mount: Deactivated successfully. 
Oct 9 01:01:09.409601 containerd[1468]: time="2024-10-09T01:01:09.409516463Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:09.411119 containerd[1468]: time="2024-10-09T01:01:09.410831049Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136521" Oct 9 01:01:09.412491 containerd[1468]: time="2024-10-09T01:01:09.412013302Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:09.415269 containerd[1468]: time="2024-10-09T01:01:09.415211193Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:09.416728 containerd[1468]: time="2024-10-09T01:01:09.416688983Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 22.064744408s" Oct 9 01:01:09.416728 containerd[1468]: time="2024-10-09T01:01:09.416727757Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 01:01:09.422768 containerd[1468]: time="2024-10-09T01:01:09.422724325Z" level=info msg="CreateContainer within sandbox \"565b353a25b276faf3e9772218daa399421bd18f8094764e1652a462c320aaf8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 01:01:09.442040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079426856.mount: Deactivated successfully. 
Oct 9 01:01:09.443506 containerd[1468]: time="2024-10-09T01:01:09.443436882Z" level=info msg="CreateContainer within sandbox \"565b353a25b276faf3e9772218daa399421bd18f8094764e1652a462c320aaf8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"202f504d04da39df3d770bed707f17a76b630d84387431a66a7119aef398360e\"" Oct 9 01:01:09.445395 containerd[1468]: time="2024-10-09T01:01:09.444720773Z" level=info msg="StartContainer for \"202f504d04da39df3d770bed707f17a76b630d84387431a66a7119aef398360e\"" Oct 9 01:01:09.495033 systemd[1]: Started cri-containerd-202f504d04da39df3d770bed707f17a76b630d84387431a66a7119aef398360e.scope - libcontainer container 202f504d04da39df3d770bed707f17a76b630d84387431a66a7119aef398360e. Oct 9 01:01:09.533713 containerd[1468]: time="2024-10-09T01:01:09.533663673Z" level=info msg="StartContainer for \"202f504d04da39df3d770bed707f17a76b630d84387431a66a7119aef398360e\" returns successfully" Oct 9 01:01:12.835647 kubelet[2617]: I1009 01:01:12.832854 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-j77nq" podStartSLOduration=4.764934591 podStartE2EDuration="26.832824728s" podCreationTimestamp="2024-10-09 01:00:46 +0000 UTC" firstStartedPulling="2024-10-09 01:00:47.351037788 +0000 UTC m=+15.084108156" lastFinishedPulling="2024-10-09 01:01:09.418927938 +0000 UTC m=+37.151998293" observedRunningTime="2024-10-09 01:01:09.647147714 +0000 UTC m=+37.380218089" watchObservedRunningTime="2024-10-09 01:01:12.832824728 +0000 UTC m=+40.565895095" Oct 9 01:01:12.835647 kubelet[2617]: I1009 01:01:12.833205 2617 topology_manager.go:215] "Topology Admit Handler" podUID="81620346-f6d2-49a7-8028-f8201cf286c7" podNamespace="calico-system" podName="calico-typha-7f4bf88659-x5fd5" Oct 9 01:01:12.849656 systemd[1]: Created slice kubepods-besteffort-pod81620346_f6d2_49a7_8028_f8201cf286c7.slice - libcontainer container 
kubepods-besteffort-pod81620346_f6d2_49a7_8028_f8201cf286c7.slice. Oct 9 01:01:12.955468 kubelet[2617]: I1009 01:01:12.955411 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/81620346-f6d2-49a7-8028-f8201cf286c7-typha-certs\") pod \"calico-typha-7f4bf88659-x5fd5\" (UID: \"81620346-f6d2-49a7-8028-f8201cf286c7\") " pod="calico-system/calico-typha-7f4bf88659-x5fd5" Oct 9 01:01:12.955645 kubelet[2617]: I1009 01:01:12.955536 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81620346-f6d2-49a7-8028-f8201cf286c7-tigera-ca-bundle\") pod \"calico-typha-7f4bf88659-x5fd5\" (UID: \"81620346-f6d2-49a7-8028-f8201cf286c7\") " pod="calico-system/calico-typha-7f4bf88659-x5fd5" Oct 9 01:01:12.955645 kubelet[2617]: I1009 01:01:12.955566 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99zds\" (UniqueName: \"kubernetes.io/projected/81620346-f6d2-49a7-8028-f8201cf286c7-kube-api-access-99zds\") pod \"calico-typha-7f4bf88659-x5fd5\" (UID: \"81620346-f6d2-49a7-8028-f8201cf286c7\") " pod="calico-system/calico-typha-7f4bf88659-x5fd5" Oct 9 01:01:13.028038 kubelet[2617]: I1009 01:01:13.027972 2617 topology_manager.go:215] "Topology Admit Handler" podUID="6ead6a85-fda2-4403-89e0-1a65e72cdf01" podNamespace="calico-system" podName="calico-node-prf8n" Oct 9 01:01:13.039180 systemd[1]: Created slice kubepods-besteffort-pod6ead6a85_fda2_4403_89e0_1a65e72cdf01.slice - libcontainer container kubepods-besteffort-pod6ead6a85_fda2_4403_89e0_1a65e72cdf01.slice. 
Oct 9 01:01:13.159834 kubelet[2617]: I1009 01:01:13.159701 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-net-dir\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.160271 kubelet[2617]: I1009 01:01:13.160059 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gppf\" (UniqueName: \"kubernetes.io/projected/6ead6a85-fda2-4403-89e0-1a65e72cdf01-kube-api-access-9gppf\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.160271 kubelet[2617]: I1009 01:01:13.160134 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-policysync\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.160271 kubelet[2617]: I1009 01:01:13.160160 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-var-run-calico\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.160271 kubelet[2617]: I1009 01:01:13.160184 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ead6a85-fda2-4403-89e0-1a65e72cdf01-tigera-ca-bundle\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.160271 kubelet[2617]: I1009 01:01:13.160216 2617 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-lib-modules\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.160507 kubelet[2617]: I1009 01:01:13.160230 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-var-lib-calico\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.160507 kubelet[2617]: I1009 01:01:13.160251 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-log-dir\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.160904 kubelet[2617]: I1009 01:01:13.160610 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-xtables-lock\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.160904 kubelet[2617]: I1009 01:01:13.160657 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-bin-dir\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.160904 kubelet[2617]: I1009 01:01:13.160681 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-flexvol-driver-host\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.160904 kubelet[2617]: I1009 01:01:13.160701 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6ead6a85-fda2-4403-89e0-1a65e72cdf01-node-certs\") pod \"calico-node-prf8n\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") " pod="calico-system/calico-node-prf8n" Oct 9 01:01:13.168051 kubelet[2617]: I1009 01:01:13.167066 2617 topology_manager.go:215] "Topology Admit Handler" podUID="3003107e-be83-475f-b7b0-944d115c5adb" podNamespace="calico-system" podName="csi-node-driver-2tt4q" Oct 9 01:01:13.171261 kubelet[2617]: E1009 01:01:13.170175 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:13.189412 kubelet[2617]: E1009 01:01:13.171666 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tt4q" podUID="3003107e-be83-475f-b7b0-944d115c5adb" Oct 9 01:01:13.193492 containerd[1468]: time="2024-10-09T01:01:13.193110646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f4bf88659-x5fd5,Uid:81620346-f6d2-49a7-8028-f8201cf286c7,Namespace:calico-system,Attempt:0,}" Oct 9 01:01:13.262907 kubelet[2617]: I1009 01:01:13.261872 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3003107e-be83-475f-b7b0-944d115c5adb-kubelet-dir\") pod 
\"csi-node-driver-2tt4q\" (UID: \"3003107e-be83-475f-b7b0-944d115c5adb\") " pod="calico-system/csi-node-driver-2tt4q" Oct 9 01:01:13.262907 kubelet[2617]: I1009 01:01:13.261933 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3003107e-be83-475f-b7b0-944d115c5adb-socket-dir\") pod \"csi-node-driver-2tt4q\" (UID: \"3003107e-be83-475f-b7b0-944d115c5adb\") " pod="calico-system/csi-node-driver-2tt4q" Oct 9 01:01:13.262907 kubelet[2617]: I1009 01:01:13.261948 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3003107e-be83-475f-b7b0-944d115c5adb-registration-dir\") pod \"csi-node-driver-2tt4q\" (UID: \"3003107e-be83-475f-b7b0-944d115c5adb\") " pod="calico-system/csi-node-driver-2tt4q" Oct 9 01:01:13.262907 kubelet[2617]: I1009 01:01:13.262018 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3003107e-be83-475f-b7b0-944d115c5adb-varrun\") pod \"csi-node-driver-2tt4q\" (UID: \"3003107e-be83-475f-b7b0-944d115c5adb\") " pod="calico-system/csi-node-driver-2tt4q" Oct 9 01:01:13.262907 kubelet[2617]: I1009 01:01:13.262034 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bc7r\" (UniqueName: \"kubernetes.io/projected/3003107e-be83-475f-b7b0-944d115c5adb-kube-api-access-4bc7r\") pod \"csi-node-driver-2tt4q\" (UID: \"3003107e-be83-475f-b7b0-944d115c5adb\") " pod="calico-system/csi-node-driver-2tt4q" Oct 9 01:01:13.285779 kubelet[2617]: E1009 01:01:13.285732 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:13.285779 kubelet[2617]: W1009 01:01:13.285773 2617 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:13.286115 kubelet[2617]: E1009 01:01:13.285829 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:01:13.300863 containerd[1468]: time="2024-10-09T01:01:13.300031958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:13.300863 containerd[1468]: time="2024-10-09T01:01:13.300152491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:13.300863 containerd[1468]: time="2024-10-09T01:01:13.300189762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:13.302991 containerd[1468]: time="2024-10-09T01:01:13.302881205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:13.322106 kubelet[2617]: E1009 01:01:13.322048 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:01:13.322428 kubelet[2617]: W1009 01:01:13.322309 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:01:13.322428 kubelet[2617]: E1009 01:01:13.322358 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 9 01:01:13.345130 systemd[1]: Started cri-containerd-e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6.scope - libcontainer container e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6.
Oct 9 01:01:13.355896 kubelet[2617]: E1009 01:01:13.355853 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:01:13.359666 containerd[1468]: time="2024-10-09T01:01:13.358466859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-prf8n,Uid:6ead6a85-fda2-4403-89e0-1a65e72cdf01,Namespace:calico-system,Attempt:0,}"
Oct 9 01:01:13.366666 kubelet[2617]: E1009 01:01:13.366370 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.366666 kubelet[2617]: W1009 01:01:13.366407 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.366666 kubelet[2617]: E1009 01:01:13.366444 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.367773 kubelet[2617]: E1009 01:01:13.367436 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.367773 kubelet[2617]: W1009 01:01:13.367461 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.367773 kubelet[2617]: E1009 01:01:13.367487 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.368986 kubelet[2617]: E1009 01:01:13.368938 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.368986 kubelet[2617]: W1009 01:01:13.368975 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.369127 kubelet[2617]: E1009 01:01:13.369019 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.370367 kubelet[2617]: E1009 01:01:13.370337 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.370467 kubelet[2617]: W1009 01:01:13.370372 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.370508 kubelet[2617]: E1009 01:01:13.370479 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.372802 kubelet[2617]: E1009 01:01:13.372768 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.372802 kubelet[2617]: W1009 01:01:13.372799 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.374254 kubelet[2617]: E1009 01:01:13.372826 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.376294 kubelet[2617]: E1009 01:01:13.375930 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.376294 kubelet[2617]: W1009 01:01:13.375959 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.376294 kubelet[2617]: E1009 01:01:13.375984 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.377684 kubelet[2617]: E1009 01:01:13.377653 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.377684 kubelet[2617]: W1009 01:01:13.377680 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.377808 kubelet[2617]: E1009 01:01:13.377704 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.379800 kubelet[2617]: E1009 01:01:13.379774 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.379891 kubelet[2617]: W1009 01:01:13.379813 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.379891 kubelet[2617]: E1009 01:01:13.379838 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.381732 kubelet[2617]: E1009 01:01:13.381579 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.381732 kubelet[2617]: W1009 01:01:13.381607 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.381732 kubelet[2617]: E1009 01:01:13.381652 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.388263 kubelet[2617]: E1009 01:01:13.386557 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.388263 kubelet[2617]: W1009 01:01:13.386588 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.388263 kubelet[2617]: E1009 01:01:13.386614 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.388263 kubelet[2617]: E1009 01:01:13.387902 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.388263 kubelet[2617]: W1009 01:01:13.387929 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.388263 kubelet[2617]: E1009 01:01:13.387955 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.391409 kubelet[2617]: E1009 01:01:13.388808 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.391409 kubelet[2617]: W1009 01:01:13.388827 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.391409 kubelet[2617]: E1009 01:01:13.388848 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.391409 kubelet[2617]: E1009 01:01:13.391399 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.391409 kubelet[2617]: W1009 01:01:13.391421 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.391762 kubelet[2617]: E1009 01:01:13.391446 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.394892 kubelet[2617]: E1009 01:01:13.394500 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.394892 kubelet[2617]: W1009 01:01:13.394532 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.394892 kubelet[2617]: E1009 01:01:13.394667 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.396565 kubelet[2617]: E1009 01:01:13.395317 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.396565 kubelet[2617]: W1009 01:01:13.395334 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.396565 kubelet[2617]: E1009 01:01:13.395348 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.397238 kubelet[2617]: E1009 01:01:13.396740 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.397238 kubelet[2617]: W1009 01:01:13.396758 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.397238 kubelet[2617]: E1009 01:01:13.396774 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.402501 kubelet[2617]: E1009 01:01:13.402461 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.402501 kubelet[2617]: W1009 01:01:13.402494 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.402892 kubelet[2617]: E1009 01:01:13.402519 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.403679 kubelet[2617]: E1009 01:01:13.403647 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.406831 kubelet[2617]: W1009 01:01:13.403683 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.406831 kubelet[2617]: E1009 01:01:13.403711 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.412593 kubelet[2617]: E1009 01:01:13.409654 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.412593 kubelet[2617]: W1009 01:01:13.409675 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.412593 kubelet[2617]: E1009 01:01:13.409860 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.412593 kubelet[2617]: W1009 01:01:13.409868 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.412593 kubelet[2617]: E1009 01:01:13.409886 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.412593 kubelet[2617]: E1009 01:01:13.409902 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.412593 kubelet[2617]: E1009 01:01:13.410040 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.412593 kubelet[2617]: W1009 01:01:13.410048 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.412593 kubelet[2617]: E1009 01:01:13.410068 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.412593 kubelet[2617]: E1009 01:01:13.410224 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.418437 containerd[1468]: time="2024-10-09T01:01:13.407487573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:01:13.418437 containerd[1468]: time="2024-10-09T01:01:13.407553340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:01:13.418437 containerd[1468]: time="2024-10-09T01:01:13.407565943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:13.418437 containerd[1468]: time="2024-10-09T01:01:13.407690124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:13.418587 kubelet[2617]: W1009 01:01:13.410240 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.418587 kubelet[2617]: E1009 01:01:13.410258 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.418587 kubelet[2617]: E1009 01:01:13.410485 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.418587 kubelet[2617]: W1009 01:01:13.410495 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.418587 kubelet[2617]: E1009 01:01:13.410537 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.418587 kubelet[2617]: E1009 01:01:13.410810 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.418587 kubelet[2617]: W1009 01:01:13.410822 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.418587 kubelet[2617]: E1009 01:01:13.410833 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.427447 kubelet[2617]: E1009 01:01:13.427415 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.427447 kubelet[2617]: W1009 01:01:13.427439 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.427767 kubelet[2617]: E1009 01:01:13.427466 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.446872 systemd[1]: Started cri-containerd-1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5.scope - libcontainer container 1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5.
Oct 9 01:01:13.451091 kubelet[2617]: E1009 01:01:13.450849 2617 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 9 01:01:13.451091 kubelet[2617]: W1009 01:01:13.450872 2617 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 9 01:01:13.451091 kubelet[2617]: E1009 01:01:13.450897 2617 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 9 01:01:13.514805 containerd[1468]: time="2024-10-09T01:01:13.514612234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-prf8n,Uid:6ead6a85-fda2-4403-89e0-1a65e72cdf01,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\""
Oct 9 01:01:13.520043 kubelet[2617]: E1009 01:01:13.519538 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:01:13.525119 containerd[1468]: time="2024-10-09T01:01:13.525057578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f4bf88659-x5fd5,Uid:81620346-f6d2-49a7-8028-f8201cf286c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\""
Oct 9 01:01:13.527079 kubelet[2617]: E1009 01:01:13.526054 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:01:13.534739 containerd[1468]: time="2024-10-09T01:01:13.534690658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Oct 9 01:01:14.459668 kubelet[2617]: E1009 01:01:14.459363 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tt4q" podUID="3003107e-be83-475f-b7b0-944d115c5adb"
Oct 9 01:01:15.167292 containerd[1468]: time="2024-10-09T01:01:15.166581099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:01:15.168664 containerd[1468]: time="2024-10-09T01:01:15.168595378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007"
Oct 9 01:01:15.169853 containerd[1468]: time="2024-10-09T01:01:15.169804839Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:01:15.173441 containerd[1468]: time="2024-10-09T01:01:15.173030449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:01:15.174392 containerd[1468]: time="2024-10-09T01:01:15.174360316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.639611948s"
Oct 9 01:01:15.174517 containerd[1468]: time="2024-10-09T01:01:15.174502434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\""
Oct 9 01:01:15.178744 containerd[1468]: time="2024-10-09T01:01:15.176345875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Oct 9 01:01:15.178744 containerd[1468]: time="2024-10-09T01:01:15.178523774Z" level=info msg="CreateContainer within sandbox \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 9 01:01:15.199376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2993586564.mount: Deactivated successfully.
Oct 9 01:01:15.203558 containerd[1468]: time="2024-10-09T01:01:15.203519139Z" level=info msg="CreateContainer within sandbox \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d\""
Oct 9 01:01:15.204707 containerd[1468]: time="2024-10-09T01:01:15.204679380Z" level=info msg="StartContainer for \"bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d\""
Oct 9 01:01:15.250870 systemd[1]: Started cri-containerd-bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d.scope - libcontainer container bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d.
Oct 9 01:01:15.312738 containerd[1468]: time="2024-10-09T01:01:15.312574261Z" level=info msg="StartContainer for \"bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d\" returns successfully"
Oct 9 01:01:15.334892 systemd[1]: cri-containerd-bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d.scope: Deactivated successfully.
Oct 9 01:01:15.381385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d-rootfs.mount: Deactivated successfully.
Oct 9 01:01:15.400892 containerd[1468]: time="2024-10-09T01:01:15.400798540Z" level=info msg="shim disconnected" id=bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d namespace=k8s.io
Oct 9 01:01:15.403701 containerd[1468]: time="2024-10-09T01:01:15.401256963Z" level=warning msg="cleaning up after shim disconnected" id=bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d namespace=k8s.io
Oct 9 01:01:15.403701 containerd[1468]: time="2024-10-09T01:01:15.401357798Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:01:15.654675 containerd[1468]: time="2024-10-09T01:01:15.653490450Z" level=info msg="StopPodSandbox for \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\""
Oct 9 01:01:15.654675 containerd[1468]: time="2024-10-09T01:01:15.653532527Z" level=info msg="Container to stop \"bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 9 01:01:15.669126 systemd[1]: cri-containerd-1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5.scope: Deactivated successfully.
Oct 9 01:01:15.706549 containerd[1468]: time="2024-10-09T01:01:15.706479420Z" level=info msg="shim disconnected" id=1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5 namespace=k8s.io
Oct 9 01:01:15.706861 containerd[1468]: time="2024-10-09T01:01:15.706535402Z" level=warning msg="cleaning up after shim disconnected" id=1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5 namespace=k8s.io
Oct 9 01:01:15.706861 containerd[1468]: time="2024-10-09T01:01:15.706607769Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:01:15.734416 containerd[1468]: time="2024-10-09T01:01:15.733049079Z" level=info msg="TearDown network for sandbox \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\" successfully"
Oct 9 01:01:15.734416 containerd[1468]: time="2024-10-09T01:01:15.733139451Z" level=info msg="StopPodSandbox for \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\" returns successfully"
Oct 9 01:01:15.816726 kubelet[2617]: I1009 01:01:15.815999 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-policysync\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.816726 kubelet[2617]: I1009 01:01:15.816141 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-lib-modules\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.816726 kubelet[2617]: I1009 01:01:15.816197 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ead6a85-fda2-4403-89e0-1a65e72cdf01-tigera-ca-bundle\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.816726 kubelet[2617]: I1009 01:01:15.816224 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-flexvol-driver-host\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.816726 kubelet[2617]: I1009 01:01:15.816252 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gppf\" (UniqueName: \"kubernetes.io/projected/6ead6a85-fda2-4403-89e0-1a65e72cdf01-kube-api-access-9gppf\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.816726 kubelet[2617]: I1009 01:01:15.816267 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-var-run-calico\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.818833 kubelet[2617]: I1009 01:01:15.816281 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-log-dir\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.818833 kubelet[2617]: I1009 01:01:15.816297 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-bin-dir\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.818833 kubelet[2617]: I1009 01:01:15.816322 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6ead6a85-fda2-4403-89e0-1a65e72cdf01-node-certs\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.818833 kubelet[2617]: I1009 01:01:15.816335 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-net-dir\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.818833 kubelet[2617]: I1009 01:01:15.816350 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-var-lib-calico\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.818833 kubelet[2617]: I1009 01:01:15.816372 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-xtables-lock\") pod \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\" (UID: \"6ead6a85-fda2-4403-89e0-1a65e72cdf01\") "
Oct 9 01:01:15.819040 kubelet[2617]: I1009 01:01:15.816475 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 01:01:15.819040 kubelet[2617]: I1009 01:01:15.816525 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-policysync" (OuterVolumeSpecName: "policysync") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 01:01:15.819040 kubelet[2617]: I1009 01:01:15.816540 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 01:01:15.819040 kubelet[2617]: I1009 01:01:15.816848 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 01:01:15.819040 kubelet[2617]: I1009 01:01:15.816998 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 01:01:15.819254 kubelet[2617]: I1009 01:01:15.817002 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ead6a85-fda2-4403-89e0-1a65e72cdf01-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 9 01:01:15.819254 kubelet[2617]: I1009 01:01:15.817050 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 01:01:15.822998 kubelet[2617]: I1009 01:01:15.822626 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ead6a85-fda2-4403-89e0-1a65e72cdf01-node-certs" (OuterVolumeSpecName: "node-certs") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 9 01:01:15.822998 kubelet[2617]: I1009 01:01:15.822743 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 01:01:15.822998 kubelet[2617]: I1009 01:01:15.822765 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 01:01:15.822998 kubelet[2617]: I1009 01:01:15.822783 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 9 01:01:15.822998 kubelet[2617]: I1009 01:01:15.822935 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ead6a85-fda2-4403-89e0-1a65e72cdf01-kube-api-access-9gppf" (OuterVolumeSpecName: "kube-api-access-9gppf") pod "6ead6a85-fda2-4403-89e0-1a65e72cdf01" (UID: "6ead6a85-fda2-4403-89e0-1a65e72cdf01"). InnerVolumeSpecName "kube-api-access-9gppf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 9 01:01:15.917763 kubelet[2617]: I1009 01:01:15.917505 2617 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-var-lib-calico\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:15.917763 kubelet[2617]: I1009 01:01:15.917583 2617 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-xtables-lock\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:15.917763 kubelet[2617]: I1009 01:01:15.917598 2617 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-policysync\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:15.917763 kubelet[2617]: I1009 01:01:15.917610 2617 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-lib-modules\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:15.917763 kubelet[2617]: I1009 01:01:15.917625 2617 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ead6a85-fda2-4403-89e0-1a65e72cdf01-tigera-ca-bundle\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:15.917763 kubelet[2617]: I1009 01:01:15.917656 2617 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-flexvol-driver-host\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:15.917763 kubelet[2617]: I1009 01:01:15.917668 2617 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9gppf\" (UniqueName: \"kubernetes.io/projected/6ead6a85-fda2-4403-89e0-1a65e72cdf01-kube-api-access-9gppf\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:15.917763 kubelet[2617]: I1009 01:01:15.917683 2617 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-net-dir\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:15.918252 kubelet[2617]: I1009 01:01:15.917694 2617 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-var-run-calico\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:15.918252 kubelet[2617]: I1009 01:01:15.917706 2617 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-log-dir\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:15.918252 kubelet[2617]: I1009 01:01:15.917715 2617 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\"
(UniqueName: \"kubernetes.io/host-path/6ead6a85-fda2-4403-89e0-1a65e72cdf01-cni-bin-dir\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\"" Oct 9 01:01:15.918252 kubelet[2617]: I1009 01:01:15.917729 2617 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6ead6a85-fda2-4403-89e0-1a65e72cdf01-node-certs\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\"" Oct 9 01:01:16.198145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5-rootfs.mount: Deactivated successfully. Oct 9 01:01:16.198301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5-shm.mount: Deactivated successfully. Oct 9 01:01:16.198392 systemd[1]: var-lib-kubelet-pods-6ead6a85\x2dfda2\x2d4403\x2d89e0\x2d1a65e72cdf01-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9gppf.mount: Deactivated successfully. Oct 9 01:01:16.198478 systemd[1]: var-lib-kubelet-pods-6ead6a85\x2dfda2\x2d4403\x2d89e0\x2d1a65e72cdf01-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Oct 9 01:01:16.460256 kubelet[2617]: E1009 01:01:16.459247 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tt4q" podUID="3003107e-be83-475f-b7b0-944d115c5adb" Oct 9 01:01:16.472886 systemd[1]: Removed slice kubepods-besteffort-pod6ead6a85_fda2_4403_89e0_1a65e72cdf01.slice - libcontainer container kubepods-besteffort-pod6ead6a85_fda2_4403_89e0_1a65e72cdf01.slice. 
Oct 9 01:01:16.665220 kubelet[2617]: I1009 01:01:16.664390 2617 scope.go:117] "RemoveContainer" containerID="bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d"
Oct 9 01:01:16.667582 containerd[1468]: time="2024-10-09T01:01:16.667547267Z" level=info msg="RemoveContainer for \"bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d\""
Oct 9 01:01:16.676715 containerd[1468]: time="2024-10-09T01:01:16.676518874Z" level=info msg="RemoveContainer for \"bb204e64330ad1a6b153ea6eb1b7b26f66dfffb4f8b9a30e6f98cc133650ef1d\" returns successfully"
Oct 9 01:01:16.839069 kubelet[2617]: I1009 01:01:16.836108 2617 topology_manager.go:215] "Topology Admit Handler" podUID="a4d69039-eeb4-49c5-aaf6-a0874a01ca8f" podNamespace="calico-system" podName="calico-node-fbbm6"
Oct 9 01:01:16.839069 kubelet[2617]: E1009 01:01:16.836182 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ead6a85-fda2-4403-89e0-1a65e72cdf01" containerName="flexvol-driver"
Oct 9 01:01:16.839069 kubelet[2617]: I1009 01:01:16.836210 2617 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ead6a85-fda2-4403-89e0-1a65e72cdf01" containerName="flexvol-driver"
Oct 9 01:01:16.850512 systemd[1]: Created slice kubepods-besteffort-poda4d69039_eeb4_49c5_aaf6_a0874a01ca8f.slice - libcontainer container kubepods-besteffort-poda4d69039_eeb4_49c5_aaf6_a0874a01ca8f.slice.
Oct 9 01:01:16.927119 kubelet[2617]: I1009 01:01:16.927073 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-lib-modules\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:16.927119 kubelet[2617]: I1009 01:01:16.927114 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-tigera-ca-bundle\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:16.927119 kubelet[2617]: I1009 01:01:16.927131 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-node-certs\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:16.927333 kubelet[2617]: I1009 01:01:16.927149 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-cni-net-dir\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:16.927333 kubelet[2617]: I1009 01:01:16.927164 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-cni-log-dir\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:16.927333 kubelet[2617]: I1009 01:01:16.927179 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-flexvol-driver-host\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:16.927333 kubelet[2617]: I1009 01:01:16.927197 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-xtables-lock\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:16.927333 kubelet[2617]: I1009 01:01:16.927220 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-var-lib-calico\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:16.927467 kubelet[2617]: I1009 01:01:16.927242 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv9zw\" (UniqueName: \"kubernetes.io/projected/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-kube-api-access-tv9zw\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:16.927467 kubelet[2617]: I1009 01:01:16.927264 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-policysync\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:16.927467 kubelet[2617]: I1009 01:01:16.927285 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-var-run-calico\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:16.927467 kubelet[2617]: I1009 01:01:16.927302 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a4d69039-eeb4-49c5-aaf6-a0874a01ca8f-cni-bin-dir\") pod \"calico-node-fbbm6\" (UID: \"a4d69039-eeb4-49c5-aaf6-a0874a01ca8f\") " pod="calico-system/calico-node-fbbm6"
Oct 9 01:01:17.156781 kubelet[2617]: E1009 01:01:17.155474 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:01:17.159669 containerd[1468]: time="2024-10-09T01:01:17.158113011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fbbm6,Uid:a4d69039-eeb4-49c5-aaf6-a0874a01ca8f,Namespace:calico-system,Attempt:0,}"
Oct 9 01:01:17.221764 containerd[1468]: time="2024-10-09T01:01:17.220041639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:01:17.221764 containerd[1468]: time="2024-10-09T01:01:17.220524054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:01:17.221764 containerd[1468]: time="2024-10-09T01:01:17.220540447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:17.222525 containerd[1468]: time="2024-10-09T01:01:17.221912489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:17.264438 systemd[1]: run-containerd-runc-k8s.io-35c2e5bce31ec4aa9d8067c95b93ce3a6793f809aed32ef4a91c567577c816fb-runc.H8rOMl.mount: Deactivated successfully.
Oct 9 01:01:17.282283 systemd[1]: Started cri-containerd-35c2e5bce31ec4aa9d8067c95b93ce3a6793f809aed32ef4a91c567577c816fb.scope - libcontainer container 35c2e5bce31ec4aa9d8067c95b93ce3a6793f809aed32ef4a91c567577c816fb.
Oct 9 01:01:17.357372 containerd[1468]: time="2024-10-09T01:01:17.357139377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fbbm6,Uid:a4d69039-eeb4-49c5-aaf6-a0874a01ca8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"35c2e5bce31ec4aa9d8067c95b93ce3a6793f809aed32ef4a91c567577c816fb\""
Oct 9 01:01:17.360689 kubelet[2617]: E1009 01:01:17.360443 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:01:17.367406 containerd[1468]: time="2024-10-09T01:01:17.366969209Z" level=info msg="CreateContainer within sandbox \"35c2e5bce31ec4aa9d8067c95b93ce3a6793f809aed32ef4a91c567577c816fb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 9 01:01:17.411342 containerd[1468]: time="2024-10-09T01:01:17.410687137Z" level=info msg="CreateContainer within sandbox \"35c2e5bce31ec4aa9d8067c95b93ce3a6793f809aed32ef4a91c567577c816fb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dfa739d250ecf4e5e891bf7cdee491db865510b50a899524fd8a385db5e1eaaa\""
Oct 9 01:01:17.414942 containerd[1468]: time="2024-10-09T01:01:17.412875563Z" level=info msg="StartContainer for \"dfa739d250ecf4e5e891bf7cdee491db865510b50a899524fd8a385db5e1eaaa\""
Oct 9 01:01:17.517925 systemd[1]: Started cri-containerd-dfa739d250ecf4e5e891bf7cdee491db865510b50a899524fd8a385db5e1eaaa.scope - libcontainer container dfa739d250ecf4e5e891bf7cdee491db865510b50a899524fd8a385db5e1eaaa.
Oct 9 01:01:17.633998 containerd[1468]: time="2024-10-09T01:01:17.633948013Z" level=info msg="StartContainer for \"dfa739d250ecf4e5e891bf7cdee491db865510b50a899524fd8a385db5e1eaaa\" returns successfully"
Oct 9 01:01:17.672363 kubelet[2617]: E1009 01:01:17.672111 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:01:17.675452 containerd[1468]: time="2024-10-09T01:01:17.674581663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:01:17.677847 containerd[1468]: time="2024-10-09T01:01:17.677773687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335"
Oct 9 01:01:17.680224 containerd[1468]: time="2024-10-09T01:01:17.678773378Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:01:17.681702 containerd[1468]: time="2024-10-09T01:01:17.681664328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:01:17.683651 containerd[1468]: time="2024-10-09T01:01:17.683601793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.507223885s"
Oct 9 01:01:17.683785 containerd[1468]: time="2024-10-09T01:01:17.683769907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\""
Oct 9 01:01:17.693280 containerd[1468]: time="2024-10-09T01:01:17.693238626Z" level=info msg="CreateContainer within sandbox \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 9 01:01:17.711575 containerd[1468]: time="2024-10-09T01:01:17.711521435Z" level=info msg="CreateContainer within sandbox \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\""
Oct 9 01:01:17.713116 containerd[1468]: time="2024-10-09T01:01:17.713071472Z" level=info msg="StartContainer for \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\""
Oct 9 01:01:17.745582 systemd[1]: cri-containerd-dfa739d250ecf4e5e891bf7cdee491db865510b50a899524fd8a385db5e1eaaa.scope: Deactivated successfully.
Oct 9 01:01:17.760000 systemd[1]: Started cri-containerd-52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590.scope - libcontainer container 52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590.
Oct 9 01:01:17.858433 containerd[1468]: time="2024-10-09T01:01:17.858140538Z" level=info msg="shim disconnected" id=dfa739d250ecf4e5e891bf7cdee491db865510b50a899524fd8a385db5e1eaaa namespace=k8s.io
Oct 9 01:01:17.858433 containerd[1468]: time="2024-10-09T01:01:17.858202952Z" level=warning msg="cleaning up after shim disconnected" id=dfa739d250ecf4e5e891bf7cdee491db865510b50a899524fd8a385db5e1eaaa namespace=k8s.io
Oct 9 01:01:17.858433 containerd[1468]: time="2024-10-09T01:01:17.858212377Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:01:17.901161 containerd[1468]: time="2024-10-09T01:01:17.899979914Z" level=info msg="StartContainer for \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\" returns successfully"
Oct 9 01:01:18.235311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfa739d250ecf4e5e891bf7cdee491db865510b50a899524fd8a385db5e1eaaa-rootfs.mount: Deactivated successfully.
Oct 9 01:01:18.458607 kubelet[2617]: E1009 01:01:18.458555 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tt4q" podUID="3003107e-be83-475f-b7b0-944d115c5adb"
Oct 9 01:01:18.461526 kubelet[2617]: I1009 01:01:18.461487 2617 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ead6a85-fda2-4403-89e0-1a65e72cdf01" path="/var/lib/kubelet/pods/6ead6a85-fda2-4403-89e0-1a65e72cdf01/volumes"
Oct 9 01:01:18.676872 kubelet[2617]: E1009 01:01:18.676800 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:01:18.681539 containerd[1468]: time="2024-10-09T01:01:18.681283093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Oct 9 01:01:18.686457 containerd[1468]: time="2024-10-09T01:01:18.686405057Z" level=info msg="StopContainer for \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\" with timeout 300 (s)"
Oct 9 01:01:18.689144 containerd[1468]: time="2024-10-09T01:01:18.687529230Z" level=info msg="Stop container \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\" with signal terminated"
Oct 9 01:01:18.731735 systemd[1]: cri-containerd-52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590.scope: Deactivated successfully.
Oct 9 01:01:18.743112 kubelet[2617]: I1009 01:01:18.742399 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f4bf88659-x5fd5" podStartSLOduration=2.591101768 podStartE2EDuration="6.742374991s" podCreationTimestamp="2024-10-09 01:01:12 +0000 UTC" firstStartedPulling="2024-10-09 01:01:13.533552402 +0000 UTC m=+41.266622757" lastFinishedPulling="2024-10-09 01:01:17.684825619 +0000 UTC m=+45.417895980" observedRunningTime="2024-10-09 01:01:18.735418205 +0000 UTC m=+46.468488616" watchObservedRunningTime="2024-10-09 01:01:18.742374991 +0000 UTC m=+46.475445403"
Oct 9 01:01:18.790704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590-rootfs.mount: Deactivated successfully.
Oct 9 01:01:18.799052 containerd[1468]: time="2024-10-09T01:01:18.798709009Z" level=info msg="shim disconnected" id=52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590 namespace=k8s.io
Oct 9 01:01:18.799052 containerd[1468]: time="2024-10-09T01:01:18.798792268Z" level=warning msg="cleaning up after shim disconnected" id=52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590 namespace=k8s.io
Oct 9 01:01:18.799052 containerd[1468]: time="2024-10-09T01:01:18.798806267Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:01:18.834036 containerd[1468]: time="2024-10-09T01:01:18.833973993Z" level=info msg="StopContainer for \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\" returns successfully"
Oct 9 01:01:18.835183 containerd[1468]: time="2024-10-09T01:01:18.834908839Z" level=info msg="StopPodSandbox for \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\""
Oct 9 01:01:18.835183 containerd[1468]: time="2024-10-09T01:01:18.834978141Z" level=info msg="Container to stop \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 9 01:01:18.840281 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6-shm.mount: Deactivated successfully.
Oct 9 01:01:18.854723 systemd[1]: cri-containerd-e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6.scope: Deactivated successfully.
Oct 9 01:01:18.912290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6-rootfs.mount: Deactivated successfully.
Oct 9 01:01:18.914712 containerd[1468]: time="2024-10-09T01:01:18.912831286Z" level=info msg="shim disconnected" id=e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6 namespace=k8s.io
Oct 9 01:01:18.916221 containerd[1468]: time="2024-10-09T01:01:18.915065619Z" level=warning msg="cleaning up after shim disconnected" id=e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6 namespace=k8s.io
Oct 9 01:01:18.916221 containerd[1468]: time="2024-10-09T01:01:18.915106897Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 9 01:01:18.959498 containerd[1468]: time="2024-10-09T01:01:18.958955451Z" level=info msg="TearDown network for sandbox \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\" successfully"
Oct 9 01:01:18.959498 containerd[1468]: time="2024-10-09T01:01:18.959010791Z" level=info msg="StopPodSandbox for \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\" returns successfully"
Oct 9 01:01:19.014710 kubelet[2617]: I1009 01:01:19.014654 2617 topology_manager.go:215] "Topology Admit Handler" podUID="b50ddedc-efe9-4f28-acdb-74db3dd536eb" podNamespace="calico-system" podName="calico-typha-7d7788cd8c-fnxfh"
Oct 9 01:01:19.014710 kubelet[2617]: E1009 01:01:19.014735 2617 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81620346-f6d2-49a7-8028-f8201cf286c7" containerName="calico-typha"
Oct 9 01:01:19.015126 kubelet[2617]: I1009 01:01:19.014777 2617 memory_manager.go:354] "RemoveStaleState removing state" podUID="81620346-f6d2-49a7-8028-f8201cf286c7" containerName="calico-typha"
Oct 9 01:01:19.027205 systemd[1]: Created slice kubepods-besteffort-podb50ddedc_efe9_4f28_acdb_74db3dd536eb.slice - libcontainer container kubepods-besteffort-podb50ddedc_efe9_4f28_acdb_74db3dd536eb.slice.
Oct 9 01:01:19.048251 kubelet[2617]: I1009 01:01:19.047569 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zds\" (UniqueName: \"kubernetes.io/projected/81620346-f6d2-49a7-8028-f8201cf286c7-kube-api-access-99zds\") pod \"81620346-f6d2-49a7-8028-f8201cf286c7\" (UID: \"81620346-f6d2-49a7-8028-f8201cf286c7\") "
Oct 9 01:01:19.049001 kubelet[2617]: I1009 01:01:19.048604 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/81620346-f6d2-49a7-8028-f8201cf286c7-typha-certs\") pod \"81620346-f6d2-49a7-8028-f8201cf286c7\" (UID: \"81620346-f6d2-49a7-8028-f8201cf286c7\") "
Oct 9 01:01:19.049001 kubelet[2617]: I1009 01:01:19.048666 2617 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81620346-f6d2-49a7-8028-f8201cf286c7-tigera-ca-bundle\") pod \"81620346-f6d2-49a7-8028-f8201cf286c7\" (UID: \"81620346-f6d2-49a7-8028-f8201cf286c7\") "
Oct 9 01:01:19.056441 systemd[1]: var-lib-kubelet-pods-81620346\x2df6d2\x2d49a7\x2d8028\x2df8201cf286c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d99zds.mount: Deactivated successfully.
Oct 9 01:01:19.058753 kubelet[2617]: I1009 01:01:19.058305 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81620346-f6d2-49a7-8028-f8201cf286c7-kube-api-access-99zds" (OuterVolumeSpecName: "kube-api-access-99zds") pod "81620346-f6d2-49a7-8028-f8201cf286c7" (UID: "81620346-f6d2-49a7-8028-f8201cf286c7"). InnerVolumeSpecName "kube-api-access-99zds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 9 01:01:19.069722 kubelet[2617]: I1009 01:01:19.068367 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81620346-f6d2-49a7-8028-f8201cf286c7-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "81620346-f6d2-49a7-8028-f8201cf286c7" (UID: "81620346-f6d2-49a7-8028-f8201cf286c7"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 9 01:01:19.071433 kubelet[2617]: I1009 01:01:19.071369 2617 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81620346-f6d2-49a7-8028-f8201cf286c7-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "81620346-f6d2-49a7-8028-f8201cf286c7" (UID: "81620346-f6d2-49a7-8028-f8201cf286c7"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 9 01:01:19.150739 kubelet[2617]: I1009 01:01:19.149612 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b50ddedc-efe9-4f28-acdb-74db3dd536eb-tigera-ca-bundle\") pod \"calico-typha-7d7788cd8c-fnxfh\" (UID: \"b50ddedc-efe9-4f28-acdb-74db3dd536eb\") " pod="calico-system/calico-typha-7d7788cd8c-fnxfh"
Oct 9 01:01:19.151254 kubelet[2617]: I1009 01:01:19.151073 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjvp4\" (UniqueName: \"kubernetes.io/projected/b50ddedc-efe9-4f28-acdb-74db3dd536eb-kube-api-access-vjvp4\") pod \"calico-typha-7d7788cd8c-fnxfh\" (UID: \"b50ddedc-efe9-4f28-acdb-74db3dd536eb\") " pod="calico-system/calico-typha-7d7788cd8c-fnxfh"
Oct 9 01:01:19.151254 kubelet[2617]: I1009 01:01:19.151171 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b50ddedc-efe9-4f28-acdb-74db3dd536eb-typha-certs\") pod \"calico-typha-7d7788cd8c-fnxfh\" (UID: \"b50ddedc-efe9-4f28-acdb-74db3dd536eb\") " pod="calico-system/calico-typha-7d7788cd8c-fnxfh"
Oct 9 01:01:19.151568 kubelet[2617]: I1009 01:01:19.151499 2617 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-99zds\" (UniqueName: \"kubernetes.io/projected/81620346-f6d2-49a7-8028-f8201cf286c7-kube-api-access-99zds\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:19.151568 kubelet[2617]: I1009 01:01:19.151529 2617 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/81620346-f6d2-49a7-8028-f8201cf286c7-typha-certs\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:19.151568 kubelet[2617]: I1009 01:01:19.151544 2617 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81620346-f6d2-49a7-8028-f8201cf286c7-tigera-ca-bundle\") on node \"ci-4116.0.0-c-50f1e82448\" DevicePath \"\""
Oct 9 01:01:19.232480 systemd[1]: var-lib-kubelet-pods-81620346\x2df6d2\x2d49a7\x2d8028\x2df8201cf286c7-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Oct 9 01:01:19.232608 systemd[1]: var-lib-kubelet-pods-81620346\x2df6d2\x2d49a7\x2d8028\x2df8201cf286c7-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Oct 9 01:01:19.332806 kubelet[2617]: E1009 01:01:19.331429 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:01:19.332961 containerd[1468]: time="2024-10-09T01:01:19.332366431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d7788cd8c-fnxfh,Uid:b50ddedc-efe9-4f28-acdb-74db3dd536eb,Namespace:calico-system,Attempt:0,}"
Oct 9 01:01:19.381659 containerd[1468]: time="2024-10-09T01:01:19.380954008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:01:19.381659 containerd[1468]: time="2024-10-09T01:01:19.381027718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:01:19.381659 containerd[1468]: time="2024-10-09T01:01:19.381046274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:19.381659 containerd[1468]: time="2024-10-09T01:01:19.381144121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:01:19.413874 systemd[1]: Started cri-containerd-31aba403cf7318db5c462f0f093f9bd14a5c6efdd3d97a51c0153c664a5d0600.scope - libcontainer container 31aba403cf7318db5c462f0f093f9bd14a5c6efdd3d97a51c0153c664a5d0600.
Oct 9 01:01:19.475660 containerd[1468]: time="2024-10-09T01:01:19.475603743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d7788cd8c-fnxfh,Uid:b50ddedc-efe9-4f28-acdb-74db3dd536eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"31aba403cf7318db5c462f0f093f9bd14a5c6efdd3d97a51c0153c664a5d0600\""
Oct 9 01:01:19.476850 kubelet[2617]: E1009 01:01:19.476814 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Oct 9 01:01:19.488096 containerd[1468]: time="2024-10-09T01:01:19.487974244Z" level=info msg="CreateContainer within sandbox \"31aba403cf7318db5c462f0f093f9bd14a5c6efdd3d97a51c0153c664a5d0600\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 9 01:01:19.506731 containerd[1468]: time="2024-10-09T01:01:19.506681523Z" level=info msg="CreateContainer within sandbox \"31aba403cf7318db5c462f0f093f9bd14a5c6efdd3d97a51c0153c664a5d0600\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"86dc63bf11d434060b434c27dcf56f9e6750a2680f4ef999601670110fa1e2b4\""
Oct 9 01:01:19.508933 containerd[1468]: time="2024-10-09T01:01:19.508887660Z" level=info msg="StartContainer for \"86dc63bf11d434060b434c27dcf56f9e6750a2680f4ef999601670110fa1e2b4\""
Oct 9 01:01:19.566706 systemd[1]: Started cri-containerd-86dc63bf11d434060b434c27dcf56f9e6750a2680f4ef999601670110fa1e2b4.scope - libcontainer container 86dc63bf11d434060b434c27dcf56f9e6750a2680f4ef999601670110fa1e2b4.
Oct 9 01:01:19.627978 containerd[1468]: time="2024-10-09T01:01:19.627790168Z" level=info msg="StartContainer for \"86dc63bf11d434060b434c27dcf56f9e6750a2680f4ef999601670110fa1e2b4\" returns successfully" Oct 9 01:01:19.690958 kubelet[2617]: E1009 01:01:19.690926 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:19.692735 kubelet[2617]: I1009 01:01:19.692709 2617 scope.go:117] "RemoveContainer" containerID="52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590" Oct 9 01:01:19.695260 containerd[1468]: time="2024-10-09T01:01:19.694305022Z" level=info msg="RemoveContainer for \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\"" Oct 9 01:01:19.702405 containerd[1468]: time="2024-10-09T01:01:19.701840388Z" level=info msg="RemoveContainer for \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\" returns successfully" Oct 9 01:01:19.702555 kubelet[2617]: I1009 01:01:19.702505 2617 scope.go:117] "RemoveContainer" containerID="52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590" Oct 9 01:01:19.702970 containerd[1468]: time="2024-10-09T01:01:19.702929228Z" level=error msg="ContainerStatus for \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\": not found" Oct 9 01:01:19.703980 kubelet[2617]: E1009 01:01:19.703190 2617 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\": not found" containerID="52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590" Oct 9 01:01:19.703980 kubelet[2617]: I1009 01:01:19.703233 2617 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590"} err="failed to get container status \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\": rpc error: code = NotFound desc = an error occurred when try to find container \"52e041a247d3fc963ebf8830d0ca553142c52ecd18e0e472aed010d5182b3590\": not found" Oct 9 01:01:19.707778 systemd[1]: Removed slice kubepods-besteffort-pod81620346_f6d2_49a7_8028_f8201cf286c7.slice - libcontainer container kubepods-besteffort-pod81620346_f6d2_49a7_8028_f8201cf286c7.slice. Oct 9 01:01:19.732429 kubelet[2617]: I1009 01:01:19.732339 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d7788cd8c-fnxfh" podStartSLOduration=6.732316796 podStartE2EDuration="6.732316796s" podCreationTimestamp="2024-10-09 01:01:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:01:19.729547378 +0000 UTC m=+47.462617751" watchObservedRunningTime="2024-10-09 01:01:19.732316796 +0000 UTC m=+47.465387191" Oct 9 01:01:20.457919 kubelet[2617]: E1009 01:01:20.457843 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tt4q" podUID="3003107e-be83-475f-b7b0-944d115c5adb" Oct 9 01:01:20.464095 kubelet[2617]: I1009 01:01:20.463969 2617 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81620346-f6d2-49a7-8028-f8201cf286c7" path="/var/lib/kubelet/pods/81620346-f6d2-49a7-8028-f8201cf286c7/volumes" Oct 9 01:01:22.212506 containerd[1468]: time="2024-10-09T01:01:22.212444160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:22.214651 containerd[1468]: time="2024-10-09T01:01:22.213883512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 01:01:22.220461 containerd[1468]: time="2024-10-09T01:01:22.220313075Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:22.225566 containerd[1468]: time="2024-10-09T01:01:22.224800491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:22.226226 containerd[1468]: time="2024-10-09T01:01:22.226186105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 3.544852687s" Oct 9 01:01:22.226333 containerd[1468]: time="2024-10-09T01:01:22.226319415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 01:01:22.235573 containerd[1468]: time="2024-10-09T01:01:22.235429102Z" level=info msg="CreateContainer within sandbox \"35c2e5bce31ec4aa9d8067c95b93ce3a6793f809aed32ef4a91c567577c816fb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 01:01:22.269373 containerd[1468]: time="2024-10-09T01:01:22.269309377Z" level=info msg="CreateContainer within sandbox \"35c2e5bce31ec4aa9d8067c95b93ce3a6793f809aed32ef4a91c567577c816fb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"0c047e2a7fef2a0ac42610379c1dcd6d07ae4b5d098e1b76de4c836c68a7e98a\"" Oct 9 01:01:22.275057 containerd[1468]: time="2024-10-09T01:01:22.275003265Z" level=info msg="StartContainer for \"0c047e2a7fef2a0ac42610379c1dcd6d07ae4b5d098e1b76de4c836c68a7e98a\"" Oct 9 01:01:22.452840 systemd[1]: run-containerd-runc-k8s.io-0c047e2a7fef2a0ac42610379c1dcd6d07ae4b5d098e1b76de4c836c68a7e98a-runc.CLFnLa.mount: Deactivated successfully. Oct 9 01:01:22.458231 kubelet[2617]: E1009 01:01:22.458186 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tt4q" podUID="3003107e-be83-475f-b7b0-944d115c5adb" Oct 9 01:01:22.469928 systemd[1]: Started cri-containerd-0c047e2a7fef2a0ac42610379c1dcd6d07ae4b5d098e1b76de4c836c68a7e98a.scope - libcontainer container 0c047e2a7fef2a0ac42610379c1dcd6d07ae4b5d098e1b76de4c836c68a7e98a. Oct 9 01:01:22.562663 containerd[1468]: time="2024-10-09T01:01:22.562530058Z" level=info msg="StartContainer for \"0c047e2a7fef2a0ac42610379c1dcd6d07ae4b5d098e1b76de4c836c68a7e98a\" returns successfully" Oct 9 01:01:22.705939 kubelet[2617]: E1009 01:01:22.705899 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:23.389589 systemd[1]: cri-containerd-0c047e2a7fef2a0ac42610379c1dcd6d07ae4b5d098e1b76de4c836c68a7e98a.scope: Deactivated successfully. Oct 9 01:01:23.419254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c047e2a7fef2a0ac42610379c1dcd6d07ae4b5d098e1b76de4c836c68a7e98a-rootfs.mount: Deactivated successfully. 
Oct 9 01:01:23.429226 containerd[1468]: time="2024-10-09T01:01:23.429108175Z" level=info msg="shim disconnected" id=0c047e2a7fef2a0ac42610379c1dcd6d07ae4b5d098e1b76de4c836c68a7e98a namespace=k8s.io Oct 9 01:01:23.429226 containerd[1468]: time="2024-10-09T01:01:23.429182715Z" level=warning msg="cleaning up after shim disconnected" id=0c047e2a7fef2a0ac42610379c1dcd6d07ae4b5d098e1b76de4c836c68a7e98a namespace=k8s.io Oct 9 01:01:23.429226 containerd[1468]: time="2024-10-09T01:01:23.429191544Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:01:23.516603 kubelet[2617]: I1009 01:01:23.515386 2617 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 01:01:23.558159 kubelet[2617]: I1009 01:01:23.556707 2617 topology_manager.go:215] "Topology Admit Handler" podUID="2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qjz98" Oct 9 01:01:23.561088 kubelet[2617]: I1009 01:01:23.559600 2617 topology_manager.go:215] "Topology Admit Handler" podUID="51fb5937-e607-4b18-8b5f-0e10bcffa8ee" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sjvsh" Oct 9 01:01:23.565672 kubelet[2617]: I1009 01:01:23.565005 2617 topology_manager.go:215] "Topology Admit Handler" podUID="1a82f38b-b623-4467-ae82-e233b16bd73c" podNamespace="calico-system" podName="calico-kube-controllers-d45c48964-hxd7l" Oct 9 01:01:23.576177 systemd[1]: Created slice kubepods-burstable-pod2b1505c0_40f4_4aa5_b3f7_24809bd7ea0c.slice - libcontainer container kubepods-burstable-pod2b1505c0_40f4_4aa5_b3f7_24809bd7ea0c.slice. Oct 9 01:01:23.585926 systemd[1]: Created slice kubepods-burstable-pod51fb5937_e607_4b18_8b5f_0e10bcffa8ee.slice - libcontainer container kubepods-burstable-pod51fb5937_e607_4b18_8b5f_0e10bcffa8ee.slice. Oct 9 01:01:23.598841 systemd[1]: Created slice kubepods-besteffort-pod1a82f38b_b623_4467_ae82_e233b16bd73c.slice - libcontainer container kubepods-besteffort-pod1a82f38b_b623_4467_ae82_e233b16bd73c.slice. 
Oct 9 01:01:23.693597 kubelet[2617]: I1009 01:01:23.693492 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c-config-volume\") pod \"coredns-7db6d8ff4d-qjz98\" (UID: \"2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c\") " pod="kube-system/coredns-7db6d8ff4d-qjz98" Oct 9 01:01:23.693597 kubelet[2617]: I1009 01:01:23.693587 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb55r\" (UniqueName: \"kubernetes.io/projected/51fb5937-e607-4b18-8b5f-0e10bcffa8ee-kube-api-access-zb55r\") pod \"coredns-7db6d8ff4d-sjvsh\" (UID: \"51fb5937-e607-4b18-8b5f-0e10bcffa8ee\") " pod="kube-system/coredns-7db6d8ff4d-sjvsh" Oct 9 01:01:23.693597 kubelet[2617]: I1009 01:01:23.693609 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a82f38b-b623-4467-ae82-e233b16bd73c-tigera-ca-bundle\") pod \"calico-kube-controllers-d45c48964-hxd7l\" (UID: \"1a82f38b-b623-4467-ae82-e233b16bd73c\") " pod="calico-system/calico-kube-controllers-d45c48964-hxd7l" Oct 9 01:01:23.693886 kubelet[2617]: I1009 01:01:23.693706 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51fb5937-e607-4b18-8b5f-0e10bcffa8ee-config-volume\") pod \"coredns-7db6d8ff4d-sjvsh\" (UID: \"51fb5937-e607-4b18-8b5f-0e10bcffa8ee\") " pod="kube-system/coredns-7db6d8ff4d-sjvsh" Oct 9 01:01:23.693886 kubelet[2617]: I1009 01:01:23.693763 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpfjl\" (UniqueName: \"kubernetes.io/projected/1a82f38b-b623-4467-ae82-e233b16bd73c-kube-api-access-mpfjl\") pod \"calico-kube-controllers-d45c48964-hxd7l\" (UID: 
\"1a82f38b-b623-4467-ae82-e233b16bd73c\") " pod="calico-system/calico-kube-controllers-d45c48964-hxd7l" Oct 9 01:01:23.693886 kubelet[2617]: I1009 01:01:23.693789 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhg2j\" (UniqueName: \"kubernetes.io/projected/2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c-kube-api-access-vhg2j\") pod \"coredns-7db6d8ff4d-qjz98\" (UID: \"2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c\") " pod="kube-system/coredns-7db6d8ff4d-qjz98" Oct 9 01:01:23.710569 kubelet[2617]: E1009 01:01:23.710518 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:23.712844 containerd[1468]: time="2024-10-09T01:01:23.712808686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 01:01:23.884107 kubelet[2617]: E1009 01:01:23.883689 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:23.885088 containerd[1468]: time="2024-10-09T01:01:23.884968364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qjz98,Uid:2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c,Namespace:kube-system,Attempt:0,}" Oct 9 01:01:23.894354 kubelet[2617]: E1009 01:01:23.894310 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:23.896135 containerd[1468]: time="2024-10-09T01:01:23.896053765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sjvsh,Uid:51fb5937-e607-4b18-8b5f-0e10bcffa8ee,Namespace:kube-system,Attempt:0,}" Oct 9 01:01:23.906385 containerd[1468]: time="2024-10-09T01:01:23.905887734Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-d45c48964-hxd7l,Uid:1a82f38b-b623-4467-ae82-e233b16bd73c,Namespace:calico-system,Attempt:0,}" Oct 9 01:01:24.219197 containerd[1468]: time="2024-10-09T01:01:24.219106261Z" level=error msg="Failed to destroy network for sandbox \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.222516 containerd[1468]: time="2024-10-09T01:01:24.222296082Z" level=error msg="Failed to destroy network for sandbox \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.228630 containerd[1468]: time="2024-10-09T01:01:24.227434089Z" level=error msg="encountered an error cleaning up failed sandbox \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.228630 containerd[1468]: time="2024-10-09T01:01:24.227562910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d45c48964-hxd7l,Uid:1a82f38b-b623-4467-ae82-e233b16bd73c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.228630 containerd[1468]: 
time="2024-10-09T01:01:24.227779244Z" level=error msg="Failed to destroy network for sandbox \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.228630 containerd[1468]: time="2024-10-09T01:01:24.228125235Z" level=error msg="encountered an error cleaning up failed sandbox \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.228630 containerd[1468]: time="2024-10-09T01:01:24.228192911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sjvsh,Uid:51fb5937-e607-4b18-8b5f-0e10bcffa8ee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.229866 kubelet[2617]: E1009 01:01:24.229326 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.229866 kubelet[2617]: E1009 01:01:24.229436 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d45c48964-hxd7l" Oct 9 01:01:24.229866 kubelet[2617]: E1009 01:01:24.229489 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d45c48964-hxd7l" Oct 9 01:01:24.230685 kubelet[2617]: E1009 01:01:24.229569 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d45c48964-hxd7l_calico-system(1a82f38b-b623-4467-ae82-e233b16bd73c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d45c48964-hxd7l_calico-system(1a82f38b-b623-4467-ae82-e233b16bd73c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d45c48964-hxd7l" podUID="1a82f38b-b623-4467-ae82-e233b16bd73c" Oct 9 01:01:24.230897 containerd[1468]: time="2024-10-09T01:01:24.230387740Z" level=error msg="encountered an error cleaning up failed sandbox \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.230897 containerd[1468]: time="2024-10-09T01:01:24.230504509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qjz98,Uid:2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.230978 kubelet[2617]: E1009 01:01:24.230816 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.230978 kubelet[2617]: E1009 01:01:24.230884 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qjz98" Oct 9 01:01:24.230978 kubelet[2617]: E1009 01:01:24.230919 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qjz98" Oct 9 01:01:24.231086 kubelet[2617]: E1009 01:01:24.230972 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qjz98_kube-system(2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qjz98_kube-system(2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qjz98" podUID="2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c" Oct 9 01:01:24.231086 kubelet[2617]: E1009 01:01:24.231043 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.231173 kubelet[2617]: E1009 01:01:24.231077 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-sjvsh" Oct 9 01:01:24.231173 kubelet[2617]: E1009 01:01:24.231103 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-sjvsh" Oct 9 01:01:24.231173 kubelet[2617]: E1009 01:01:24.231143 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-sjvsh_kube-system(51fb5937-e607-4b18-8b5f-0e10bcffa8ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-sjvsh_kube-system(51fb5937-e607-4b18-8b5f-0e10bcffa8ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-sjvsh" podUID="51fb5937-e607-4b18-8b5f-0e10bcffa8ee" Oct 9 01:01:24.420061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede-shm.mount: Deactivated successfully. Oct 9 01:01:24.466064 systemd[1]: Created slice kubepods-besteffort-pod3003107e_be83_475f_b7b0_944d115c5adb.slice - libcontainer container kubepods-besteffort-pod3003107e_be83_475f_b7b0_944d115c5adb.slice. 
Oct 9 01:01:24.469983 containerd[1468]: time="2024-10-09T01:01:24.469616973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tt4q,Uid:3003107e-be83-475f-b7b0-944d115c5adb,Namespace:calico-system,Attempt:0,}" Oct 9 01:01:24.565198 containerd[1468]: time="2024-10-09T01:01:24.561744145Z" level=error msg="Failed to destroy network for sandbox \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.564929 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9-shm.mount: Deactivated successfully. Oct 9 01:01:24.568313 containerd[1468]: time="2024-10-09T01:01:24.568089205Z" level=error msg="encountered an error cleaning up failed sandbox \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.568313 containerd[1468]: time="2024-10-09T01:01:24.568235026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tt4q,Uid:3003107e-be83-475f-b7b0-944d115c5adb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.568928 kubelet[2617]: E1009 01:01:24.568881 2617 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.572243 kubelet[2617]: E1009 01:01:24.568967 2617 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tt4q" Oct 9 01:01:24.572243 kubelet[2617]: E1009 01:01:24.569010 2617 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tt4q" Oct 9 01:01:24.572243 kubelet[2617]: E1009 01:01:24.569075 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2tt4q_calico-system(3003107e-be83-475f-b7b0-944d115c5adb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2tt4q_calico-system(3003107e-be83-475f-b7b0-944d115c5adb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2tt4q" 
podUID="3003107e-be83-475f-b7b0-944d115c5adb" Oct 9 01:01:24.715662 kubelet[2617]: I1009 01:01:24.715250 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" Oct 9 01:01:24.719673 containerd[1468]: time="2024-10-09T01:01:24.717938228Z" level=info msg="StopPodSandbox for \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\"" Oct 9 01:01:24.719673 containerd[1468]: time="2024-10-09T01:01:24.718218814Z" level=info msg="Ensure that sandbox 9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede in task-service has been cleanup successfully" Oct 9 01:01:24.719673 containerd[1468]: time="2024-10-09T01:01:24.718666479Z" level=info msg="StopPodSandbox for \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\"" Oct 9 01:01:24.719673 containerd[1468]: time="2024-10-09T01:01:24.718868437Z" level=info msg="Ensure that sandbox 6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9 in task-service has been cleanup successfully" Oct 9 01:01:24.719942 kubelet[2617]: I1009 01:01:24.718104 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" Oct 9 01:01:24.737033 kubelet[2617]: I1009 01:01:24.734925 2617 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" Oct 9 01:01:24.737441 containerd[1468]: time="2024-10-09T01:01:24.737399651Z" level=info msg="StopPodSandbox for \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\"" Oct 9 01:01:24.738789 containerd[1468]: time="2024-10-09T01:01:24.737825921Z" level=info msg="Ensure that sandbox a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b in task-service has been cleanup successfully" Oct 9 01:01:24.747705 kubelet[2617]: I1009 01:01:24.747673 2617 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" Oct 9 01:01:24.749831 containerd[1468]: time="2024-10-09T01:01:24.748843590Z" level=info msg="StopPodSandbox for \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\"" Oct 9 01:01:24.749831 containerd[1468]: time="2024-10-09T01:01:24.749112382Z" level=info msg="Ensure that sandbox 01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7 in task-service has been cleanup successfully" Oct 9 01:01:24.868763 containerd[1468]: time="2024-10-09T01:01:24.868588256Z" level=error msg="StopPodSandbox for \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\" failed" error="failed to destroy network for sandbox \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.871968 kubelet[2617]: E1009 01:01:24.871832 2617 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" Oct 9 01:01:24.872469 kubelet[2617]: E1009 01:01:24.872307 2617 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede"} Oct 9 01:01:24.872469 kubelet[2617]: E1009 01:01:24.872380 2617 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:01:24.872469 kubelet[2617]: E1009 01:01:24.872422 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qjz98" podUID="2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c" Oct 9 01:01:24.875127 containerd[1468]: time="2024-10-09T01:01:24.874874150Z" level=error msg="StopPodSandbox for \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\" failed" error="failed to destroy network for sandbox \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.877479 kubelet[2617]: E1009 01:01:24.877410 2617 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" Oct 9 01:01:24.877479 kubelet[2617]: E1009 01:01:24.877465 2617 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9"} Oct 9 01:01:24.877933 kubelet[2617]: E1009 01:01:24.877502 2617 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3003107e-be83-475f-b7b0-944d115c5adb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:01:24.877933 kubelet[2617]: E1009 01:01:24.877532 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3003107e-be83-475f-b7b0-944d115c5adb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2tt4q" podUID="3003107e-be83-475f-b7b0-944d115c5adb" Oct 9 01:01:24.896942 containerd[1468]: time="2024-10-09T01:01:24.895430945Z" level=error msg="StopPodSandbox for \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\" failed" error="failed to destroy network for sandbox \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 
01:01:24.898084 kubelet[2617]: E1009 01:01:24.897832 2617 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" Oct 9 01:01:24.898084 kubelet[2617]: E1009 01:01:24.897917 2617 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b"} Oct 9 01:01:24.898084 kubelet[2617]: E1009 01:01:24.897965 2617 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"51fb5937-e607-4b18-8b5f-0e10bcffa8ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:01:24.898084 kubelet[2617]: E1009 01:01:24.897989 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"51fb5937-e607-4b18-8b5f-0e10bcffa8ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-sjvsh" podUID="51fb5937-e607-4b18-8b5f-0e10bcffa8ee" Oct 9 01:01:24.903762 
containerd[1468]: time="2024-10-09T01:01:24.902274795Z" level=error msg="StopPodSandbox for \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\" failed" error="failed to destroy network for sandbox \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:01:24.903980 kubelet[2617]: E1009 01:01:24.902673 2617 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" Oct 9 01:01:24.903980 kubelet[2617]: E1009 01:01:24.902725 2617 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7"} Oct 9 01:01:24.903980 kubelet[2617]: E1009 01:01:24.902761 2617 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a82f38b-b623-4467-ae82-e233b16bd73c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:01:24.903980 kubelet[2617]: E1009 01:01:24.902790 2617 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a82f38b-b623-4467-ae82-e233b16bd73c\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d45c48964-hxd7l" podUID="1a82f38b-b623-4467-ae82-e233b16bd73c" Oct 9 01:01:25.335293 systemd[1]: Started sshd@7-143.110.225.158:22-139.178.68.195:60998.service - OpenSSH per-connection server daemon (139.178.68.195:60998). Oct 9 01:01:25.499541 sshd[3820]: Accepted publickey for core from 139.178.68.195 port 60998 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:01:25.502287 sshd[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:25.510532 systemd-logind[1449]: New session 8 of user core. Oct 9 01:01:25.517120 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 01:01:25.814704 sshd[3820]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:25.820252 systemd[1]: sshd@7-143.110.225.158:22-139.178.68.195:60998.service: Deactivated successfully. Oct 9 01:01:25.824961 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 01:01:25.826158 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Oct 9 01:01:25.830816 systemd-logind[1449]: Removed session 8. Oct 9 01:01:29.654210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2529461070.mount: Deactivated successfully. 
Oct 9 01:01:29.878005 containerd[1468]: time="2024-10-09T01:01:29.840777564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 01:01:29.930663 containerd[1468]: time="2024-10-09T01:01:29.929722708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:29.955668 containerd[1468]: time="2024-10-09T01:01:29.955317383Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:29.984259 containerd[1468]: time="2024-10-09T01:01:29.982768674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:29.984259 containerd[1468]: time="2024-10-09T01:01:29.983398191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 6.270541888s" Oct 9 01:01:29.984259 containerd[1468]: time="2024-10-09T01:01:29.983429603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 01:01:30.039820 containerd[1468]: time="2024-10-09T01:01:30.039775300Z" level=info msg="CreateContainer within sandbox \"35c2e5bce31ec4aa9d8067c95b93ce3a6793f809aed32ef4a91c567577c816fb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 01:01:30.166492 containerd[1468]: time="2024-10-09T01:01:30.166407901Z" level=info msg="CreateContainer 
within sandbox \"35c2e5bce31ec4aa9d8067c95b93ce3a6793f809aed32ef4a91c567577c816fb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0b93ecb56906f1c79fd59cc5aea164beb9b7fa5d2f2de266001ac3597b5d5f62\"" Oct 9 01:01:30.167867 containerd[1468]: time="2024-10-09T01:01:30.167621171Z" level=info msg="StartContainer for \"0b93ecb56906f1c79fd59cc5aea164beb9b7fa5d2f2de266001ac3597b5d5f62\"" Oct 9 01:01:30.305073 systemd[1]: Started cri-containerd-0b93ecb56906f1c79fd59cc5aea164beb9b7fa5d2f2de266001ac3597b5d5f62.scope - libcontainer container 0b93ecb56906f1c79fd59cc5aea164beb9b7fa5d2f2de266001ac3597b5d5f62. Oct 9 01:01:30.395346 containerd[1468]: time="2024-10-09T01:01:30.394394013Z" level=info msg="StartContainer for \"0b93ecb56906f1c79fd59cc5aea164beb9b7fa5d2f2de266001ac3597b5d5f62\" returns successfully" Oct 9 01:01:30.493068 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 01:01:30.494453 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 9 01:01:30.801563 kubelet[2617]: E1009 01:01:30.801370 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:30.834966 systemd[1]: Started sshd@8-143.110.225.158:22-139.178.68.195:56416.service - OpenSSH per-connection server daemon (139.178.68.195:56416). 
Oct 9 01:01:30.918173 kubelet[2617]: I1009 01:01:30.917778 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fbbm6" podStartSLOduration=3.596707543 podStartE2EDuration="14.91775445s" podCreationTimestamp="2024-10-09 01:01:16 +0000 UTC" firstStartedPulling="2024-10-09 01:01:18.680448118 +0000 UTC m=+46.413518472" lastFinishedPulling="2024-10-09 01:01:30.001495023 +0000 UTC m=+57.734565379" observedRunningTime="2024-10-09 01:01:30.907317744 +0000 UTC m=+58.640388117" watchObservedRunningTime="2024-10-09 01:01:30.91775445 +0000 UTC m=+58.650824846" Oct 9 01:01:30.987380 sshd[3902]: Accepted publickey for core from 139.178.68.195 port 56416 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:01:30.990747 sshd[3902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:31.005777 systemd-logind[1449]: New session 9 of user core. Oct 9 01:01:31.008857 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 01:01:31.235926 sshd[3902]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:31.245079 systemd[1]: sshd@8-143.110.225.158:22-139.178.68.195:56416.service: Deactivated successfully. Oct 9 01:01:31.250586 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 01:01:31.251672 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Oct 9 01:01:31.255129 systemd-logind[1449]: Removed session 9. Oct 9 01:01:31.791473 kubelet[2617]: E1009 01:01:31.791361 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:31.818106 systemd[1]: run-containerd-runc-k8s.io-0b93ecb56906f1c79fd59cc5aea164beb9b7fa5d2f2de266001ac3597b5d5f62-runc.qPZGrk.mount: Deactivated successfully. 
Oct 9 01:01:32.460066 containerd[1468]: time="2024-10-09T01:01:32.459862638Z" level=info msg="StopPodSandbox for \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\"" Oct 9 01:01:32.460066 containerd[1468]: time="2024-10-09T01:01:32.459963446Z" level=info msg="TearDown network for sandbox \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\" successfully" Oct 9 01:01:32.460066 containerd[1468]: time="2024-10-09T01:01:32.459974190Z" level=info msg="StopPodSandbox for \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\" returns successfully" Oct 9 01:01:32.465326 containerd[1468]: time="2024-10-09T01:01:32.464988099Z" level=info msg="RemovePodSandbox for \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\"" Oct 9 01:01:32.465326 containerd[1468]: time="2024-10-09T01:01:32.465149970Z" level=info msg="Forcibly stopping sandbox \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\"" Oct 9 01:01:32.473039 containerd[1468]: time="2024-10-09T01:01:32.465273130Z" level=info msg="TearDown network for sandbox \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\" successfully" Oct 9 01:01:32.480651 containerd[1468]: time="2024-10-09T01:01:32.480367518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:01:32.480651 containerd[1468]: time="2024-10-09T01:01:32.480465518Z" level=info msg="RemovePodSandbox \"1b263ceb1bc05757a7573eadd7d9cc71f0a69f9b0fee02b7a30481a3836bdfb5\" returns successfully" Oct 9 01:01:32.481989 containerd[1468]: time="2024-10-09T01:01:32.481951076Z" level=info msg="StopPodSandbox for \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\"" Oct 9 01:01:32.482121 containerd[1468]: time="2024-10-09T01:01:32.482048355Z" level=info msg="TearDown network for sandbox \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\" successfully" Oct 9 01:01:32.482121 containerd[1468]: time="2024-10-09T01:01:32.482059469Z" level=info msg="StopPodSandbox for \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\" returns successfully" Oct 9 01:01:32.483744 containerd[1468]: time="2024-10-09T01:01:32.483710811Z" level=info msg="RemovePodSandbox for \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\"" Oct 9 01:01:32.483861 containerd[1468]: time="2024-10-09T01:01:32.483748680Z" level=info msg="Forcibly stopping sandbox \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\"" Oct 9 01:01:32.483861 containerd[1468]: time="2024-10-09T01:01:32.483821822Z" level=info msg="TearDown network for sandbox \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\" successfully" Oct 9 01:01:32.488923 containerd[1468]: time="2024-10-09T01:01:32.488862464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:01:32.489074 containerd[1468]: time="2024-10-09T01:01:32.488939782Z" level=info msg="RemovePodSandbox \"e2543c9b7629f908114de02e8d49508c9b84ea1bb8bd58833b5a5c6be98ce5d6\" returns successfully" Oct 9 01:01:35.457951 containerd[1468]: time="2024-10-09T01:01:35.457899825Z" level=info msg="StopPodSandbox for \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\"" Oct 9 01:01:35.458490 containerd[1468]: time="2024-10-09T01:01:35.458301846Z" level=info msg="StopPodSandbox for \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\"" Oct 9 01:01:35.462298 containerd[1468]: time="2024-10-09T01:01:35.462234281Z" level=info msg="StopPodSandbox for \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\"" Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.596 [INFO][4157] k8s.go 608: Cleaning up netns ContainerID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.597 [INFO][4157] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" iface="eth0" netns="/var/run/netns/cni-bd900db7-28b2-c985-c9ec-dc28c745441a" Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.599 [INFO][4157] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" iface="eth0" netns="/var/run/netns/cni-bd900db7-28b2-c985-c9ec-dc28c745441a" Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.599 [INFO][4157] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" iface="eth0" netns="/var/run/netns/cni-bd900db7-28b2-c985-c9ec-dc28c745441a" Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.599 [INFO][4157] k8s.go 615: Releasing IP address(es) ContainerID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.599 [INFO][4157] utils.go 188: Calico CNI releasing IP address ContainerID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.780 [INFO][4176] ipam_plugin.go 417: Releasing address using handleID ContainerID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" HandleID="k8s-pod-network.9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.782 [INFO][4176] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.785 [INFO][4176] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.801 [WARNING][4176] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" HandleID="k8s-pod-network.9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.801 [INFO][4176] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" HandleID="k8s-pod-network.9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.807 [INFO][4176] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:35.820254 containerd[1468]: 2024-10-09 01:01:35.813 [INFO][4157] k8s.go 621: Teardown processing complete. ContainerID="9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede" Oct 9 01:01:35.823987 containerd[1468]: time="2024-10-09T01:01:35.821839055Z" level=info msg="TearDown network for sandbox \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\" successfully" Oct 9 01:01:35.823987 containerd[1468]: time="2024-10-09T01:01:35.821878276Z" level=info msg="StopPodSandbox for \"9a731e844eb5546ad18c810c816a3868ea1066ee955caadb3d49250dce8d7ede\" returns successfully" Oct 9 01:01:35.824204 kubelet[2617]: E1009 01:01:35.824025 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:35.825051 systemd[1]: run-netns-cni\x2dbd900db7\x2d28b2\x2dc985\x2dc9ec\x2ddc28c745441a.mount: Deactivated successfully. 
Oct 9 01:01:35.827836 containerd[1468]: time="2024-10-09T01:01:35.825824695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qjz98,Uid:2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c,Namespace:kube-system,Attempt:1,}" Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.631 [INFO][4158] k8s.go 608: Cleaning up netns ContainerID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.632 [INFO][4158] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" iface="eth0" netns="/var/run/netns/cni-57f6b1b6-90da-ac92-c32d-52a116dfd007" Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.633 [INFO][4158] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" iface="eth0" netns="/var/run/netns/cni-57f6b1b6-90da-ac92-c32d-52a116dfd007" Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.635 [INFO][4158] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" iface="eth0" netns="/var/run/netns/cni-57f6b1b6-90da-ac92-c32d-52a116dfd007" Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.635 [INFO][4158] k8s.go 615: Releasing IP address(es) ContainerID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.635 [INFO][4158] utils.go 188: Calico CNI releasing IP address ContainerID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.782 [INFO][4183] ipam_plugin.go 417: Releasing address using handleID ContainerID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" HandleID="k8s-pod-network.01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" Workload="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.783 [INFO][4183] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.808 [INFO][4183] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.816 [WARNING][4183] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" HandleID="k8s-pod-network.01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" Workload="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.817 [INFO][4183] ipam_plugin.go 445: Releasing address using workloadID ContainerID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" HandleID="k8s-pod-network.01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" Workload="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.830 [INFO][4183] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:35.853132 containerd[1468]: 2024-10-09 01:01:35.843 [INFO][4158] k8s.go 621: Teardown processing complete. ContainerID="01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7" Oct 9 01:01:35.854435 containerd[1468]: time="2024-10-09T01:01:35.853561749Z" level=info msg="TearDown network for sandbox \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\" successfully" Oct 9 01:01:35.854435 containerd[1468]: time="2024-10-09T01:01:35.853601108Z" level=info msg="StopPodSandbox for \"01e61eba8816559374a5d4cc414410a39a82aa70c7511c72ac4c377e9c6afdc7\" returns successfully" Oct 9 01:01:35.859543 systemd[1]: run-netns-cni\x2d57f6b1b6\x2d90da\x2dac92\x2dc32d\x2d52a116dfd007.mount: Deactivated successfully. 
Oct 9 01:01:35.863560 containerd[1468]: time="2024-10-09T01:01:35.863058158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d45c48964-hxd7l,Uid:1a82f38b-b623-4467-ae82-e233b16bd73c,Namespace:calico-system,Attempt:1,}" Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.588 [INFO][4153] k8s.go 608: Cleaning up netns ContainerID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.589 [INFO][4153] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" iface="eth0" netns="/var/run/netns/cni-026898a7-7d13-6078-1952-ed56be97d68e" Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.591 [INFO][4153] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" iface="eth0" netns="/var/run/netns/cni-026898a7-7d13-6078-1952-ed56be97d68e" Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.592 [INFO][4153] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" iface="eth0" netns="/var/run/netns/cni-026898a7-7d13-6078-1952-ed56be97d68e" Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.592 [INFO][4153] k8s.go 615: Releasing IP address(es) ContainerID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.593 [INFO][4153] utils.go 188: Calico CNI releasing IP address ContainerID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.781 [INFO][4175] ipam_plugin.go 417: Releasing address using handleID ContainerID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" HandleID="k8s-pod-network.a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.784 [INFO][4175] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.830 [INFO][4175] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.860 [WARNING][4175] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" HandleID="k8s-pod-network.a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.860 [INFO][4175] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" HandleID="k8s-pod-network.a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.866 [INFO][4175] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:35.877073 containerd[1468]: 2024-10-09 01:01:35.870 [INFO][4153] k8s.go 621: Teardown processing complete. ContainerID="a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b" Oct 9 01:01:35.879062 containerd[1468]: time="2024-10-09T01:01:35.878823248Z" level=info msg="TearDown network for sandbox \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\" successfully" Oct 9 01:01:35.879062 containerd[1468]: time="2024-10-09T01:01:35.878866872Z" level=info msg="StopPodSandbox for \"a37377057365b3216b5c09f606e845acd605ce902f29557fdd8bfe69bedaaf3b\" returns successfully" Oct 9 01:01:35.883721 systemd[1]: run-netns-cni\x2d026898a7\x2d7d13\x2d6078\x2d1952\x2ded56be97d68e.mount: Deactivated successfully. 
Oct 9 01:01:35.899663 kubelet[2617]: E1009 01:01:35.895910 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:35.901262 containerd[1468]: time="2024-10-09T01:01:35.900863098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sjvsh,Uid:51fb5937-e607-4b18-8b5f-0e10bcffa8ee,Namespace:kube-system,Attempt:1,}" Oct 9 01:01:36.254478 systemd[1]: Started sshd@9-143.110.225.158:22-139.178.68.195:56420.service - OpenSSH per-connection server daemon (139.178.68.195:56420). Oct 9 01:01:36.271395 systemd-networkd[1375]: cali27d7d27f5fc: Link UP Oct 9 01:01:36.271671 systemd-networkd[1375]: cali27d7d27f5fc: Gained carrier Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:35.983 [INFO][4203] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.011 [INFO][4203] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0 coredns-7db6d8ff4d- kube-system 2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c 898 0 2024-10-09 01:00:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116.0.0-c-50f1e82448 coredns-7db6d8ff4d-qjz98 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali27d7d27f5fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qjz98" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.012 [INFO][4203] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qjz98" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.123 [INFO][4247] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" HandleID="k8s-pod-network.884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.145 [INFO][4247] ipam_plugin.go 270: Auto assigning IP ContainerID="884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" HandleID="k8s-pod-network.884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000378730), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116.0.0-c-50f1e82448", "pod":"coredns-7db6d8ff4d-qjz98", "timestamp":"2024-10-09 01:01:36.123410843 +0000 UTC"}, Hostname:"ci-4116.0.0-c-50f1e82448", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.146 [INFO][4247] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.146 [INFO][4247] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.146 [INFO][4247] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116.0.0-c-50f1e82448' Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.149 [INFO][4247] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.163 [INFO][4247] ipam.go 372: Looking up existing affinities for host host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.190 [INFO][4247] ipam.go 489: Trying affinity for 192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.194 [INFO][4247] ipam.go 155: Attempting to load block cidr=192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.199 [INFO][4247] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.199 [INFO][4247] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.192/26 handle="k8s-pod-network.884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.205 [INFO][4247] ipam.go 1685: Creating new handle: k8s-pod-network.884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99 Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.216 [INFO][4247] ipam.go 1203: Writing block in order to claim IPs block=192.168.31.192/26 handle="k8s-pod-network.884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.231 [INFO][4247] ipam.go 1216: Successfully claimed IPs: [192.168.31.193/26] 
block=192.168.31.192/26 handle="k8s-pod-network.884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.231 [INFO][4247] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.193/26] handle="k8s-pod-network.884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.231 [INFO][4247] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:36.325532 containerd[1468]: 2024-10-09 01:01:36.232 [INFO][4247] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.31.193/26] IPv6=[] ContainerID="884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" HandleID="k8s-pod-network.884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" Oct 9 01:01:36.330002 containerd[1468]: 2024-10-09 01:01:36.240 [INFO][4203] k8s.go 386: Populated endpoint ContainerID="884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qjz98" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-c-50f1e82448", ContainerID:"", Pod:"coredns-7db6d8ff4d-qjz98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27d7d27f5fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:36.330002 containerd[1468]: 2024-10-09 01:01:36.242 [INFO][4203] k8s.go 387: Calico CNI using IPs: [192.168.31.193/32] ContainerID="884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qjz98" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" Oct 9 01:01:36.330002 containerd[1468]: 2024-10-09 01:01:36.242 [INFO][4203] dataplane_linux.go 68: Setting the host side veth name to cali27d7d27f5fc ContainerID="884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qjz98" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" Oct 9 01:01:36.330002 containerd[1468]: 2024-10-09 01:01:36.271 [INFO][4203] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-qjz98" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" Oct 9 01:01:36.330002 containerd[1468]: 2024-10-09 01:01:36.281 [INFO][4203] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qjz98" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-c-50f1e82448", ContainerID:"884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99", Pod:"coredns-7db6d8ff4d-qjz98", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27d7d27f5fc", MAC:"1a:d8:e2:2a:f9:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:36.330002 containerd[1468]: 2024-10-09 01:01:36.314 [INFO][4203] k8s.go 500: Wrote updated endpoint to datastore ContainerID="884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qjz98" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--qjz98-eth0" Oct 9 01:01:36.397833 sshd[4272]: Accepted publickey for core from 139.178.68.195 port 56420 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:01:36.404324 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:36.409200 systemd-networkd[1375]: cali7216238c3bb: Link UP Oct 9 01:01:36.417654 systemd-networkd[1375]: cali7216238c3bb: Gained carrier Oct 9 01:01:36.431902 systemd-logind[1449]: New session 10 of user core. Oct 9 01:01:36.436964 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:35.998 [INFO][4205] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.035 [INFO][4205] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0 calico-kube-controllers-d45c48964- calico-system 1a82f38b-b623-4467-ae82-e233b16bd73c 899 0 2024-10-09 01:01:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d45c48964 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4116.0.0-c-50f1e82448 calico-kube-controllers-d45c48964-hxd7l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7216238c3bb [] []}} ContainerID="5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" Namespace="calico-system" Pod="calico-kube-controllers-d45c48964-hxd7l" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-" Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.036 [INFO][4205] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" Namespace="calico-system" Pod="calico-kube-controllers-d45c48964-hxd7l" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.133 [INFO][4249] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" HandleID="k8s-pod-network.5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" Workload="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" Oct 9 
01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.151 [INFO][4249] ipam_plugin.go 270: Auto assigning IP ContainerID="5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" HandleID="k8s-pod-network.5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" Workload="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310b80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116.0.0-c-50f1e82448", "pod":"calico-kube-controllers-d45c48964-hxd7l", "timestamp":"2024-10-09 01:01:36.133532036 +0000 UTC"}, Hostname:"ci-4116.0.0-c-50f1e82448", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.151 [INFO][4249] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.231 [INFO][4249] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.233 [INFO][4249] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116.0.0-c-50f1e82448' Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.238 [INFO][4249] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.280 [INFO][4249] ipam.go 372: Looking up existing affinities for host host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.297 [INFO][4249] ipam.go 489: Trying affinity for 192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.302 [INFO][4249] ipam.go 155: Attempting to load block cidr=192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.320 [INFO][4249] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.321 [INFO][4249] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.192/26 handle="k8s-pod-network.5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.340 [INFO][4249] ipam.go 1685: Creating new handle: k8s-pod-network.5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.352 [INFO][4249] ipam.go 1203: Writing block in order to claim IPs block=192.168.31.192/26 handle="k8s-pod-network.5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.367 [INFO][4249] ipam.go 1216: Successfully claimed IPs: [192.168.31.194/26] 
block=192.168.31.192/26 handle="k8s-pod-network.5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.367 [INFO][4249] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.194/26] handle="k8s-pod-network.5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.368 [INFO][4249] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:36.447865 containerd[1468]: 2024-10-09 01:01:36.368 [INFO][4249] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.31.194/26] IPv6=[] ContainerID="5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" HandleID="k8s-pod-network.5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" Workload="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" Oct 9 01:01:36.452478 containerd[1468]: 2024-10-09 01:01:36.385 [INFO][4205] k8s.go 386: Populated endpoint ContainerID="5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" Namespace="calico-system" Pod="calico-kube-controllers-d45c48964-hxd7l" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0", GenerateName:"calico-kube-controllers-d45c48964-", Namespace:"calico-system", SelfLink:"", UID:"1a82f38b-b623-4467-ae82-e233b16bd73c", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"d45c48964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-c-50f1e82448", ContainerID:"", Pod:"calico-kube-controllers-d45c48964-hxd7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7216238c3bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:36.452478 containerd[1468]: 2024-10-09 01:01:36.385 [INFO][4205] k8s.go 387: Calico CNI using IPs: [192.168.31.194/32] ContainerID="5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" Namespace="calico-system" Pod="calico-kube-controllers-d45c48964-hxd7l" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" Oct 9 01:01:36.452478 containerd[1468]: 2024-10-09 01:01:36.385 [INFO][4205] dataplane_linux.go 68: Setting the host side veth name to cali7216238c3bb ContainerID="5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" Namespace="calico-system" Pod="calico-kube-controllers-d45c48964-hxd7l" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" Oct 9 01:01:36.452478 containerd[1468]: 2024-10-09 01:01:36.418 [INFO][4205] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" Namespace="calico-system" Pod="calico-kube-controllers-d45c48964-hxd7l" 
WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" Oct 9 01:01:36.452478 containerd[1468]: 2024-10-09 01:01:36.418 [INFO][4205] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" Namespace="calico-system" Pod="calico-kube-controllers-d45c48964-hxd7l" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0", GenerateName:"calico-kube-controllers-d45c48964-", Namespace:"calico-system", SelfLink:"", UID:"1a82f38b-b623-4467-ae82-e233b16bd73c", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d45c48964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-c-50f1e82448", ContainerID:"5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb", Pod:"calico-kube-controllers-d45c48964-hxd7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7216238c3bb", 
MAC:"ca:15:87:16:56:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:36.452478 containerd[1468]: 2024-10-09 01:01:36.436 [INFO][4205] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb" Namespace="calico-system" Pod="calico-kube-controllers-d45c48964-hxd7l" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--kube--controllers--d45c48964--hxd7l-eth0" Oct 9 01:01:36.479869 containerd[1468]: time="2024-10-09T01:01:36.479648555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:36.479869 containerd[1468]: time="2024-10-09T01:01:36.479809072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:36.479869 containerd[1468]: time="2024-10-09T01:01:36.479834765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:36.484446 containerd[1468]: time="2024-10-09T01:01:36.479979761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:36.512117 systemd-networkd[1375]: cali728c8213979: Link UP Oct 9 01:01:36.514539 systemd-networkd[1375]: cali728c8213979: Gained carrier Oct 9 01:01:36.536229 systemd[1]: Started cri-containerd-884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99.scope - libcontainer container 884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99. Oct 9 01:01:36.570925 containerd[1468]: time="2024-10-09T01:01:36.570695524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:36.570925 containerd[1468]: time="2024-10-09T01:01:36.570867091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:36.570925 containerd[1468]: time="2024-10-09T01:01:36.570900718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:36.575667 containerd[1468]: time="2024-10-09T01:01:36.574512932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.060 [INFO][4229] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.103 [INFO][4229] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0 coredns-7db6d8ff4d- kube-system 51fb5937-e607-4b18-8b5f-0e10bcffa8ee 897 0 2024-10-09 01:00:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116.0.0-c-50f1e82448 coredns-7db6d8ff4d-sjvsh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali728c8213979 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sjvsh" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.103 [INFO][4229] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sjvsh" 
WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.223 [INFO][4261] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" HandleID="k8s-pod-network.cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.242 [INFO][4261] ipam_plugin.go 270: Auto assigning IP ContainerID="cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" HandleID="k8s-pod-network.cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037e100), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116.0.0-c-50f1e82448", "pod":"coredns-7db6d8ff4d-sjvsh", "timestamp":"2024-10-09 01:01:36.223752971 +0000 UTC"}, Hostname:"ci-4116.0.0-c-50f1e82448", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.244 [INFO][4261] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.369 [INFO][4261] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.369 [INFO][4261] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116.0.0-c-50f1e82448' Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.381 [INFO][4261] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.408 [INFO][4261] ipam.go 372: Looking up existing affinities for host host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.441 [INFO][4261] ipam.go 489: Trying affinity for 192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.455 [INFO][4261] ipam.go 155: Attempting to load block cidr=192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.461 [INFO][4261] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.461 [INFO][4261] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.192/26 handle="k8s-pod-network.cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.467 [INFO][4261] ipam.go 1685: Creating new handle: k8s-pod-network.cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372 Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.485 [INFO][4261] ipam.go 1203: Writing block in order to claim IPs block=192.168.31.192/26 handle="k8s-pod-network.cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.496 [INFO][4261] ipam.go 1216: Successfully claimed IPs: [192.168.31.195/26] 
block=192.168.31.192/26 handle="k8s-pod-network.cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.496 [INFO][4261] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.195/26] handle="k8s-pod-network.cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.497 [INFO][4261] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:36.587862 containerd[1468]: 2024-10-09 01:01:36.497 [INFO][4261] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.31.195/26] IPv6=[] ContainerID="cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" HandleID="k8s-pod-network.cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" Workload="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" Oct 9 01:01:36.591774 containerd[1468]: 2024-10-09 01:01:36.503 [INFO][4229] k8s.go 386: Populated endpoint ContainerID="cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sjvsh" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"51fb5937-e607-4b18-8b5f-0e10bcffa8ee", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-c-50f1e82448", ContainerID:"", Pod:"coredns-7db6d8ff4d-sjvsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali728c8213979", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:36.591774 containerd[1468]: 2024-10-09 01:01:36.504 [INFO][4229] k8s.go 387: Calico CNI using IPs: [192.168.31.195/32] ContainerID="cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sjvsh" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" Oct 9 01:01:36.591774 containerd[1468]: 2024-10-09 01:01:36.504 [INFO][4229] dataplane_linux.go 68: Setting the host side veth name to cali728c8213979 ContainerID="cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sjvsh" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" Oct 9 01:01:36.591774 containerd[1468]: 2024-10-09 01:01:36.517 [INFO][4229] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-sjvsh" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" Oct 9 01:01:36.591774 containerd[1468]: 2024-10-09 01:01:36.528 [INFO][4229] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sjvsh" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"51fb5937-e607-4b18-8b5f-0e10bcffa8ee", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 0, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-c-50f1e82448", ContainerID:"cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372", Pod:"coredns-7db6d8ff4d-sjvsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali728c8213979", MAC:"36:35:64:bc:6e:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:36.591774 containerd[1468]: 2024-10-09 01:01:36.579 [INFO][4229] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sjvsh" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-coredns--7db6d8ff4d--sjvsh-eth0" Oct 9 01:01:36.638083 systemd[1]: Started cri-containerd-5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb.scope - libcontainer container 5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb. Oct 9 01:01:36.693077 containerd[1468]: time="2024-10-09T01:01:36.692950948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:36.693515 containerd[1468]: time="2024-10-09T01:01:36.693349872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:36.693515 containerd[1468]: time="2024-10-09T01:01:36.693371969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:36.693602 containerd[1468]: time="2024-10-09T01:01:36.693543733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:36.731170 containerd[1468]: time="2024-10-09T01:01:36.731120569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qjz98,Uid:2b1505c0-40f4-4aa5-b3f7-24809bd7ea0c,Namespace:kube-system,Attempt:1,} returns sandbox id \"884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99\"" Oct 9 01:01:36.735984 kubelet[2617]: E1009 01:01:36.735941 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:36.756175 containerd[1468]: time="2024-10-09T01:01:36.755972797Z" level=info msg="CreateContainer within sandbox \"884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:01:36.761875 systemd[1]: Started cri-containerd-cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372.scope - libcontainer container cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372. Oct 9 01:01:36.805545 sshd[4272]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:36.816742 containerd[1468]: time="2024-10-09T01:01:36.816301046Z" level=info msg="CreateContainer within sandbox \"884aa267f3a8632b3204c2b8f559a2d4607025e2915acde39eb5e4c91bee0b99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fca59df3c73cb8c4d5b2a328ee4ce78477a6d0e0c17fdbb62a67542143f4bf26\"" Oct 9 01:01:36.819081 containerd[1468]: time="2024-10-09T01:01:36.819044361Z" level=info msg="StartContainer for \"fca59df3c73cb8c4d5b2a328ee4ce78477a6d0e0c17fdbb62a67542143f4bf26\"" Oct 9 01:01:36.819845 systemd[1]: sshd@9-143.110.225.158:22-139.178.68.195:56420.service: Deactivated successfully. 
Oct 9 01:01:36.850187 containerd[1468]: time="2024-10-09T01:01:36.849921279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d45c48964-hxd7l,Uid:1a82f38b-b623-4467-ae82-e233b16bd73c,Namespace:calico-system,Attempt:1,} returns sandbox id \"5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb\"" Oct 9 01:01:36.861668 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 01:01:36.863675 containerd[1468]: time="2024-10-09T01:01:36.863178639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 01:01:36.874682 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Oct 9 01:01:36.879420 systemd-logind[1449]: Removed session 10. Oct 9 01:01:36.915998 containerd[1468]: time="2024-10-09T01:01:36.915954243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sjvsh,Uid:51fb5937-e607-4b18-8b5f-0e10bcffa8ee,Namespace:kube-system,Attempt:1,} returns sandbox id \"cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372\"" Oct 9 01:01:36.919213 kubelet[2617]: E1009 01:01:36.919175 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:36.931961 containerd[1468]: time="2024-10-09T01:01:36.931906036Z" level=info msg="CreateContainer within sandbox \"cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:01:36.947113 systemd[1]: run-containerd-runc-k8s.io-fca59df3c73cb8c4d5b2a328ee4ce78477a6d0e0c17fdbb62a67542143f4bf26-runc.hsgzkR.mount: Deactivated successfully. Oct 9 01:01:36.956979 systemd[1]: Started cri-containerd-fca59df3c73cb8c4d5b2a328ee4ce78477a6d0e0c17fdbb62a67542143f4bf26.scope - libcontainer container fca59df3c73cb8c4d5b2a328ee4ce78477a6d0e0c17fdbb62a67542143f4bf26. 
Oct 9 01:01:36.977726 containerd[1468]: time="2024-10-09T01:01:36.976319736Z" level=info msg="CreateContainer within sandbox \"cd52a6ae61f16c75adc249abe3ed0605414636be4409da46b067ec99334c8372\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"722758ab6e83e552027068a0273e473c47428e11293c0f56f922acd432763f95\"" Oct 9 01:01:36.980718 containerd[1468]: time="2024-10-09T01:01:36.978541241Z" level=info msg="StartContainer for \"722758ab6e83e552027068a0273e473c47428e11293c0f56f922acd432763f95\"" Oct 9 01:01:37.040948 systemd[1]: Started cri-containerd-722758ab6e83e552027068a0273e473c47428e11293c0f56f922acd432763f95.scope - libcontainer container 722758ab6e83e552027068a0273e473c47428e11293c0f56f922acd432763f95. Oct 9 01:01:37.072114 containerd[1468]: time="2024-10-09T01:01:37.070065122Z" level=info msg="StartContainer for \"fca59df3c73cb8c4d5b2a328ee4ce78477a6d0e0c17fdbb62a67542143f4bf26\" returns successfully" Oct 9 01:01:37.095135 containerd[1468]: time="2024-10-09T01:01:37.095078286Z" level=info msg="StartContainer for \"722758ab6e83e552027068a0273e473c47428e11293c0f56f922acd432763f95\" returns successfully" Oct 9 01:01:37.501950 systemd-networkd[1375]: cali27d7d27f5fc: Gained IPv6LL Oct 9 01:01:37.821934 systemd-networkd[1375]: cali728c8213979: Gained IPv6LL Oct 9 01:01:37.829125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3570895635.mount: Deactivated successfully. 
Oct 9 01:01:37.870791 kubelet[2617]: E1009 01:01:37.870108 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:37.874003 kubelet[2617]: E1009 01:01:37.873961 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:37.915314 kubelet[2617]: I1009 01:01:37.915248 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sjvsh" podStartSLOduration=51.915223026 podStartE2EDuration="51.915223026s" podCreationTimestamp="2024-10-09 01:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:01:37.895312095 +0000 UTC m=+65.628382469" watchObservedRunningTime="2024-10-09 01:01:37.915223026 +0000 UTC m=+65.648293399" Oct 9 01:01:37.939872 kubelet[2617]: I1009 01:01:37.939808 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qjz98" podStartSLOduration=51.939773646 podStartE2EDuration="51.939773646s" podCreationTimestamp="2024-10-09 01:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:01:37.91598713 +0000 UTC m=+65.649057503" watchObservedRunningTime="2024-10-09 01:01:37.939773646 +0000 UTC m=+65.672844019" Oct 9 01:01:38.078581 systemd-networkd[1375]: cali7216238c3bb: Gained IPv6LL Oct 9 01:01:38.467181 containerd[1468]: time="2024-10-09T01:01:38.466243099Z" level=info msg="StopPodSandbox for \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\"" Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.641 [INFO][4579] k8s.go 608: Cleaning up netns 
ContainerID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.641 [INFO][4579] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" iface="eth0" netns="/var/run/netns/cni-0658b892-a975-2bc5-3694-bfc873ca3b19" Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.642 [INFO][4579] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" iface="eth0" netns="/var/run/netns/cni-0658b892-a975-2bc5-3694-bfc873ca3b19" Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.644 [INFO][4579] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" iface="eth0" netns="/var/run/netns/cni-0658b892-a975-2bc5-3694-bfc873ca3b19" Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.644 [INFO][4579] k8s.go 615: Releasing IP address(es) ContainerID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.644 [INFO][4579] utils.go 188: Calico CNI releasing IP address ContainerID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.690 [INFO][4587] ipam_plugin.go 417: Releasing address using handleID ContainerID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" HandleID="k8s-pod-network.6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" Workload="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.690 [INFO][4587] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.690 [INFO][4587] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.697 [WARNING][4587] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" HandleID="k8s-pod-network.6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" Workload="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.697 [INFO][4587] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" HandleID="k8s-pod-network.6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" Workload="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.700 [INFO][4587] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:38.711383 containerd[1468]: 2024-10-09 01:01:38.706 [INFO][4579] k8s.go 621: Teardown processing complete. ContainerID="6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9" Oct 9 01:01:38.715384 containerd[1468]: time="2024-10-09T01:01:38.714273536Z" level=info msg="TearDown network for sandbox \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\" successfully" Oct 9 01:01:38.715384 containerd[1468]: time="2024-10-09T01:01:38.714311243Z" level=info msg="StopPodSandbox for \"6278ae22d3a9640c67feabab778b33700ef10012dbfabeb25a837e86f48bc9c9\" returns successfully" Oct 9 01:01:38.718122 systemd[1]: run-netns-cni\x2d0658b892\x2da975\x2d2bc5\x2d3694\x2dbfc873ca3b19.mount: Deactivated successfully. 
Oct 9 01:01:38.721289 containerd[1468]: time="2024-10-09T01:01:38.719991000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tt4q,Uid:3003107e-be83-475f-b7b0-944d115c5adb,Namespace:calico-system,Attempt:1,}" Oct 9 01:01:38.898470 kubelet[2617]: E1009 01:01:38.897197 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:38.904135 kubelet[2617]: E1009 01:01:38.904094 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:38.993628 systemd-networkd[1375]: cali925e1a67884: Link UP Oct 9 01:01:38.997229 systemd-networkd[1375]: cali925e1a67884: Gained carrier Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.784 [INFO][4594] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.803 [INFO][4594] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0 csi-node-driver- calico-system 3003107e-be83-475f-b7b0-944d115c5adb 956 0 2024-10-09 01:01:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4116.0.0-c-50f1e82448 csi-node-driver-2tt4q eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali925e1a67884 [] []}} ContainerID="993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" Namespace="calico-system" Pod="csi-node-driver-2tt4q" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-" Oct 9 
01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.803 [INFO][4594] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" Namespace="calico-system" Pod="csi-node-driver-2tt4q" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.883 [INFO][4604] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" HandleID="k8s-pod-network.993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" Workload="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.902 [INFO][4604] ipam_plugin.go 270: Auto assigning IP ContainerID="993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" HandleID="k8s-pod-network.993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" Workload="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050080), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116.0.0-c-50f1e82448", "pod":"csi-node-driver-2tt4q", "timestamp":"2024-10-09 01:01:38.883899022 +0000 UTC"}, Hostname:"ci-4116.0.0-c-50f1e82448", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.903 [INFO][4604] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.903 [INFO][4604] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.903 [INFO][4604] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116.0.0-c-50f1e82448' Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.909 [INFO][4604] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.924 [INFO][4604] ipam.go 372: Looking up existing affinities for host host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.936 [INFO][4604] ipam.go 489: Trying affinity for 192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.941 [INFO][4604] ipam.go 155: Attempting to load block cidr=192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.946 [INFO][4604] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.946 [INFO][4604] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.192/26 handle="k8s-pod-network.993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.950 [INFO][4604] ipam.go 1685: Creating new handle: k8s-pod-network.993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.959 [INFO][4604] ipam.go 1203: Writing block in order to claim IPs block=192.168.31.192/26 handle="k8s-pod-network.993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.972 [INFO][4604] ipam.go 1216: Successfully claimed IPs: [192.168.31.196/26] 
block=192.168.31.192/26 handle="k8s-pod-network.993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.973 [INFO][4604] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.196/26] handle="k8s-pod-network.993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.973 [INFO][4604] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:01:39.046612 containerd[1468]: 2024-10-09 01:01:38.973 [INFO][4604] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.31.196/26] IPv6=[] ContainerID="993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" HandleID="k8s-pod-network.993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" Workload="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" Oct 9 01:01:39.047403 containerd[1468]: 2024-10-09 01:01:38.980 [INFO][4594] k8s.go 386: Populated endpoint ContainerID="993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" Namespace="calico-system" Pod="csi-node-driver-2tt4q" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3003107e-be83-475f-b7b0-944d115c5adb", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-c-50f1e82448", ContainerID:"", Pod:"csi-node-driver-2tt4q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali925e1a67884", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:39.047403 containerd[1468]: 2024-10-09 01:01:38.982 [INFO][4594] k8s.go 387: Calico CNI using IPs: [192.168.31.196/32] ContainerID="993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" Namespace="calico-system" Pod="csi-node-driver-2tt4q" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" Oct 9 01:01:39.047403 containerd[1468]: 2024-10-09 01:01:38.982 [INFO][4594] dataplane_linux.go 68: Setting the host side veth name to cali925e1a67884 ContainerID="993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" Namespace="calico-system" Pod="csi-node-driver-2tt4q" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" Oct 9 01:01:39.047403 containerd[1468]: 2024-10-09 01:01:39.000 [INFO][4594] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" Namespace="calico-system" Pod="csi-node-driver-2tt4q" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" Oct 9 01:01:39.047403 containerd[1468]: 2024-10-09 01:01:39.011 [INFO][4594] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" Namespace="calico-system" Pod="csi-node-driver-2tt4q" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3003107e-be83-475f-b7b0-944d115c5adb", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 1, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-c-50f1e82448", ContainerID:"993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e", Pod:"csi-node-driver-2tt4q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali925e1a67884", MAC:"ba:ea:07:63:e8:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:01:39.047403 containerd[1468]: 2024-10-09 01:01:39.035 [INFO][4594] k8s.go 500: Wrote updated endpoint to datastore ContainerID="993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e" Namespace="calico-system" 
Pod="csi-node-driver-2tt4q" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-csi--node--driver--2tt4q-eth0" Oct 9 01:01:39.150665 containerd[1468]: time="2024-10-09T01:01:39.149833407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:01:39.150665 containerd[1468]: time="2024-10-09T01:01:39.149931459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:01:39.150665 containerd[1468]: time="2024-10-09T01:01:39.149954318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:39.150665 containerd[1468]: time="2024-10-09T01:01:39.150054106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:01:39.222191 systemd[1]: Started cri-containerd-993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e.scope - libcontainer container 993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e. 
Oct 9 01:01:39.293617 containerd[1468]: time="2024-10-09T01:01:39.293550870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tt4q,Uid:3003107e-be83-475f-b7b0-944d115c5adb,Namespace:calico-system,Attempt:1,} returns sandbox id \"993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e\"" Oct 9 01:01:39.794422 containerd[1468]: time="2024-10-09T01:01:39.793213557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:39.795534 containerd[1468]: time="2024-10-09T01:01:39.795367331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 9 01:01:39.796799 containerd[1468]: time="2024-10-09T01:01:39.796050789Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:39.802695 containerd[1468]: time="2024-10-09T01:01:39.802342755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:39.804366 containerd[1468]: time="2024-10-09T01:01:39.803953373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.939721985s" Oct 9 01:01:39.804366 containerd[1468]: time="2024-10-09T01:01:39.804009498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference 
\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 9 01:01:39.808350 containerd[1468]: time="2024-10-09T01:01:39.808301628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 01:01:39.835846 containerd[1468]: time="2024-10-09T01:01:39.835793584Z" level=info msg="CreateContainer within sandbox \"5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 01:01:39.862685 containerd[1468]: time="2024-10-09T01:01:39.861895337Z" level=info msg="CreateContainer within sandbox \"5062efb48ba6f1341f9d1ff48c24e52eaabb76c110628991af99533bde1899bb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"05865b8f772ce82eea4e80fdd20939b09dc2fa227b4d08fdf1e2bb9061b5abf4\"" Oct 9 01:01:39.865425 containerd[1468]: time="2024-10-09T01:01:39.863835419Z" level=info msg="StartContainer for \"05865b8f772ce82eea4e80fdd20939b09dc2fa227b4d08fdf1e2bb9061b5abf4\"" Oct 9 01:01:39.908020 systemd[1]: Started cri-containerd-05865b8f772ce82eea4e80fdd20939b09dc2fa227b4d08fdf1e2bb9061b5abf4.scope - libcontainer container 05865b8f772ce82eea4e80fdd20939b09dc2fa227b4d08fdf1e2bb9061b5abf4. 
Oct 9 01:01:39.912107 kubelet[2617]: E1009 01:01:39.911917 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:39.914039 kubelet[2617]: E1009 01:01:39.911917 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:39.992151 containerd[1468]: time="2024-10-09T01:01:39.991956959Z" level=info msg="StartContainer for \"05865b8f772ce82eea4e80fdd20939b09dc2fa227b4d08fdf1e2bb9061b5abf4\" returns successfully" Oct 9 01:01:40.957810 systemd-networkd[1375]: cali925e1a67884: Gained IPv6LL Oct 9 01:01:41.045694 kubelet[2617]: I1009 01:01:41.045527 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d45c48964-hxd7l" podStartSLOduration=24.100002235 podStartE2EDuration="27.045495336s" podCreationTimestamp="2024-10-09 01:01:14 +0000 UTC" firstStartedPulling="2024-10-09 01:01:36.860059241 +0000 UTC m=+64.593129593" lastFinishedPulling="2024-10-09 01:01:39.805552327 +0000 UTC m=+67.538622694" observedRunningTime="2024-10-09 01:01:41.008585598 +0000 UTC m=+68.741655973" watchObservedRunningTime="2024-10-09 01:01:41.045495336 +0000 UTC m=+68.778565709" Oct 9 01:01:41.343721 containerd[1468]: time="2024-10-09T01:01:41.342821393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:41.343721 containerd[1468]: time="2024-10-09T01:01:41.343547404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 9 01:01:41.345349 containerd[1468]: time="2024-10-09T01:01:41.345310738Z" level=info msg="ImageCreate event 
name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:41.347809 containerd[1468]: time="2024-10-09T01:01:41.347738799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:41.348669 containerd[1468]: time="2024-10-09T01:01:41.348481474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.540113322s" Oct 9 01:01:41.348669 containerd[1468]: time="2024-10-09T01:01:41.348531175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 9 01:01:41.351613 containerd[1468]: time="2024-10-09T01:01:41.351574486Z" level=info msg="CreateContainer within sandbox \"993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 01:01:41.385457 containerd[1468]: time="2024-10-09T01:01:41.384631783Z" level=info msg="CreateContainer within sandbox \"993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6585761c63ccc68ceb4054c8313c207bb1f7d90c3d2401a5be82fe8106976b5b\"" Oct 9 01:01:41.385650 containerd[1468]: time="2024-10-09T01:01:41.385514183Z" level=info msg="StartContainer for \"6585761c63ccc68ceb4054c8313c207bb1f7d90c3d2401a5be82fe8106976b5b\"" Oct 9 01:01:41.443935 systemd[1]: Started 
cri-containerd-6585761c63ccc68ceb4054c8313c207bb1f7d90c3d2401a5be82fe8106976b5b.scope - libcontainer container 6585761c63ccc68ceb4054c8313c207bb1f7d90c3d2401a5be82fe8106976b5b. Oct 9 01:01:41.493430 containerd[1468]: time="2024-10-09T01:01:41.493087677Z" level=info msg="StartContainer for \"6585761c63ccc68ceb4054c8313c207bb1f7d90c3d2401a5be82fe8106976b5b\" returns successfully" Oct 9 01:01:41.495964 containerd[1468]: time="2024-10-09T01:01:41.495297750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 01:01:41.823585 systemd[1]: Started sshd@10-143.110.225.158:22-139.178.68.195:40296.service - OpenSSH per-connection server daemon (139.178.68.195:40296). Oct 9 01:01:41.934023 sshd[4812]: Accepted publickey for core from 139.178.68.195 port 40296 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:01:41.937490 sshd[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:41.949258 systemd-logind[1449]: New session 11 of user core. Oct 9 01:01:41.954704 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 01:01:42.218249 sshd[4812]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:42.230144 systemd[1]: sshd@10-143.110.225.158:22-139.178.68.195:40296.service: Deactivated successfully. Oct 9 01:01:42.236818 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 01:01:42.240707 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Oct 9 01:01:42.253783 systemd[1]: Started sshd@11-143.110.225.158:22-139.178.68.195:40302.service - OpenSSH per-connection server daemon (139.178.68.195:40302). Oct 9 01:01:42.256746 systemd-logind[1449]: Removed session 11. 
Oct 9 01:01:42.323177 sshd[4837]: Accepted publickey for core from 139.178.68.195 port 40302 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:01:42.325514 sshd[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:42.336926 systemd-logind[1449]: New session 12 of user core. Oct 9 01:01:42.342001 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 01:01:42.562365 sshd[4837]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:42.578367 systemd[1]: sshd@11-143.110.225.158:22-139.178.68.195:40302.service: Deactivated successfully. Oct 9 01:01:42.583948 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 01:01:42.585303 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Oct 9 01:01:42.602465 systemd[1]: Started sshd@12-143.110.225.158:22-139.178.68.195:40312.service - OpenSSH per-connection server daemon (139.178.68.195:40312). Oct 9 01:01:42.609034 systemd-logind[1449]: Removed session 12. Oct 9 01:01:42.680981 sshd[4850]: Accepted publickey for core from 139.178.68.195 port 40312 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:01:42.685482 sshd[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:42.694698 systemd-logind[1449]: New session 13 of user core. Oct 9 01:01:42.700004 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 01:01:43.017912 sshd[4850]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:43.025275 systemd[1]: sshd@12-143.110.225.158:22-139.178.68.195:40312.service: Deactivated successfully. Oct 9 01:01:43.025626 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Oct 9 01:01:43.028941 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 01:01:43.034259 systemd-logind[1449]: Removed session 13. 
Oct 9 01:01:43.270970 containerd[1468]: time="2024-10-09T01:01:43.270790753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:43.272578 containerd[1468]: time="2024-10-09T01:01:43.272386544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 9 01:01:43.273462 containerd[1468]: time="2024-10-09T01:01:43.273394008Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:43.277758 containerd[1468]: time="2024-10-09T01:01:43.277618920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:01:43.279191 containerd[1468]: time="2024-10-09T01:01:43.278975001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.782641381s" Oct 9 01:01:43.279191 containerd[1468]: time="2024-10-09T01:01:43.279042164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 9 01:01:43.283433 containerd[1468]: time="2024-10-09T01:01:43.282937404Z" level=info msg="CreateContainer within sandbox \"993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 01:01:43.307527 containerd[1468]: time="2024-10-09T01:01:43.307471253Z" level=info msg="CreateContainer within sandbox \"993011c6e9e91cfdceeeaa5a0ee29b73dadd69775c84712a682d1e0f78c6aa8e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c851acd96d77190da9e8156485be24a8cfed90ba58a3fd883a9c344d1adcf63e\"" Oct 9 01:01:43.310307 containerd[1468]: time="2024-10-09T01:01:43.308654351Z" level=info msg="StartContainer for \"c851acd96d77190da9e8156485be24a8cfed90ba58a3fd883a9c344d1adcf63e\"" Oct 9 01:01:43.363133 systemd[1]: Started cri-containerd-c851acd96d77190da9e8156485be24a8cfed90ba58a3fd883a9c344d1adcf63e.scope - libcontainer container c851acd96d77190da9e8156485be24a8cfed90ba58a3fd883a9c344d1adcf63e. Oct 9 01:01:43.470515 containerd[1468]: time="2024-10-09T01:01:43.470419737Z" level=info msg="StartContainer for \"c851acd96d77190da9e8156485be24a8cfed90ba58a3fd883a9c344d1adcf63e\" returns successfully" Oct 9 01:01:43.715577 kubelet[2617]: I1009 01:01:43.715426 2617 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 01:01:43.720360 kubelet[2617]: I1009 01:01:43.720249 2617 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 01:01:44.026431 kubelet[2617]: I1009 01:01:44.026204 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2tt4q" podStartSLOduration=27.045007482 podStartE2EDuration="31.026173488s" podCreationTimestamp="2024-10-09 01:01:13 +0000 UTC" firstStartedPulling="2024-10-09 01:01:39.298958789 +0000 UTC m=+67.032029142" lastFinishedPulling="2024-10-09 01:01:43.280124791 +0000 UTC m=+71.013195148" observedRunningTime="2024-10-09 01:01:44.023287128 +0000 UTC m=+71.756357503" 
watchObservedRunningTime="2024-10-09 01:01:44.026173488 +0000 UTC m=+71.759243862" Oct 9 01:01:47.215601 systemd[1]: run-containerd-runc-k8s.io-0b93ecb56906f1c79fd59cc5aea164beb9b7fa5d2f2de266001ac3597b5d5f62-runc.qZ2YOM.mount: Deactivated successfully. Oct 9 01:01:48.038536 systemd[1]: Started sshd@13-143.110.225.158:22-139.178.68.195:40322.service - OpenSSH per-connection server daemon (139.178.68.195:40322). Oct 9 01:01:48.150468 sshd[5043]: Accepted publickey for core from 139.178.68.195 port 40322 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:01:48.153451 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:48.161605 systemd-logind[1449]: New session 14 of user core. Oct 9 01:01:48.165952 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 01:01:48.502409 sshd[5043]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:48.509934 systemd[1]: sshd@13-143.110.225.158:22-139.178.68.195:40322.service: Deactivated successfully. Oct 9 01:01:48.513556 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 01:01:48.517276 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Oct 9 01:01:48.519412 systemd-logind[1449]: Removed session 14. 
Oct 9 01:01:49.341860 kubelet[2617]: E1009 01:01:49.341530 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:50.050666 kubelet[2617]: E1009 01:01:50.048921 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:50.111982 kernel: bpftool[5132]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 01:01:50.460135 kubelet[2617]: E1009 01:01:50.459966 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:01:50.578466 systemd-networkd[1375]: vxlan.calico: Link UP Oct 9 01:01:50.578482 systemd-networkd[1375]: vxlan.calico: Gained carrier Oct 9 01:01:52.285949 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Oct 9 01:01:53.262244 systemd[1]: run-containerd-runc-k8s.io-05865b8f772ce82eea4e80fdd20939b09dc2fa227b4d08fdf1e2bb9061b5abf4-runc.EGKlnA.mount: Deactivated successfully. Oct 9 01:01:53.530185 systemd[1]: Started sshd@14-143.110.225.158:22-139.178.68.195:54324.service - OpenSSH per-connection server daemon (139.178.68.195:54324). Oct 9 01:01:53.612870 sshd[5229]: Accepted publickey for core from 139.178.68.195 port 54324 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:01:53.615290 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:53.620146 systemd-logind[1449]: New session 15 of user core. Oct 9 01:01:53.634929 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 01:01:53.837023 sshd[5229]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:53.841601 systemd-logind[1449]: Session 15 logged out. 
Waiting for processes to exit. Oct 9 01:01:53.842736 systemd[1]: sshd@14-143.110.225.158:22-139.178.68.195:54324.service: Deactivated successfully. Oct 9 01:01:53.847425 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 01:01:53.851022 systemd-logind[1449]: Removed session 15. Oct 9 01:01:58.857318 systemd[1]: Started sshd@15-143.110.225.158:22-139.178.68.195:54340.service - OpenSSH per-connection server daemon (139.178.68.195:54340). Oct 9 01:01:58.925878 sshd[5270]: Accepted publickey for core from 139.178.68.195 port 54340 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:01:58.928264 sshd[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:01:58.934282 systemd-logind[1449]: New session 16 of user core. Oct 9 01:01:58.938963 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 01:01:59.111943 sshd[5270]: pam_unix(sshd:session): session closed for user core Oct 9 01:01:59.118256 systemd[1]: sshd@15-143.110.225.158:22-139.178.68.195:54340.service: Deactivated successfully. Oct 9 01:01:59.121782 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 01:01:59.123396 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Oct 9 01:01:59.125292 systemd-logind[1449]: Removed session 16. 
Oct 9 01:01:59.457549 kubelet[2617]: E1009 01:01:59.457299 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:02:00.458695 kubelet[2617]: E1009 01:02:00.458434 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:02:02.458574 kubelet[2617]: E1009 01:02:02.458514 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:02:04.137589 systemd[1]: Started sshd@16-143.110.225.158:22-139.178.68.195:37508.service - OpenSSH per-connection server daemon (139.178.68.195:37508). Oct 9 01:02:04.214483 sshd[5296]: Accepted publickey for core from 139.178.68.195 port 37508 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:04.218861 sshd[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:04.227298 systemd-logind[1449]: New session 17 of user core. Oct 9 01:02:04.230985 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 01:02:04.429452 sshd[5296]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:04.441519 systemd[1]: sshd@16-143.110.225.158:22-139.178.68.195:37508.service: Deactivated successfully. Oct 9 01:02:04.444193 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 01:02:04.447104 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Oct 9 01:02:04.456316 systemd[1]: Started sshd@17-143.110.225.158:22-139.178.68.195:37520.service - OpenSSH per-connection server daemon (139.178.68.195:37520). Oct 9 01:02:04.458912 systemd-logind[1449]: Removed session 17. 
Oct 9 01:02:04.516733 sshd[5309]: Accepted publickey for core from 139.178.68.195 port 37520 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:04.519303 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:04.528242 systemd-logind[1449]: New session 18 of user core. Oct 9 01:02:04.539981 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 01:02:04.898593 sshd[5309]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:04.916424 systemd[1]: Started sshd@18-143.110.225.158:22-139.178.68.195:37536.service - OpenSSH per-connection server daemon (139.178.68.195:37536). Oct 9 01:02:04.917093 systemd[1]: sshd@17-143.110.225.158:22-139.178.68.195:37520.service: Deactivated successfully. Oct 9 01:02:04.920101 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 01:02:04.925009 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Oct 9 01:02:04.928794 systemd-logind[1449]: Removed session 18. Oct 9 01:02:04.997468 sshd[5318]: Accepted publickey for core from 139.178.68.195 port 37536 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:05.000188 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:05.007871 systemd-logind[1449]: New session 19 of user core. Oct 9 01:02:05.016167 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 01:02:07.310764 sshd[5318]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:07.328415 systemd[1]: sshd@18-143.110.225.158:22-139.178.68.195:37536.service: Deactivated successfully. Oct 9 01:02:07.335485 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 01:02:07.340010 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Oct 9 01:02:07.358330 systemd[1]: Started sshd@19-143.110.225.158:22-139.178.68.195:37546.service - OpenSSH per-connection server daemon (139.178.68.195:37546). 
Oct 9 01:02:07.365605 systemd-logind[1449]: Removed session 19. Oct 9 01:02:07.445164 sshd[5337]: Accepted publickey for core from 139.178.68.195 port 37546 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:07.450951 sshd[5337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:07.464982 systemd-logind[1449]: New session 20 of user core. Oct 9 01:02:07.470896 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 01:02:08.074777 sshd[5337]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:08.090602 systemd[1]: sshd@19-143.110.225.158:22-139.178.68.195:37546.service: Deactivated successfully. Oct 9 01:02:08.096418 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 01:02:08.101877 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Oct 9 01:02:08.110157 systemd[1]: Started sshd@20-143.110.225.158:22-139.178.68.195:37556.service - OpenSSH per-connection server daemon (139.178.68.195:37556). Oct 9 01:02:08.115237 systemd-logind[1449]: Removed session 20. Oct 9 01:02:08.175335 sshd[5351]: Accepted publickey for core from 139.178.68.195 port 37556 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:08.177573 sshd[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:08.185503 systemd-logind[1449]: New session 21 of user core. Oct 9 01:02:08.190996 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 9 01:02:08.350476 sshd[5351]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:08.357225 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Oct 9 01:02:08.358141 systemd[1]: sshd@20-143.110.225.158:22-139.178.68.195:37556.service: Deactivated successfully. Oct 9 01:02:08.360950 systemd[1]: session-21.scope: Deactivated successfully. Oct 9 01:02:08.363581 systemd-logind[1449]: Removed session 21. 
Oct 9 01:02:13.365064 systemd[1]: Started sshd@21-143.110.225.158:22-139.178.68.195:40468.service - OpenSSH per-connection server daemon (139.178.68.195:40468). Oct 9 01:02:13.427453 sshd[5378]: Accepted publickey for core from 139.178.68.195 port 40468 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:13.429964 sshd[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:13.436833 systemd-logind[1449]: New session 22 of user core. Oct 9 01:02:13.444169 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 9 01:02:13.602033 sshd[5378]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:13.608941 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Oct 9 01:02:13.609230 systemd[1]: sshd@21-143.110.225.158:22-139.178.68.195:40468.service: Deactivated successfully. Oct 9 01:02:13.613133 systemd[1]: session-22.scope: Deactivated successfully. Oct 9 01:02:13.617893 systemd-logind[1449]: Removed session 22. Oct 9 01:02:17.192981 systemd[1]: run-containerd-runc-k8s.io-0b93ecb56906f1c79fd59cc5aea164beb9b7fa5d2f2de266001ac3597b5d5f62-runc.UFHxN1.mount: Deactivated successfully. Oct 9 01:02:17.291127 kubelet[2617]: E1009 01:02:17.291028 2617 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Oct 9 01:02:18.620039 systemd[1]: Started sshd@22-143.110.225.158:22-139.178.68.195:40476.service - OpenSSH per-connection server daemon (139.178.68.195:40476). Oct 9 01:02:18.692863 sshd[5420]: Accepted publickey for core from 139.178.68.195 port 40476 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk Oct 9 01:02:18.695158 sshd[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:02:18.706094 systemd-logind[1449]: New session 23 of user core. 
Oct 9 01:02:18.712608 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 9 01:02:18.892123 sshd[5420]: pam_unix(sshd:session): session closed for user core Oct 9 01:02:18.898063 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Oct 9 01:02:18.899371 systemd[1]: sshd@22-143.110.225.158:22-139.178.68.195:40476.service: Deactivated successfully. Oct 9 01:02:18.904847 systemd[1]: session-23.scope: Deactivated successfully. Oct 9 01:02:18.907373 systemd-logind[1449]: Removed session 23. Oct 9 01:02:22.653480 kubelet[2617]: I1009 01:02:22.648724 2617 topology_manager.go:215] "Topology Admit Handler" podUID="80ba7eb3-92ab-4e6f-9e60-ad9cd3df9b19" podNamespace="calico-apiserver" podName="calico-apiserver-745496cf57-xc6j4" Oct 9 01:02:22.732285 systemd[1]: Created slice kubepods-besteffort-pod80ba7eb3_92ab_4e6f_9e60_ad9cd3df9b19.slice - libcontainer container kubepods-besteffort-pod80ba7eb3_92ab_4e6f_9e60_ad9cd3df9b19.slice. Oct 9 01:02:22.818182 kubelet[2617]: I1009 01:02:22.817987 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/80ba7eb3-92ab-4e6f-9e60-ad9cd3df9b19-calico-apiserver-certs\") pod \"calico-apiserver-745496cf57-xc6j4\" (UID: \"80ba7eb3-92ab-4e6f-9e60-ad9cd3df9b19\") " pod="calico-apiserver/calico-apiserver-745496cf57-xc6j4" Oct 9 01:02:22.818182 kubelet[2617]: I1009 01:02:22.818088 2617 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2mc\" (UniqueName: \"kubernetes.io/projected/80ba7eb3-92ab-4e6f-9e60-ad9cd3df9b19-kube-api-access-hb2mc\") pod \"calico-apiserver-745496cf57-xc6j4\" (UID: \"80ba7eb3-92ab-4e6f-9e60-ad9cd3df9b19\") " pod="calico-apiserver/calico-apiserver-745496cf57-xc6j4" Oct 9 01:02:23.041328 containerd[1468]: time="2024-10-09T01:02:23.041236098Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-745496cf57-xc6j4,Uid:80ba7eb3-92ab-4e6f-9e60-ad9cd3df9b19,Namespace:calico-apiserver,Attempt:0,}" Oct 9 01:02:23.333275 systemd-networkd[1375]: cali4079fa462c3: Link UP Oct 9 01:02:23.333534 systemd-networkd[1375]: cali4079fa462c3: Gained carrier Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.186 [INFO][5445] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0 calico-apiserver-745496cf57- calico-apiserver 80ba7eb3-92ab-4e6f-9e60-ad9cd3df9b19 1255 0 2024-10-09 01:02:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:745496cf57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4116.0.0-c-50f1e82448 calico-apiserver-745496cf57-xc6j4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4079fa462c3 [] []}} ContainerID="f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" Namespace="calico-apiserver" Pod="calico-apiserver-745496cf57-xc6j4" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-" Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.188 [INFO][5445] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" Namespace="calico-apiserver" Pod="calico-apiserver-745496cf57-xc6j4" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0" Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.251 [INFO][5455] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" 
HandleID="k8s-pod-network.f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" Workload="ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0" Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.264 [INFO][5455] ipam_plugin.go 270: Auto assigning IP ContainerID="f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" HandleID="k8s-pod-network.f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" Workload="ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4116.0.0-c-50f1e82448", "pod":"calico-apiserver-745496cf57-xc6j4", "timestamp":"2024-10-09 01:02:23.25178788 +0000 UTC"}, Hostname:"ci-4116.0.0-c-50f1e82448", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.264 [INFO][5455] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.264 [INFO][5455] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.264 [INFO][5455] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116.0.0-c-50f1e82448' Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.269 [INFO][5455] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.278 [INFO][5455] ipam.go 372: Looking up existing affinities for host host="ci-4116.0.0-c-50f1e82448" Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.286 [INFO][5455] ipam.go 489: Trying affinity for 192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.294 [INFO][5455] ipam.go 155: Attempting to load block cidr=192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.299 [INFO][5455] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.192/26 host="ci-4116.0.0-c-50f1e82448" Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.299 [INFO][5455] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.192/26 handle="k8s-pod-network.f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.301 [INFO][5455] ipam.go 1685: Creating new handle: k8s-pod-network.f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.311 [INFO][5455] ipam.go 1203: Writing block in order to claim IPs block=192.168.31.192/26 handle="k8s-pod-network.f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" host="ci-4116.0.0-c-50f1e82448" Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.324 [INFO][5455] ipam.go 1216: Successfully claimed IPs: [192.168.31.197/26] 
block=192.168.31.192/26 handle="k8s-pod-network.f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" host="ci-4116.0.0-c-50f1e82448"
Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.324 [INFO][5455] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.197/26] handle="k8s-pod-network.f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" host="ci-4116.0.0-c-50f1e82448"
Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.324 [INFO][5455] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 01:02:23.363308 containerd[1468]: 2024-10-09 01:02:23.325 [INFO][5455] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.31.197/26] IPv6=[] ContainerID="f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" HandleID="k8s-pod-network.f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" Workload="ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0"
Oct 9 01:02:23.366130 containerd[1468]: 2024-10-09 01:02:23.328 [INFO][5445] k8s.go 386: Populated endpoint ContainerID="f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" Namespace="calico-apiserver" Pod="calico-apiserver-745496cf57-xc6j4" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0", GenerateName:"calico-apiserver-745496cf57-", Namespace:"calico-apiserver", SelfLink:"", UID:"80ba7eb3-92ab-4e6f-9e60-ad9cd3df9b19", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745496cf57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-c-50f1e82448", ContainerID:"", Pod:"calico-apiserver-745496cf57-xc6j4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4079fa462c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:02:23.366130 containerd[1468]: 2024-10-09 01:02:23.328 [INFO][5445] k8s.go 387: Calico CNI using IPs: [192.168.31.197/32] ContainerID="f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" Namespace="calico-apiserver" Pod="calico-apiserver-745496cf57-xc6j4" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0"
Oct 9 01:02:23.366130 containerd[1468]: 2024-10-09 01:02:23.328 [INFO][5445] dataplane_linux.go 68: Setting the host side veth name to cali4079fa462c3 ContainerID="f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" Namespace="calico-apiserver" Pod="calico-apiserver-745496cf57-xc6j4" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0"
Oct 9 01:02:23.366130 containerd[1468]: 2024-10-09 01:02:23.333 [INFO][5445] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" Namespace="calico-apiserver" Pod="calico-apiserver-745496cf57-xc6j4" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0"
Oct 9 01:02:23.366130 containerd[1468]: 2024-10-09 01:02:23.336 [INFO][5445] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" Namespace="calico-apiserver" Pod="calico-apiserver-745496cf57-xc6j4" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0", GenerateName:"calico-apiserver-745496cf57-", Namespace:"calico-apiserver", SelfLink:"", UID:"80ba7eb3-92ab-4e6f-9e60-ad9cd3df9b19", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 2, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"745496cf57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116.0.0-c-50f1e82448", ContainerID:"f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f", Pod:"calico-apiserver-745496cf57-xc6j4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4079fa462c3", MAC:"c6:5a:75:56:73:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:02:23.366130 containerd[1468]: 2024-10-09 01:02:23.353 [INFO][5445] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f" Namespace="calico-apiserver" Pod="calico-apiserver-745496cf57-xc6j4" WorkloadEndpoint="ci--4116.0.0--c--50f1e82448-k8s-calico--apiserver--745496cf57--xc6j4-eth0"
Oct 9 01:02:23.417960 containerd[1468]: time="2024-10-09T01:02:23.417552862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:02:23.417960 containerd[1468]: time="2024-10-09T01:02:23.417737322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:02:23.417960 containerd[1468]: time="2024-10-09T01:02:23.417787556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:02:23.418555 containerd[1468]: time="2024-10-09T01:02:23.418216832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:02:23.459309 systemd[1]: Started cri-containerd-f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f.scope - libcontainer container f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f.
Oct 9 01:02:23.525402 containerd[1468]: time="2024-10-09T01:02:23.525276290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-745496cf57-xc6j4,Uid:80ba7eb3-92ab-4e6f-9e60-ad9cd3df9b19,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f\""
Oct 9 01:02:23.528152 containerd[1468]: time="2024-10-09T01:02:23.527690974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 9 01:02:23.911579 systemd[1]: Started sshd@23-143.110.225.158:22-139.178.68.195:34894.service - OpenSSH per-connection server daemon (139.178.68.195:34894).
Oct 9 01:02:23.997910 sshd[5517]: Accepted publickey for core from 139.178.68.195 port 34894 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 01:02:24.000140 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:02:24.006155 systemd-logind[1449]: New session 24 of user core.
Oct 9 01:02:24.013156 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 9 01:02:24.214533 sshd[5517]: pam_unix(sshd:session): session closed for user core
Oct 9 01:02:24.220346 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit.
Oct 9 01:02:24.221453 systemd[1]: sshd@23-143.110.225.158:22-139.178.68.195:34894.service: Deactivated successfully.
Oct 9 01:02:24.224115 systemd[1]: session-24.scope: Deactivated successfully.
Oct 9 01:02:24.225488 systemd-logind[1449]: Removed session 24.
Oct 9 01:02:25.309934 systemd-networkd[1375]: cali4079fa462c3: Gained IPv6LL
Oct 9 01:02:26.780872 containerd[1468]: time="2024-10-09T01:02:26.780702854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:02:26.782026 containerd[1468]: time="2024-10-09T01:02:26.781944215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849"
Oct 9 01:02:26.784190 containerd[1468]: time="2024-10-09T01:02:26.784056388Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:02:26.789480 containerd[1468]: time="2024-10-09T01:02:26.789426961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:02:26.790496 containerd[1468]: time="2024-10-09T01:02:26.790427453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.262698176s"
Oct 9 01:02:26.790615 containerd[1468]: time="2024-10-09T01:02:26.790507620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\""
Oct 9 01:02:26.796971 containerd[1468]: time="2024-10-09T01:02:26.796306487Z" level=info msg="CreateContainer within sandbox \"f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 9 01:02:26.826726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2811133902.mount: Deactivated successfully.
Oct 9 01:02:26.857360 containerd[1468]: time="2024-10-09T01:02:26.857201096Z" level=info msg="CreateContainer within sandbox \"f6e084496f2f0f56e0adb7d4d17eeed9fd175b3ba6ad78e45269b75291fa045f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7e6f7b9f26f5bbbb31cb296f243875d7103326ddefd7c211bac022ff11163ed9\""
Oct 9 01:02:26.860434 containerd[1468]: time="2024-10-09T01:02:26.858221594Z" level=info msg="StartContainer for \"7e6f7b9f26f5bbbb31cb296f243875d7103326ddefd7c211bac022ff11163ed9\""
Oct 9 01:02:26.922124 systemd[1]: Started cri-containerd-7e6f7b9f26f5bbbb31cb296f243875d7103326ddefd7c211bac022ff11163ed9.scope - libcontainer container 7e6f7b9f26f5bbbb31cb296f243875d7103326ddefd7c211bac022ff11163ed9.
Oct 9 01:02:27.053165 containerd[1468]: time="2024-10-09T01:02:27.053053103Z" level=info msg="StartContainer for \"7e6f7b9f26f5bbbb31cb296f243875d7103326ddefd7c211bac022ff11163ed9\" returns successfully"
Oct 9 01:02:27.184631 kubelet[2617]: I1009 01:02:27.184242 2617 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-745496cf57-xc6j4" podStartSLOduration=1.9182362 podStartE2EDuration="5.184223631s" podCreationTimestamp="2024-10-09 01:02:22 +0000 UTC" firstStartedPulling="2024-10-09 01:02:23.527045232 +0000 UTC m=+111.260115584" lastFinishedPulling="2024-10-09 01:02:26.793032643 +0000 UTC m=+114.526103015" observedRunningTime="2024-10-09 01:02:27.183726525 +0000 UTC m=+114.916796898" watchObservedRunningTime="2024-10-09 01:02:27.184223631 +0000 UTC m=+114.917294004"
Oct 9 01:02:29.240207 systemd[1]: Started sshd@24-143.110.225.158:22-139.178.68.195:34898.service - OpenSSH per-connection server daemon (139.178.68.195:34898).
Oct 9 01:02:29.366483 sshd[5601]: Accepted publickey for core from 139.178.68.195 port 34898 ssh2: RSA SHA256:rAraWF6dAhtbVQzAuCRwvYKxEoENakeAe95MuXIlOkk
Oct 9 01:02:29.367777 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:02:29.377314 systemd-logind[1449]: New session 25 of user core.
Oct 9 01:02:29.382001 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 9 01:02:29.694387 sshd[5601]: pam_unix(sshd:session): session closed for user core
Oct 9 01:02:29.699862 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit.
Oct 9 01:02:29.700489 systemd[1]: sshd@24-143.110.225.158:22-139.178.68.195:34898.service: Deactivated successfully.
Oct 9 01:02:29.703494 systemd[1]: session-25.scope: Deactivated successfully.
Oct 9 01:02:29.705003 systemd-logind[1449]: Removed session 25.