Jan 30 13:59:12.975484 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:59:12.975517 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:59:12.975531 kernel: BIOS-provided physical RAM map:
Jan 30 13:59:12.975538 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:59:12.975544 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:59:12.975554 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:59:12.975564 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Jan 30 13:59:12.975571 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Jan 30 13:59:12.975578 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:59:12.975587 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:59:12.975594 kernel: NX (Execute Disable) protection: active
Jan 30 13:59:12.975601 kernel: APIC: Static calls initialized
Jan 30 13:59:12.975611 kernel: SMBIOS 2.8 present.
Jan 30 13:59:12.975618 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 30 13:59:12.975627 kernel: Hypervisor detected: KVM
Jan 30 13:59:12.975637 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:59:12.975648 kernel: kvm-clock: using sched offset of 2974156223 cycles
Jan 30 13:59:12.975657 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:59:12.975665 kernel: tsc: Detected 2494.136 MHz processor
Jan 30 13:59:12.975673 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:59:12.975685 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:59:12.975697 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Jan 30 13:59:12.975709 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:59:12.975717 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:59:12.975730 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:59:12.975741 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Jan 30 13:59:12.975750 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:59:12.975762 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:59:12.975773 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:59:12.975786 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 30 13:59:12.975795 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:59:12.975804 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:59:12.975815 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:59:12.975827 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:59:12.975835 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 30 13:59:12.975842 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 30 13:59:12.975850 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 30 13:59:12.975858 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 30 13:59:12.975866 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 30 13:59:12.975878 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 30 13:59:12.975900 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 30 13:59:12.975908 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:59:12.975916 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:59:12.975925 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 13:59:12.975933 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 13:59:12.975945 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Jan 30 13:59:12.975953 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Jan 30 13:59:12.975969 kernel: Zone ranges:
Jan 30 13:59:12.975981 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:59:12.975994 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Jan 30 13:59:12.976005 kernel: Normal empty
Jan 30 13:59:12.976013 kernel: Movable zone start for each node
Jan 30 13:59:12.976024 kernel: Early memory node ranges
Jan 30 13:59:12.976032 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:59:12.976040 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Jan 30 13:59:12.976048 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Jan 30 13:59:12.976061 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:59:12.976073 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:59:12.976125 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Jan 30 13:59:12.976135 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:59:12.976147 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:59:12.976162 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:59:12.976172 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:59:12.976180 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:59:12.976189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:59:12.976201 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:59:12.976209 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:59:12.976218 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:59:12.976226 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:59:12.976235 kernel: TSC deadline timer available
Jan 30 13:59:12.976243 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:59:12.976251 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:59:12.976260 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 30 13:59:12.976273 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:59:12.976281 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:59:12.976293 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:59:12.976302 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:59:12.976310 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:59:12.976319 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:59:12.976342 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 13:59:12.976353 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:59:12.976362 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:59:12.976370 kernel: random: crng init done
Jan 30 13:59:12.976383 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:59:12.976391 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:59:12.976399 kernel: Fallback order for Node 0: 0
Jan 30 13:59:12.976408 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Jan 30 13:59:12.976416 kernel: Policy zone: DMA32
Jan 30 13:59:12.976424 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:59:12.976433 kernel: Memory: 1971196K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125156K reserved, 0K cma-reserved)
Jan 30 13:59:12.976441 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:59:12.976452 kernel: Kernel/User page tables isolation: enabled
Jan 30 13:59:12.976461 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:59:12.976469 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:59:12.976479 kernel: Dynamic Preempt: voluntary
Jan 30 13:59:12.976492 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:59:12.976506 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:59:12.976519 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:59:12.976532 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:59:12.976546 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:59:12.976556 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:59:12.976569 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:59:12.976580 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:59:12.976592 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 13:59:12.976600 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:59:12.976613 kernel: Console: colour VGA+ 80x25
Jan 30 13:59:12.976624 kernel: printk: console [tty0] enabled
Jan 30 13:59:12.976635 kernel: printk: console [ttyS0] enabled
Jan 30 13:59:12.976643 kernel: ACPI: Core revision 20230628
Jan 30 13:59:12.976652 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:59:12.976666 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:59:12.976677 kernel: x2apic enabled
Jan 30 13:59:12.976690 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:59:12.976704 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:59:12.976717 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Jan 30 13:59:12.976730 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494136)
Jan 30 13:59:12.976741 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 13:59:12.976754 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 13:59:12.976783 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:59:12.976793 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:59:12.976802 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:59:12.976814 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:59:12.976823 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 13:59:12.976832 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:59:12.976841 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:59:12.976850 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 13:59:12.976859 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:59:12.976876 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:59:12.976885 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:59:12.976894 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:59:12.976903 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:59:12.976911 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 13:59:12.976920 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:59:12.976929 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:59:12.976938 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:59:12.976950 kernel: landlock: Up and running.
Jan 30 13:59:12.976959 kernel: SELinux: Initializing.
Jan 30 13:59:12.976968 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:59:12.976977 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:59:12.976986 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 30 13:59:12.976995 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:59:12.977004 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:59:12.977013 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:59:12.977022 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 30 13:59:12.977034 kernel: signal: max sigframe size: 1776
Jan 30 13:59:12.977043 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:59:12.977052 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:59:12.977061 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:59:12.977070 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:59:12.977079 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:59:12.977098 kernel: .... node #0, CPUs: #1
Jan 30 13:59:12.977107 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:59:12.977120 kernel: smpboot: Max logical packages: 1
Jan 30 13:59:12.977133 kernel: smpboot: Total of 2 processors activated (9976.54 BogoMIPS)
Jan 30 13:59:12.977142 kernel: devtmpfs: initialized
Jan 30 13:59:12.977151 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:59:12.977160 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:59:12.977169 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:59:12.977178 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:59:12.977186 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:59:12.977195 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:59:12.977204 kernel: audit: type=2000 audit(1738245552.016:1): state=initialized audit_enabled=0 res=1
Jan 30 13:59:12.977216 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:59:12.977225 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:59:12.977234 kernel: cpuidle: using governor menu
Jan 30 13:59:12.977242 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:59:12.977251 kernel: dca service started, version 1.12.1
Jan 30 13:59:12.977260 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:59:12.977269 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:59:12.977278 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:59:12.977287 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:59:12.977303 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:59:12.977312 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:59:12.977321 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:59:12.977334 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:59:12.977843 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:59:12.977854 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:59:12.977863 kernel: ACPI: Interpreter enabled
Jan 30 13:59:12.977872 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:59:12.977881 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:59:12.977895 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:59:12.977904 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:59:12.977913 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 13:59:12.977922 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:59:12.978185 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:59:12.978297 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 13:59:12.978391 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 13:59:12.978408 kernel: acpiphp: Slot [3] registered
Jan 30 13:59:12.978417 kernel: acpiphp: Slot [4] registered
Jan 30 13:59:12.978426 kernel: acpiphp: Slot [5] registered
Jan 30 13:59:12.978435 kernel: acpiphp: Slot [6] registered
Jan 30 13:59:12.978444 kernel: acpiphp: Slot [7] registered
Jan 30 13:59:12.978453 kernel: acpiphp: Slot [8] registered
Jan 30 13:59:12.978462 kernel: acpiphp: Slot [9] registered
Jan 30 13:59:12.978471 kernel: acpiphp: Slot [10] registered
Jan 30 13:59:12.978480 kernel: acpiphp: Slot [11] registered
Jan 30 13:59:12.978492 kernel: acpiphp: Slot [12] registered
Jan 30 13:59:12.978501 kernel: acpiphp: Slot [13] registered
Jan 30 13:59:12.978510 kernel: acpiphp: Slot [14] registered
Jan 30 13:59:12.978519 kernel: acpiphp: Slot [15] registered
Jan 30 13:59:12.978528 kernel: acpiphp: Slot [16] registered
Jan 30 13:59:12.978537 kernel: acpiphp: Slot [17] registered
Jan 30 13:59:12.978546 kernel: acpiphp: Slot [18] registered
Jan 30 13:59:12.978554 kernel: acpiphp: Slot [19] registered
Jan 30 13:59:12.978563 kernel: acpiphp: Slot [20] registered
Jan 30 13:59:12.978573 kernel: acpiphp: Slot [21] registered
Jan 30 13:59:12.978585 kernel: acpiphp: Slot [22] registered
Jan 30 13:59:12.978593 kernel: acpiphp: Slot [23] registered
Jan 30 13:59:12.978602 kernel: acpiphp: Slot [24] registered
Jan 30 13:59:12.978611 kernel: acpiphp: Slot [25] registered
Jan 30 13:59:12.978619 kernel: acpiphp: Slot [26] registered
Jan 30 13:59:12.978628 kernel: acpiphp: Slot [27] registered
Jan 30 13:59:12.978637 kernel: acpiphp: Slot [28] registered
Jan 30 13:59:12.978646 kernel: acpiphp: Slot [29] registered
Jan 30 13:59:12.978655 kernel: acpiphp: Slot [30] registered
Jan 30 13:59:12.978667 kernel: acpiphp: Slot [31] registered
Jan 30 13:59:12.978675 kernel: PCI host bridge to bus 0000:00
Jan 30 13:59:12.978828 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:59:12.978957 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:59:12.979064 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:59:12.979817 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 13:59:12.979929 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 30 13:59:12.980014 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:59:12.980186 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 13:59:12.980363 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 13:59:12.980478 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 13:59:12.980574 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 30 13:59:12.980668 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 13:59:12.980761 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 13:59:12.980865 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 13:59:12.980963 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 13:59:12.981145 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 30 13:59:12.981303 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 30 13:59:12.981435 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 13:59:12.981555 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 13:59:12.981695 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 13:59:12.981865 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 13:59:12.981990 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 13:59:12.982129 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 30 13:59:12.982232 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 30 13:59:12.982355 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 13:59:12.984238 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:59:12.984457 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:59:12.984562 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 30 13:59:12.984658 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 30 13:59:12.984756 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 30 13:59:12.984949 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:59:12.985056 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 30 13:59:12.987337 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 30 13:59:12.987541 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 30 13:59:12.987735 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 30 13:59:12.987889 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 30 13:59:12.988029 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 30 13:59:12.988201 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 30 13:59:12.988322 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:59:12.988450 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 13:59:12.988591 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 30 13:59:12.988686 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 30 13:59:12.988819 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:59:12.988946 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 30 13:59:12.989047 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 30 13:59:12.990340 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 30 13:59:12.990505 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 13:59:12.990616 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 30 13:59:12.990710 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 30 13:59:12.990723 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:59:12.990733 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:59:12.990742 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:59:12.990752 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:59:12.990890 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 13:59:12.990916 kernel: iommu: Default domain type: Translated
Jan 30 13:59:12.990926 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:59:12.990936 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:59:12.990945 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:59:12.990959 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:59:12.990970 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Jan 30 13:59:12.991120 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 13:59:12.991227 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 13:59:12.991340 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:59:12.991353 kernel: vgaarb: loaded
Jan 30 13:59:12.991362 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:59:12.991372 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:59:12.991381 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:59:12.991391 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:59:12.991401 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:59:12.991410 kernel: pnp: PnP ACPI init
Jan 30 13:59:12.991419 kernel: pnp: PnP ACPI: found 4 devices
Jan 30 13:59:12.991433 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:59:12.991442 kernel: NET: Registered PF_INET protocol family
Jan 30 13:59:12.991451 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:59:12.991461 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 13:59:12.991470 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:59:12.991479 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:59:12.991488 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 13:59:12.991497 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 13:59:12.991507 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:59:12.991520 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:59:12.991529 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:59:12.991538 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:59:12.991656 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:59:12.991762 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:59:12.991847 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:59:12.991934 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 13:59:12.992026 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 30 13:59:12.994084 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 13:59:12.994257 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 13:59:12.994275 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 13:59:12.994385 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 32334 usecs
Jan 30 13:59:12.994399 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:59:12.994409 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:59:12.994419 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39654230, max_idle_ns: 440795207432 ns
Jan 30 13:59:12.994428 kernel: Initialise system trusted keyrings
Jan 30 13:59:12.994447 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 13:59:12.994459 kernel: Key type asymmetric registered
Jan 30 13:59:12.994471 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:59:12.994479 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:59:12.994488 kernel: io scheduler mq-deadline registered
Jan 30 13:59:12.994502 kernel: io scheduler kyber registered
Jan 30 13:59:12.994511 kernel: io scheduler bfq registered
Jan 30 13:59:12.994524 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:59:12.994537 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 13:59:12.994547 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 13:59:12.994560 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 13:59:12.994569 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:59:12.994578 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:59:12.994587 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:59:12.994596 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:59:12.994605 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:59:12.994615 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:59:12.994794 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 13:59:12.994932 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 13:59:12.995051 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T13:59:12 UTC (1738245552)
Jan 30 13:59:12.995199 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 13:59:12.995216 kernel: intel_pstate: CPU model not supported
Jan 30 13:59:12.995226 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:59:12.995235 kernel: Segment Routing with IPv6
Jan 30 13:59:12.995245 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:59:12.995259 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:59:12.995279 kernel: Key type dns_resolver registered
Jan 30 13:59:12.995294 kernel: IPI shorthand broadcast: enabled
Jan 30 13:59:12.995306 kernel: sched_clock: Marking stable (923005488, 85564422)->(1025163179, -16593269)
Jan 30 13:59:12.995316 kernel: registered taskstats version 1
Jan 30 13:59:12.995325 kernel: Loading compiled-in X.509 certificates
Jan 30 13:59:12.995334 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:59:12.995345 kernel: Key type .fscrypt registered
Jan 30 13:59:12.995357 kernel: Key type fscrypt-provisioning registered
Jan 30 13:59:12.995371 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:59:12.995390 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:59:12.995402 kernel: ima: No architecture policies found
Jan 30 13:59:12.995411 kernel: clk: Disabling unused clocks
Jan 30 13:59:12.995420 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:59:12.995430 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:59:12.995463 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:59:12.995477 kernel: Run /init as init process
Jan 30 13:59:12.995487 kernel: with arguments:
Jan 30 13:59:12.995497 kernel: /init
Jan 30 13:59:12.995510 kernel: with environment:
Jan 30 13:59:12.995519 kernel: HOME=/
Jan 30 13:59:12.995528 kernel: TERM=linux
Jan 30 13:59:12.995538 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:59:12.995551 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:59:12.995567 systemd[1]: Detected virtualization kvm.
Jan 30 13:59:12.995587 systemd[1]: Detected architecture x86-64.
Jan 30 13:59:12.995597 systemd[1]: Running in initrd.
Jan 30 13:59:12.995609 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:59:12.995619 systemd[1]: Hostname set to .
Jan 30 13:59:12.995629 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:59:12.995639 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:59:12.995649 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:59:12.995659 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:59:12.995671 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:59:12.995681 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:59:12.995694 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:59:12.995708 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:59:12.995727 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:59:12.995741 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:59:12.995755 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:59:12.995769 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:59:12.995787 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:59:12.995801 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:59:12.995816 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:59:12.995835 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:59:12.995850 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:59:12.995860 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:59:12.995878 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:59:12.995894 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:59:12.995908 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:59:12.995925 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:59:12.995942 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:59:12.995953 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:59:12.995963 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:59:12.995972 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:59:12.995986 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:59:12.995996 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:59:12.996011 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:59:12.996022 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:59:12.996033 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:59:12.996048 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:59:12.996132 systemd-journald[183]: Collecting audit messages is disabled.
Jan 30 13:59:12.996164 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:59:12.996174 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:59:12.996185 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:59:12.996201 systemd-journald[183]: Journal started
Jan 30 13:59:12.996225 systemd-journald[183]: Runtime Journal (/run/log/journal/793e8a7df22b45928ac94caa68c3f13a) is 4.9M, max 39.3M, 34.4M free.
Jan 30 13:59:13.005350 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:59:13.010732 systemd-modules-load[184]: Inserted module 'overlay'
Jan 30 13:59:13.023437 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:59:13.048344 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:59:13.052633 kernel: Bridge firewalling registered
Jan 30 13:59:13.048970 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 30 13:59:13.055697 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:59:13.056783 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:59:13.058168 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:59:13.066519 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:59:13.071519 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:59:13.077415 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:59:13.079243 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:59:13.099018 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:59:13.107555 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:59:13.108446 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:59:13.110172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:59:13.114332 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:59:13.129235 dracut-cmdline[215]: dracut-dracut-053
Jan 30 13:59:13.133977 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:59:13.157926 systemd-resolved[220]: Positive Trust Anchors:
Jan 30 13:59:13.157955 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:59:13.158008 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:59:13.164509 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jan 30 13:59:13.166850 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:59:13.167860 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:59:13.237166 kernel: SCSI subsystem initialized
Jan 30 13:59:13.248132 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:59:13.261121 kernel: iscsi: registered transport (tcp)
Jan 30 13:59:13.284289 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:59:13.284378 kernel: QLogic iSCSI HBA Driver
Jan 30 13:59:13.338228 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:59:13.345418 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:59:13.385401 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:59:13.385482 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:59:13.385497 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:59:13.429163 kernel: raid6: avx2x4 gen() 17408 MB/s
Jan 30 13:59:13.446123 kernel: raid6: avx2x2 gen() 17529 MB/s
Jan 30 13:59:13.463514 kernel: raid6: avx2x1 gen() 13138 MB/s
Jan 30 13:59:13.463593 kernel: raid6: using algorithm avx2x2 gen() 17529 MB/s
Jan 30 13:59:13.481229 kernel: raid6: .... xor() 19595 MB/s, rmw enabled
Jan 30 13:59:13.481332 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:59:13.504139 kernel: xor: automatically using best checksumming function avx
Jan 30 13:59:13.678156 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:59:13.693319 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:59:13.700363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:59:13.724743 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Jan 30 13:59:13.730827 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:59:13.740316 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:59:13.757132 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jan 30 13:59:13.796128 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:59:13.803377 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:59:13.860871 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:59:13.867321 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:59:13.894621 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:59:13.897799 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:59:13.899438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:59:13.900902 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:59:13.907383 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:59:13.929165 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:59:13.944593 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 30 13:59:13.993537 kernel: scsi host0: Virtio SCSI HBA
Jan 30 13:59:13.993702 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 13:59:13.993825 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:59:13.993839 kernel: GPT:9289727 != 125829119
Jan 30 13:59:13.993851 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:59:13.993863 kernel: GPT:9289727 != 125829119
Jan 30 13:59:13.993874 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:59:13.993889 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:59:13.993901 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 30 13:59:14.038887 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:59:14.038915 kernel: ACPI: bus type USB registered
Jan 30 13:59:14.038934 kernel: usbcore: registered new interface driver usbfs
Jan 30 13:59:14.038952 kernel: usbcore: registered new interface driver hub
Jan 30 13:59:14.038969 kernel: usbcore: registered new device driver usb
Jan 30 13:59:14.038987 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Jan 30 13:59:14.039227 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:59:14.039260 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:59:14.039278 kernel: libata version 3.00 loaded.
Jan 30 13:59:14.039296 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 30 13:59:14.053877 kernel: scsi host1: ata_piix
Jan 30 13:59:14.054060 kernel: scsi host2: ata_piix
Jan 30 13:59:14.054191 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 30 13:59:14.054205 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 30 13:59:14.055058 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:59:14.056004 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:59:14.057532 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:59:14.058438 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:59:14.059561 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:59:14.060515 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:59:14.067697 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:59:14.103118 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466)
Jan 30 13:59:14.114116 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (459)
Jan 30 13:59:14.117131 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:59:14.141591 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:59:14.147486 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:59:14.152702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:59:14.156962 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:59:14.157585 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:59:14.164329 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:59:14.167587 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:59:14.172340 disk-uuid[535]: Primary Header is updated.
Jan 30 13:59:14.172340 disk-uuid[535]: Secondary Entries is updated.
Jan 30 13:59:14.172340 disk-uuid[535]: Secondary Header is updated.
Jan 30 13:59:14.184223 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:59:14.190153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:59:14.203142 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:59:14.218472 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:59:14.235778 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 30 13:59:14.246243 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 30 13:59:14.246503 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 30 13:59:14.246690 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 30 13:59:14.246943 kernel: hub 1-0:1.0: USB hub found
Jan 30 13:59:14.247170 kernel: hub 1-0:1.0: 2 ports detected
Jan 30 13:59:15.201537 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:59:15.201606 disk-uuid[536]: The operation has completed successfully.
Jan 30 13:59:15.247422 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:59:15.247532 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:59:15.257296 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:59:15.262038 sh[566]: Success
Jan 30 13:59:15.276111 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:59:15.349534 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:59:15.350949 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:59:15.353288 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:59:15.390184 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:59:15.390255 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:59:15.390269 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:59:15.392278 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:59:15.392346 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:59:15.403453 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:59:15.404549 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:59:15.414338 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:59:15.417032 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:59:15.453621 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:59:15.453709 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:59:15.453743 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:59:15.459124 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:59:15.471637 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:59:15.472779 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:59:15.478862 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:59:15.487376 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:59:15.575301 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:59:15.584468 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:59:15.613368 systemd-networkd[750]: lo: Link UP
Jan 30 13:59:15.613378 systemd-networkd[750]: lo: Gained carrier
Jan 30 13:59:15.615633 ignition[674]: Ignition 2.19.0
Jan 30 13:59:15.615640 ignition[674]: Stage: fetch-offline
Jan 30 13:59:15.615674 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:59:15.615684 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:59:15.615806 ignition[674]: parsed url from cmdline: ""
Jan 30 13:59:15.618223 systemd-networkd[750]: Enumeration completed
Jan 30 13:59:15.615810 ignition[674]: no config URL provided
Jan 30 13:59:15.618678 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 13:59:15.615815 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:59:15.618682 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 30 13:59:15.615823 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:59:15.619532 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:59:15.615829 ignition[674]: failed to fetch config: resource requires networking
Jan 30 13:59:15.619810 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:59:15.616115 ignition[674]: Ignition finished successfully
Jan 30 13:59:15.619813 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:59:15.620464 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:59:15.622421 systemd[1]: Reached target network.target - Network.
Jan 30 13:59:15.623204 systemd-networkd[750]: eth0: Link UP
Jan 30 13:59:15.623210 systemd-networkd[750]: eth0: Gained carrier
Jan 30 13:59:15.623224 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 30 13:59:15.629297 systemd-networkd[750]: eth1: Link UP
Jan 30 13:59:15.629305 systemd-networkd[750]: eth1: Gained carrier
Jan 30 13:59:15.629317 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:59:15.629667 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:59:15.647206 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.8/20 acquired from 169.254.169.253
Jan 30 13:59:15.651213 systemd-networkd[750]: eth0: DHCPv4 address 164.92.85.159/20, gateway 164.92.80.1 acquired from 169.254.169.253
Jan 30 13:59:15.655049 ignition[758]: Ignition 2.19.0
Jan 30 13:59:15.655059 ignition[758]: Stage: fetch
Jan 30 13:59:15.655278 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:59:15.655289 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:59:15.655414 ignition[758]: parsed url from cmdline: ""
Jan 30 13:59:15.655418 ignition[758]: no config URL provided
Jan 30 13:59:15.655423 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:59:15.655432 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:59:15.655452 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 30 13:59:15.688272 ignition[758]: GET result: OK
Jan 30 13:59:15.688469 ignition[758]: parsing config with SHA512: 6774f39d6d3a7d366ffe6927cc2801ff49b5313bd71a2305de6bed05d2fcf59dd0e348f6abaab76141d2ec3caa7064b239495d83b31364431134e81f21e5d3e3
Jan 30 13:59:15.694346 unknown[758]: fetched base config from "system"
Jan 30 13:59:15.694358 unknown[758]: fetched base config from "system"
Jan 30 13:59:15.694976 ignition[758]: fetch: fetch complete
Jan 30 13:59:15.694365 unknown[758]: fetched user config from "digitalocean"
Jan 30 13:59:15.694983 ignition[758]: fetch: fetch passed
Jan 30 13:59:15.695041 ignition[758]: Ignition finished successfully
Jan 30 13:59:15.696760 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:59:15.702392 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:59:15.731621 ignition[766]: Ignition 2.19.0
Jan 30 13:59:15.731635 ignition[766]: Stage: kargs
Jan 30 13:59:15.731836 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:59:15.731848 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:59:15.733006 ignition[766]: kargs: kargs passed
Jan 30 13:59:15.733065 ignition[766]: Ignition finished successfully
Jan 30 13:59:15.734362 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:59:15.740388 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:59:15.757083 ignition[772]: Ignition 2.19.0
Jan 30 13:59:15.757112 ignition[772]: Stage: disks
Jan 30 13:59:15.757303 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:59:15.757314 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 13:59:15.763512 ignition[772]: disks: disks passed
Jan 30 13:59:15.763969 ignition[772]: Ignition finished successfully
Jan 30 13:59:15.765444 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:59:15.766079 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:59:15.766626 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:59:15.767468 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:59:15.768289 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:59:15.768892 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:59:15.775341 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:59:15.792013 systemd-fsck[780]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:59:15.795323 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:59:15.803254 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:59:15.906128 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:59:15.906360 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:59:15.907459 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:59:15.916254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:59:15.918670 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:59:15.920370 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Jan 30 13:59:15.928136 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (788)
Jan 30 13:59:15.932119 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:59:15.931068 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:59:15.935847 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:59:15.935874 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:59:15.931524 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:59:15.931577 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:59:15.937508 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:59:15.940176 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:59:15.949190 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:59:15.959571 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:59:16.010154 coreos-metadata[790]: Jan 30 13:59:16.009 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:59:16.016148 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:59:16.020435 coreos-metadata[790]: Jan 30 13:59:16.020 INFO Fetch successful Jan 30 13:59:16.023359 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:59:16.024110 coreos-metadata[791]: Jan 30 13:59:16.023 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:59:16.028052 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 30 13:59:16.028863 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 30 13:59:16.034189 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:59:16.037527 coreos-metadata[791]: Jan 30 13:59:16.037 INFO Fetch successful Jan 30 13:59:16.041661 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:59:16.043320 coreos-metadata[791]: Jan 30 13:59:16.043 INFO wrote hostname ci-4081.3.0-9-9df89b74d7 to /sysroot/etc/hostname Jan 30 13:59:16.046856 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:59:16.146217 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:59:16.150301 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:59:16.153345 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:59:16.164112 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:59:16.192191 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:59:16.196703 ignition[908]: INFO : Ignition 2.19.0 Jan 30 13:59:16.197352 ignition[908]: INFO : Stage: mount Jan 30 13:59:16.197976 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:59:16.199143 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:59:16.200347 ignition[908]: INFO : mount: mount passed Jan 30 13:59:16.200808 ignition[908]: INFO : Ignition finished successfully Jan 30 13:59:16.202660 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:59:16.207278 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:59:16.389180 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:59:16.394423 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:59:16.405146 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (921) Jan 30 13:59:16.407422 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:59:16.407496 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:59:16.408303 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:59:16.412134 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:59:16.414224 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
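Both coreos-metadata fetches above hit the same endpoint, http://169.254.169.254/metadata/v1.json, and the second one wrote the droplet's hostname into /sysroot/etc/hostname. That endpoint returns a single JSON document describing the droplet; an abridged, illustrative shape follows (all values hypothetical except the hostname, which the log itself records):

    {
      "droplet_id": 12345678,
      "hostname": "ci-4081.3.0-9-9df89b74d7",
      "region": "nyc3",
      "public_keys": ["ssh-ed25519 AAAA... example"],
      "interfaces": { "public": [], "private": [] }
    }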
Jan 30 13:59:16.441149 ignition[937]: INFO : Ignition 2.19.0 Jan 30 13:59:16.441149 ignition[937]: INFO : Stage: files Jan 30 13:59:16.442214 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:59:16.442214 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:59:16.443287 ignition[937]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:59:16.443732 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:59:16.443732 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:59:16.446484 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:59:16.447250 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:59:16.448083 unknown[937]: wrote ssh authorized keys file for user: core Jan 30 13:59:16.448712 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:59:16.449441 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:59:16.450025 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:59:16.450025 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:59:16.450025 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:59:16.486748 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:59:16.553970 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:59:16.553970 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:59:16.555684 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 13:59:17.031599 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 30 13:59:17.091768 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:59:17.091768 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:59:17.093267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:59:17.093267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:59:17.093267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:59:17.093267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:59:17.093267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:59:17.093267 
ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:59:17.093267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:59:17.093267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:59:17.093267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:59:17.093267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:59:17.093267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:59:17.093267 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:59:17.102561 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:59:17.232528 systemd-networkd[750]: eth0: Gained IPv6LL Jan 30 13:59:17.406559 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 30 13:59:17.552507 systemd-networkd[750]: eth1: Gained IPv6LL Jan 30 13:59:17.630851 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:59:17.630851 ignition[937]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 30 13:59:17.632578 ignition[937]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:59:17.632578 ignition[937]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:59:17.632578 ignition[937]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 30 13:59:17.632578 ignition[937]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 30 13:59:17.632578 ignition[937]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:59:17.632578 ignition[937]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:59:17.632578 ignition[937]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 30 13:59:17.632578 ignition[937]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:59:17.632578 ignition[937]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:59:17.638842 ignition[937]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:59:17.638842 ignition[937]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Jan 30 13:59:17.638842 ignition[937]: INFO : files: files passed Jan 30 13:59:17.638842 ignition[937]: INFO : Ignition finished successfully Jan 30 13:59:17.635074 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:59:17.642287 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:59:17.644281 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:59:17.648605 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:59:17.648730 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:59:17.670106 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:59:17.670106 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:59:17.672989 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:59:17.675245 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:59:17.675880 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:59:17.680323 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:59:17.716177 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:59:17.716279 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:59:17.717382 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:59:17.718353 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:59:17.719204 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:59:17.725337 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:59:17.740920 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:59:17.747418 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:59:17.759136 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:59:17.760206 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:59:17.761175 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:59:17.762080 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:59:17.762658 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:59:17.763866 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:59:17.764776 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:59:17.765686 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:59:17.766596 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:59:17.767593 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:59:17.768456 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:59:17.769241 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:59:17.769794 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jan 30 13:59:17.770776 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:59:17.771479 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:59:17.772050 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:59:17.772238 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:59:17.772972 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:59:17.773802 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:59:17.774563 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:59:17.774975 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:59:17.775466 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:59:17.775597 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:59:17.776536 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:59:17.776707 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:59:17.777498 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:59:17.777624 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:59:17.778142 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:59:17.778236 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:59:17.789502 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:59:17.792253 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:59:17.792487 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:59:17.794411 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:59:17.796933 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:59:17.797155 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:59:17.797711 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:59:17.797839 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:59:17.811338 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:59:17.811455 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:59:17.817108 ignition[991]: INFO : Ignition 2.19.0 Jan 30 13:59:17.817108 ignition[991]: INFO : Stage: umount Jan 30 13:59:17.817108 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:59:17.817108 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:59:17.820533 ignition[991]: INFO : umount: umount passed Jan 30 13:59:17.820533 ignition[991]: INFO : Ignition finished successfully Jan 30 13:59:17.820473 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:59:17.820575 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:59:17.824006 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:59:17.824155 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:59:17.824713 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:59:17.824762 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:59:17.828165 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jan 30 13:59:17.828233 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:59:17.829022 systemd[1]: Stopped target network.target - Network. Jan 30 13:59:17.829818 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:59:17.829878 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:59:17.839256 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:59:17.839718 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:59:17.849180 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:59:17.851280 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:59:17.851612 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:59:17.851945 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:59:17.851999 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:59:17.852381 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:59:17.852433 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:59:17.852772 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:59:17.852821 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:59:17.855503 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:59:17.855568 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:59:17.858282 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:59:17.858812 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:59:17.861755 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:59:17.865943 systemd-networkd[750]: eth1: DHCPv6 lease lost Jan 30 13:59:17.869168 systemd-networkd[750]: eth0: DHCPv6 lease lost Jan 30 13:59:17.873161 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:59:17.873279 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:59:17.875536 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:59:17.876003 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:59:17.877305 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:59:17.877400 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:59:17.879548 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:59:17.879601 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:59:17.880033 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:59:17.880081 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:59:17.887258 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:59:17.888020 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:59:17.888101 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:59:17.888503 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:59:17.888547 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:59:17.888875 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:59:17.888912 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 30 13:59:17.889677 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:59:17.889718 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:59:17.890560 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:59:17.907684 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:59:17.907835 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:59:17.909712 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:59:17.909764 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:59:17.910317 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:59:17.910353 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:59:17.911586 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:59:17.911638 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:59:17.913042 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:59:17.913116 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:59:17.913935 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:59:17.913979 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:59:17.918423 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:59:17.919450 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:59:17.919962 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:59:17.920825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:59:17.920888 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:59:17.921695 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:59:17.921786 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:59:17.935310 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:59:17.935437 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:59:17.936838 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:59:17.947514 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:59:17.957111 systemd[1]: Switching root. Jan 30 13:59:17.984041 systemd-journald[183]: Journal stopped Jan 30 13:59:19.048982 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 30 13:59:19.049076 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:59:19.052673 kernel: SELinux: policy capability open_perms=1 Jan 30 13:59:19.052701 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:59:19.052713 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:59:19.052725 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:59:19.052747 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:59:19.052759 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:59:19.052771 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:59:19.052786 systemd[1]: Successfully loaded SELinux policy in 36.334ms. 
Jan 30 13:59:19.052810 kernel: audit: type=1403 audit(1738245558.180:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:59:19.052824 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.225ms. Jan 30 13:59:19.052837 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:59:19.052850 systemd[1]: Detected virtualization kvm. Jan 30 13:59:19.052863 systemd[1]: Detected architecture x86-64. Jan 30 13:59:19.052879 systemd[1]: Detected first boot. Jan 30 13:59:19.052897 systemd[1]: Hostname set to <ci-4081.3.0-9-9df89b74d7>. Jan 30 13:59:19.052911 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:59:19.052927 zram_generator::config[1050]: No configuration found. Jan 30 13:59:19.052942 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:59:19.052955 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:59:19.052967 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:59:19.052981 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:59:19.052997 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:59:19.053009 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:59:19.053025 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:59:19.053043 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:59:19.053068 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:59:19.053100 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:59:19.053117 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:59:19.053129 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:59:19.053148 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:59:19.053160 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:59:19.053173 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:59:19.053186 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:59:19.053198 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:59:19.053210 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:59:19.053223 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:59:19.053235 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:59:19.053248 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:59:19.053264 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:59:19.053276 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:59:19.053288 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:59:19.053301 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:59:19.053312 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:59:19.053325 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:59:19.053338 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:59:19.053353 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:59:19.053365 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:59:19.053377 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:59:19.053389 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:59:19.053401 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:59:19.053415 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:59:19.053427 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:59:19.053440 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:19.053452 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:59:19.053466 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:59:19.053479 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:59:19.053492 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:59:19.053504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:59:19.053517 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:59:19.053529 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:59:19.053546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:59:19.053558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:59:19.053571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:59:19.053589 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:59:19.053604 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:59:19.053616 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:59:19.053628 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 13:59:19.053642 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 13:59:19.053654 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:59:19.053666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:59:19.053679 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:59:19.053698 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:59:19.053710 kernel: loop: module loaded Jan 30 13:59:19.053721 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 30 13:59:19.053735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:19.053747 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:59:19.053804 systemd-journald[1148]: Collecting audit messages is disabled. Jan 30 13:59:19.053831 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:59:19.053847 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:59:19.053860 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:59:19.053872 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:59:19.053884 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:59:19.053896 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:59:19.053910 systemd-journald[1148]: Journal started Jan 30 13:59:19.053937 systemd-journald[1148]: Runtime Journal (/run/log/journal/793e8a7df22b45928ac94caa68c3f13a) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:59:19.063239 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:59:19.063297 kernel: fuse: init (API version 7.39) Jan 30 13:59:19.058605 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:59:19.059492 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:59:19.059688 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:59:19.064853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:59:19.065076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:59:19.065872 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:59:19.066081 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:59:19.069290 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:59:19.069496 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:59:19.071272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:59:19.072313 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:59:19.073332 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:59:19.078117 kernel: ACPI: bus type drm_connector registered Jan 30 13:59:19.077952 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:59:19.079626 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:59:19.081957 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:59:19.085410 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:59:19.101066 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:59:19.107252 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:59:19.113280 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:59:19.116216 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:59:19.125408 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 30 13:59:19.129561 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:59:19.131313 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:59:19.140313 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:59:19.141022 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:59:19.152296 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:59:19.159324 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:59:19.168095 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:59:19.168767 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:59:19.173909 systemd-journald[1148]: Time spent on flushing to /var/log/journal/793e8a7df22b45928ac94caa68c3f13a is 37.457ms for 979 entries. Jan 30 13:59:19.173909 systemd-journald[1148]: System Journal (/var/log/journal/793e8a7df22b45928ac94caa68c3f13a) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:59:19.225680 systemd-journald[1148]: Received client request to flush runtime journal. Jan 30 13:59:19.190921 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:59:19.192044 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:59:19.203512 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:59:19.211828 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:59:19.217644 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:59:19.232594 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:59:19.242313 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:59:19.251224 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 30 13:59:19.251244 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 30 13:59:19.256455 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:59:19.269448 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:59:19.302071 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:59:19.310496 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:59:19.336987 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 30 13:59:19.337009 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jan 30 13:59:19.342640 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:59:19.910243 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:59:19.915347 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:59:19.953701 systemd-udevd[1222]: Using default interface naming scheme 'v255'. Jan 30 13:59:19.980899 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 30 13:59:19.988405 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:59:20.007516 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:59:20.047434 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1227) Jan 30 13:59:20.100029 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 30 13:59:20.101276 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:20.101462 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:59:20.109287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:59:20.112340 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:59:20.125527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:59:20.125938 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:59:20.125992 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:59:20.126053 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:20.135035 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:59:20.143652 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:59:20.143848 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:59:20.145745 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:59:20.146054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:59:20.150628 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:59:20.150856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:59:20.160740 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:59:20.160819 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:59:20.229654 systemd-networkd[1226]: lo: Link UP Jan 30 13:59:20.229662 systemd-networkd[1226]: lo: Gained carrier Jan 30 13:59:20.232720 systemd-networkd[1226]: Enumeration completed Jan 30 13:59:20.232927 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:59:20.234414 systemd-networkd[1226]: eth0: Configuring with /run/systemd/network/10-42:32:9f:03:62:14.network. Jan 30 13:59:20.235256 systemd-networkd[1226]: eth1: Configuring with /run/systemd/network/10-76:18:5e:d3:b8:4d.network. Jan 30 13:59:20.236043 systemd-networkd[1226]: eth0: Link UP Jan 30 13:59:20.236049 systemd-networkd[1226]: eth0: Gained carrier Jan 30 13:59:20.240501 systemd-networkd[1226]: eth1: Link UP Jan 30 13:59:20.240512 systemd-networkd[1226]: eth1: Gained carrier Jan 30 13:59:20.241342 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
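The .network units matched here live in /run/systemd/network and are keyed to each NIC's MAC address (10-42:32:9f:03:62:14.network for eth0, 10-76:18:5e:d3:b8:4d.network for eth1), which avoids the "potentially unpredictable interface name" matching the initrd had to fall back on. In systemd-networkd terms such a unit reduces to something like the following (an illustrative reconstruction; the real generated file would also carry the addresses and routes derived from the metadata):

    # /run/systemd/network/10-42:32:9f:03:62:14.network -- sketch only
    [Match]
    MACAddress=42:32:9f:03:62:14

    [Network]
    DHCP=ipv4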
Jan 30 13:59:20.249155 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:59:20.256122 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:59:20.273379 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 13:59:20.298173 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:59:20.331285 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:59:20.358204 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:59:20.373130 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 13:59:20.373251 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 13:59:20.378596 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:59:20.385111 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:59:20.385195 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 13:59:20.385220 kernel: [drm] features: -context_init Jan 30 13:59:20.387585 kernel: [drm] number of scanouts: 1 Jan 30 13:59:20.387662 kernel: [drm] number of cap sets: 0 Jan 30 13:59:20.392258 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 13:59:20.401791 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 13:59:20.401890 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:59:20.413326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:59:20.413670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:59:20.419357 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 13:59:20.431587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:59:20.445432 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:59:20.445783 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:59:20.498032 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:59:20.549925 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:59:20.572881 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:59:20.584445 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:59:20.588319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:59:20.604110 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:59:20.634073 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:59:20.634441 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:59:20.639492 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:59:20.648343 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:59:20.674844 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:59:20.675397 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:59:20.681212 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... 
Jan 30 13:59:20.682386 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:59:20.682441 systemd[1]: Reached target machines.target - Containers. Jan 30 13:59:20.684898 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:59:20.702120 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 13:59:20.704250 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 13:59:20.706421 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:59:20.709084 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:59:20.717384 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:59:20.719861 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:59:20.725105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:59:20.733411 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:59:20.738106 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:59:20.745301 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:59:20.746491 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:59:20.755199 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:59:20.764527 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:59:20.775396 kernel: loop0: detected capacity change from 0 to 210664 Jan 30 13:59:20.797446 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:59:20.829381 kernel: loop1: detected capacity change from 0 to 8 Jan 30 13:59:20.854429 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 13:59:20.901750 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 13:59:20.957128 kernel: loop4: detected capacity change from 0 to 210664 Jan 30 13:59:20.977185 kernel: loop5: detected capacity change from 0 to 8 Jan 30 13:59:20.980142 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 13:59:21.000528 kernel: loop7: detected capacity change from 0 to 142488 Jan 30 13:59:21.015802 (sd-merge)[1316]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 30 13:59:21.016405 (sd-merge)[1316]: Merged extensions into '/usr'. Jan 30 13:59:21.021158 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:59:21.021186 systemd[1]: Reloading... Jan 30 13:59:21.097574 zram_generator::config[1341]: No configuration found. Jan 30 13:59:21.296954 ldconfig[1300]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:59:21.323065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:59:21.385305 systemd[1]: Reloading finished in 363 ms. Jan 30 13:59:21.404163 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
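The loop0 through loop7 capacity changes and the (sd-merge) lines are systemd-sysext activating the four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean') and overlaying them onto /usr, which is why systemd is asked to reload immediately afterwards. Each image is discovered via /etc/extensions (here, the kubernetes entry is the symlink into /opt/extensions written during the files stage) and must carry an extension-release file that matches the host; a minimal sketch, assuming the sysext-bakery conventions:

    # usr/lib/extension-release.d/extension-release.kubernetes (inside the image; illustrative)
    ID=flatcar
    SYSEXT_LEVEL=1.0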
Jan 30 13:59:21.407569 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:59:21.420430 systemd[1]: Starting ensure-sysext.service... Jan 30 13:59:21.427367 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:59:21.435078 systemd[1]: Reloading requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:59:21.437157 systemd[1]: Reloading... Jan 30 13:59:21.481773 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:59:21.483008 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:59:21.484048 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:59:21.485557 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Jan 30 13:59:21.485692 systemd-tmpfiles[1395]: ACLs are not supported, ignoring. Jan 30 13:59:21.488921 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:59:21.489233 systemd-tmpfiles[1395]: Skipping /boot Jan 30 13:59:21.501802 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:59:21.502312 systemd-tmpfiles[1395]: Skipping /boot Jan 30 13:59:21.527870 zram_generator::config[1423]: No configuration found. Jan 30 13:59:21.702052 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:59:21.774432 systemd[1]: Reloading finished in 336 ms. Jan 30 13:59:21.798846 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:59:21.812467 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:59:21.825368 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:59:21.832304 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:59:21.842169 systemd-networkd[1226]: eth0: Gained IPv6LL Jan 30 13:59:21.844206 systemd-networkd[1226]: eth1: Gained IPv6LL Jan 30 13:59:21.847330 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:59:21.855811 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:59:21.867774 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:59:21.879226 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:21.879479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:59:21.885577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:59:21.897707 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:59:21.912409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:59:21.914033 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 30 13:59:21.914218 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:21.919594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:59:21.924078 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:59:21.935710 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:21.936077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:59:21.937710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:59:21.942769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:59:21.942988 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:21.943951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:59:21.948992 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:59:21.959727 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:59:21.964958 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:59:21.975519 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:59:21.989802 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:59:21.990042 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:59:21.997699 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:59:21.998188 augenrules[1510]: No rules Jan 30 13:59:22.000551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:59:22.011241 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:59:22.017716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:22.018041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:59:22.025513 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:59:22.031613 systemd-resolved[1483]: Positive Trust Anchors: Jan 30 13:59:22.031696 systemd-resolved[1483]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:59:22.031734 systemd-resolved[1483]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:59:22.038785 systemd-resolved[1483]: Using system hostname 'ci-4081.3.0-9-9df89b74d7'. Jan 30 13:59:22.039373 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 30 13:59:22.039978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:59:22.040055 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:59:22.044319 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:59:22.047407 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:59:22.048605 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:59:22.049113 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:59:22.050184 systemd[1]: Finished ensure-sysext.service. Jan 30 13:59:22.051038 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:59:22.051941 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:59:22.062558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:59:22.062909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:59:22.068779 systemd[1]: Reached target network.target - Network. Jan 30 13:59:22.071934 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:59:22.072550 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:59:22.073067 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:59:22.081493 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:59:22.088277 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:59:22.153587 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:59:22.156982 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:59:22.157656 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:59:22.158860 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:59:22.159468 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:59:22.159907 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:59:22.159938 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:59:22.160375 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:59:22.160963 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:59:22.161882 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:59:22.162421 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:59:22.163847 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:59:22.167882 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:59:22.173149 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:59:22.176333 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 30 13:59:22.176853 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:59:22.177302 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:59:22.177915 systemd[1]: System is tainted: cgroupsv1 Jan 30 13:59:22.177963 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:59:22.178000 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:59:22.181383 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:59:22.188320 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:59:22.198342 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:59:22.204320 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:59:22.212345 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:59:22.212792 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:59:22.219853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:22.221060 jq[1543]: false Jan 30 13:59:22.237339 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:59:22.251709 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:59:22.264494 dbus-daemon[1542]: [system] SELinux support is enabled Jan 30 13:59:22.266227 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:59:22.272071 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:59:22.279838 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:59:22.294431 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:59:22.298398 coreos-metadata[1540]: Jan 30 13:59:22.298 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:59:22.299213 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:59:22.735080 systemd-timesyncd[1533]: Contacted time server 135.148.100.14:123 (0.flatcar.pool.ntp.org). Jan 30 13:59:22.735135 systemd-timesyncd[1533]: Initial clock synchronization to Thu 2025-01-30 13:59:22.734917 UTC. Jan 30 13:59:22.737401 systemd-resolved[1483]: Clock change detected. Flushing caches. Jan 30 13:59:22.744746 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:59:22.759394 coreos-metadata[1540]: Jan 30 13:59:22.759 INFO Fetch successful Jan 30 13:59:22.760461 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:59:22.761858 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:59:22.778201 extend-filesystems[1546]: Found loop4 Jan 30 13:59:22.784557 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
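Note the wall-clock step in the timestamps above: entries jump from 13:59:22.29x to 13:59:22.73x once systemd-timesyncd reaches 0.flatcar.pool.ntp.org (135.148.100.14), an initial correction of roughly +0.44 s, and systemd-resolved reacts with "Clock change detected. Flushing caches." because cached TTL arithmetic cannot survive a clock step. To confirm synchronization on a similar host (a sketch; assumes systemd-timesyncd rather than chrony or ntpd):

    # Sketch: NTP state after the initial clock step
    timedatectl timesync-status    # server, poll interval, last measured offset
    timedatectl show-timesync      # the same properties in key=value form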
Jan 30 13:59:22.787457 extend-filesystems[1546]: Found loop5 Jan 30 13:59:22.787457 extend-filesystems[1546]: Found loop6 Jan 30 13:59:22.787457 extend-filesystems[1546]: Found loop7 Jan 30 13:59:22.787457 extend-filesystems[1546]: Found vda Jan 30 13:59:22.787457 extend-filesystems[1546]: Found vda1 Jan 30 13:59:22.787457 extend-filesystems[1546]: Found vda2 Jan 30 13:59:22.784866 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:59:22.818339 jq[1569]: true Jan 30 13:59:22.825649 extend-filesystems[1546]: Found vda3 Jan 30 13:59:22.825649 extend-filesystems[1546]: Found usr Jan 30 13:59:22.825649 extend-filesystems[1546]: Found vda4 Jan 30 13:59:22.825649 extend-filesystems[1546]: Found vda6 Jan 30 13:59:22.825649 extend-filesystems[1546]: Found vda7 Jan 30 13:59:22.825649 extend-filesystems[1546]: Found vda9 Jan 30 13:59:22.825649 extend-filesystems[1546]: Checking size of /dev/vda9 Jan 30 13:59:22.845138 update_engine[1567]: I20250130 13:59:22.834414 1567 main.cc:92] Flatcar Update Engine starting Jan 30 13:59:22.799281 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:59:22.863220 extend-filesystems[1546]: Resized partition /dev/vda9 Jan 30 13:59:22.799642 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:59:22.867844 update_engine[1567]: I20250130 13:59:22.863514 1567 update_check_scheduler.cc:74] Next update check in 7m15s Jan 30 13:59:22.821627 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:59:22.835175 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:59:22.835448 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:59:22.875838 extend-filesystems[1594]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:59:22.891283 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 13:59:22.896039 jq[1586]: true Jan 30 13:59:22.900527 (ntainerd)[1597]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:59:22.915366 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:59:22.932663 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:59:22.935587 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:59:22.935865 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:59:22.935920 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:59:22.937904 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:59:22.938042 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 30 13:59:22.938070 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 30 13:59:22.960303 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1232) Jan 30 13:59:22.943044 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:59:22.949485 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:59:22.986852 tar[1584]: linux-amd64/helm Jan 30 13:59:23.029473 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 13:59:23.029520 extend-filesystems[1594]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:59:23.029520 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 13:59:23.029520 extend-filesystems[1594]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 13:59:23.044714 extend-filesystems[1546]: Resized filesystem in /dev/vda9 Jan 30 13:59:23.044714 extend-filesystems[1546]: Found vdb Jan 30 13:59:23.058628 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:59:23.058906 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:59:23.063980 systemd-logind[1563]: New seat seat0. Jan 30 13:59:23.076456 systemd-logind[1563]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:59:23.076481 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:59:23.076805 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:59:23.130014 bash[1628]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:59:23.132050 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:59:23.155490 systemd[1]: Starting sshkeys.service... Jan 30 13:59:23.198105 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:59:23.209493 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:59:23.216273 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:59:23.286029 coreos-metadata[1634]: Jan 30 13:59:23.285 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:59:23.297697 coreos-metadata[1634]: Jan 30 13:59:23.297 INFO Fetch successful Jan 30 13:59:23.313342 unknown[1634]: wrote ssh authorized keys file for user: core Jan 30 13:59:23.329316 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:59:23.349168 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:59:23.357942 update-ssh-keys[1653]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:59:23.365637 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:59:23.367338 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:59:23.372512 systemd[1]: Finished sshkeys.service. Jan 30 13:59:23.408656 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:59:23.408946 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:59:23.424636 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
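The resize above happens online: resize2fs 1.47.1 grows /dev/vda9 from 553472 to 15121403 blocks of 4 KiB, i.e. from about 2.1 GiB to about 57.7 GiB (15121403 x 4096 = 61,937,266,688 bytes), while the filesystem stays mounted on /. Flatcar's extend-filesystems.service drives this automatically at boot; the manual equivalent would be, as a sketch using the device names from this log:

    # Sketch: what extend-filesystems.service effectively did
    lsblk /dev/vda        # confirm the partition already spans the enlarged disk
    resize2fs /dev/vda9   # online-grow ext4 to fill the partition
    # check: 15121403 blocks * 4096 bytes/block ~= 57.7 GiB, matching the kernel line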
Jan 30 13:59:23.472791 containerd[1597]: time="2025-01-30T13:59:23.472664517Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:59:23.478071 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:59:23.486857 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:59:23.498510 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:59:23.503614 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:59:23.537378 containerd[1597]: time="2025-01-30T13:59:23.536976641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:23.539598 containerd[1597]: time="2025-01-30T13:59:23.539210201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:59:23.539598 containerd[1597]: time="2025-01-30T13:59:23.539264746Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:59:23.539598 containerd[1597]: time="2025-01-30T13:59:23.539283888Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:59:23.539598 containerd[1597]: time="2025-01-30T13:59:23.539462365Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:59:23.539598 containerd[1597]: time="2025-01-30T13:59:23.539481004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:23.539598 containerd[1597]: time="2025-01-30T13:59:23.539533388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:59:23.539598 containerd[1597]: time="2025-01-30T13:59:23.539545283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:23.539950 containerd[1597]: time="2025-01-30T13:59:23.539814572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:59:23.539950 containerd[1597]: time="2025-01-30T13:59:23.539832655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:23.539950 containerd[1597]: time="2025-01-30T13:59:23.539847594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:59:23.539950 containerd[1597]: time="2025-01-30T13:59:23.539856916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:23.539950 containerd[1597]: time="2025-01-30T13:59:23.539934677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:59:23.540651 containerd[1597]: time="2025-01-30T13:59:23.540139866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:59:23.540651 containerd[1597]: time="2025-01-30T13:59:23.540358019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:59:23.540651 containerd[1597]: time="2025-01-30T13:59:23.540379507Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:59:23.540651 containerd[1597]: time="2025-01-30T13:59:23.540490825Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:59:23.540651 containerd[1597]: time="2025-01-30T13:59:23.540574430Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:59:23.547278 containerd[1597]: time="2025-01-30T13:59:23.546618356Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:59:23.547278 containerd[1597]: time="2025-01-30T13:59:23.546708253Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:59:23.547278 containerd[1597]: time="2025-01-30T13:59:23.546733543Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:59:23.547278 containerd[1597]: time="2025-01-30T13:59:23.546777489Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:59:23.547278 containerd[1597]: time="2025-01-30T13:59:23.546805916Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:59:23.547278 containerd[1597]: time="2025-01-30T13:59:23.547072769Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:59:23.549517 containerd[1597]: time="2025-01-30T13:59:23.549467384Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:59:23.549760 containerd[1597]: time="2025-01-30T13:59:23.549725940Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:59:23.549927 containerd[1597]: time="2025-01-30T13:59:23.549761559Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:59:23.549927 containerd[1597]: time="2025-01-30T13:59:23.549784010Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:59:23.549927 containerd[1597]: time="2025-01-30T13:59:23.549806747Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:59:23.549927 containerd[1597]: time="2025-01-30T13:59:23.549857044Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:59:23.549927 containerd[1597]: time="2025-01-30T13:59:23.549878969Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:59:23.549927 containerd[1597]: time="2025-01-30T13:59:23.549899753Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 30 13:59:23.549927 containerd[1597]: time="2025-01-30T13:59:23.549920788Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.549949477Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.549968468Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.549987555Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.550017118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.550037275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.550055286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.550074686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.550092244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.550111196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.550127343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.550144933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.550165906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.550193749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551422 containerd[1597]: time="2025-01-30T13:59:23.550212601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551837 containerd[1597]: time="2025-01-30T13:59:23.550228444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551837 containerd[1597]: time="2025-01-30T13:59:23.551736564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551837 containerd[1597]: time="2025-01-30T13:59:23.551790501Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:59:23.551837 containerd[1597]: time="2025-01-30T13:59:23.551832449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 30 13:59:23.551927 containerd[1597]: time="2025-01-30T13:59:23.551851413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.551927 containerd[1597]: time="2025-01-30T13:59:23.551866428Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:59:23.553306 containerd[1597]: time="2025-01-30T13:59:23.552766791Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:59:23.553306 containerd[1597]: time="2025-01-30T13:59:23.552817302Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:59:23.553306 containerd[1597]: time="2025-01-30T13:59:23.552836941Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:59:23.553306 containerd[1597]: time="2025-01-30T13:59:23.552854414Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:59:23.553306 containerd[1597]: time="2025-01-30T13:59:23.552867940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:59:23.553306 containerd[1597]: time="2025-01-30T13:59:23.552900929Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:59:23.553306 containerd[1597]: time="2025-01-30T13:59:23.552915660Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:59:23.553306 containerd[1597]: time="2025-01-30T13:59:23.552928640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:59:23.553572 containerd[1597]: time="2025-01-30T13:59:23.553432928Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:59:23.553572 containerd[1597]: time="2025-01-30T13:59:23.553524132Z" level=info msg="Connect containerd service" Jan 30 13:59:23.553810 containerd[1597]: time="2025-01-30T13:59:23.553617642Z" level=info msg="using legacy CRI server" Jan 30 13:59:23.553810 containerd[1597]: time="2025-01-30T13:59:23.553642655Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:59:23.555005 containerd[1597]: time="2025-01-30T13:59:23.553858409Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:59:23.557429 containerd[1597]: time="2025-01-30T13:59:23.556822940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 
13:59:23.557429 containerd[1597]: time="2025-01-30T13:59:23.557003292Z" level=info msg="Start subscribing containerd event" Jan 30 13:59:23.557429 containerd[1597]: time="2025-01-30T13:59:23.557074049Z" level=info msg="Start recovering state" Jan 30 13:59:23.557429 containerd[1597]: time="2025-01-30T13:59:23.557176609Z" level=info msg="Start event monitor" Jan 30 13:59:23.557429 containerd[1597]: time="2025-01-30T13:59:23.557203258Z" level=info msg="Start snapshots syncer" Jan 30 13:59:23.557429 containerd[1597]: time="2025-01-30T13:59:23.557218391Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:59:23.557429 containerd[1597]: time="2025-01-30T13:59:23.557228059Z" level=info msg="Start streaming server" Jan 30 13:59:23.562263 containerd[1597]: time="2025-01-30T13:59:23.561438710Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:59:23.562263 containerd[1597]: time="2025-01-30T13:59:23.561511124Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:59:23.562263 containerd[1597]: time="2025-01-30T13:59:23.561965194Z" level=info msg="containerd successfully booted in 0.091483s" Jan 30 13:59:23.562134 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:59:23.925399 tar[1584]: linux-amd64/LICENSE Jan 30 13:59:23.925594 tar[1584]: linux-amd64/README.md Jan 30 13:59:23.942915 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:59:24.275437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:24.279874 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:59:24.282959 systemd[1]: Startup finished in 6.542s (kernel) + 5.704s (userspace) = 12.247s. Jan 30 13:59:24.285194 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:59:24.963290 kubelet[1690]: E0130 13:59:24.963202 1690 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:59:24.966436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:59:24.966745 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:59:26.021159 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:59:26.031848 systemd[1]: Started sshd@0-164.92.85.159:22-147.75.109.163:38894.service - OpenSSH per-connection server daemon (147.75.109.163:38894). Jan 30 13:59:26.095220 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 38894 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:26.098602 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:26.111991 systemd-logind[1563]: New session 1 of user core. Jan 30 13:59:26.113540 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:59:26.125689 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:59:26.144129 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:59:26.151609 systemd[1]: Starting user@500.service - User Manager for UID 500... 
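The "Start cri plugin with config {...}" dump earlier in this sequence is containerd echoing its effective CRI configuration: Snapshotter overlayfs, SandboxImage registry.k8s.io/pause:3.8, SystemdCgroup false for the runc runtime, and CNI directories /opt/cni/bin and /etc/cni/net.d. The "failed to load cni during init" error that follows it is expected at this stage, since /etc/cni/net.d stays empty until a CNI plugin installs a config and the CRI plugin retries later. A config.toml fragment that would produce those logged values, shown as a sketch rather than this host's actual file:

    # Sketch: containerd 1.7 CRI settings matching the logged config dump
    cat <<'EOF'
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir  = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false   # matches Options:map[SystemdCgroup:false] above
    EOF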
Jan 30 13:59:26.161289 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:59:26.266593 systemd[1709]: Queued start job for default target default.target. Jan 30 13:59:26.267201 systemd[1709]: Created slice app.slice - User Application Slice. Jan 30 13:59:26.267247 systemd[1709]: Reached target paths.target - Paths. Jan 30 13:59:26.267269 systemd[1709]: Reached target timers.target - Timers. Jan 30 13:59:26.281993 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:59:26.298364 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:59:26.298490 systemd[1709]: Reached target sockets.target - Sockets. Jan 30 13:59:26.298514 systemd[1709]: Reached target basic.target - Basic System. Jan 30 13:59:26.298581 systemd[1709]: Reached target default.target - Main User Target. Jan 30 13:59:26.298624 systemd[1709]: Startup finished in 129ms. Jan 30 13:59:26.299305 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:59:26.307270 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:59:26.380796 systemd[1]: Started sshd@1-164.92.85.159:22-147.75.109.163:38904.service - OpenSSH per-connection server daemon (147.75.109.163:38904). Jan 30 13:59:26.434986 sshd[1721]: Accepted publickey for core from 147.75.109.163 port 38904 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:26.437666 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:26.444202 systemd-logind[1563]: New session 2 of user core. Jan 30 13:59:26.453818 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:59:26.527535 sshd[1721]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:26.538688 systemd[1]: Started sshd@2-164.92.85.159:22-147.75.109.163:38908.service - OpenSSH per-connection server daemon (147.75.109.163:38908). Jan 30 13:59:26.539750 systemd[1]: sshd@1-164.92.85.159:22-147.75.109.163:38904.service: Deactivated successfully. Jan 30 13:59:26.544946 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:59:26.546445 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:59:26.549816 systemd-logind[1563]: Removed session 2. Jan 30 13:59:26.579262 sshd[1726]: Accepted publickey for core from 147.75.109.163 port 38908 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:26.580855 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:26.587315 systemd-logind[1563]: New session 3 of user core. Jan 30 13:59:26.596166 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:59:26.656546 sshd[1726]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:26.668403 systemd[1]: Started sshd@3-164.92.85.159:22-147.75.109.163:38924.service - OpenSSH per-connection server daemon (147.75.109.163:38924). Jan 30 13:59:26.669118 systemd[1]: sshd@2-164.92.85.159:22-147.75.109.163:38908.service: Deactivated successfully. Jan 30 13:59:26.675464 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:59:26.677077 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:59:26.679295 systemd-logind[1563]: Removed session 3. 
Jan 30 13:59:26.704638 sshd[1734]: Accepted publickey for core from 147.75.109.163 port 38924 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:26.706470 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:26.712411 systemd-logind[1563]: New session 4 of user core. Jan 30 13:59:26.720705 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:59:26.784761 sshd[1734]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:26.790972 systemd[1]: sshd@3-164.92.85.159:22-147.75.109.163:38924.service: Deactivated successfully. Jan 30 13:59:26.794985 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:59:26.796057 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:59:26.802710 systemd[1]: Started sshd@4-164.92.85.159:22-147.75.109.163:38936.service - OpenSSH per-connection server daemon (147.75.109.163:38936). Jan 30 13:59:26.803998 systemd-logind[1563]: Removed session 4. Jan 30 13:59:26.861378 sshd[1745]: Accepted publickey for core from 147.75.109.163 port 38936 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:26.863188 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:26.869092 systemd-logind[1563]: New session 5 of user core. Jan 30 13:59:26.884731 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:59:26.954006 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:59:26.954362 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:59:26.973767 sudo[1749]: pam_unix(sudo:session): session closed for user root Jan 30 13:59:26.978608 sshd[1745]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:26.987614 systemd[1]: Started sshd@5-164.92.85.159:22-147.75.109.163:38950.service - OpenSSH per-connection server daemon (147.75.109.163:38950). Jan 30 13:59:26.989167 systemd[1]: sshd@4-164.92.85.159:22-147.75.109.163:38936.service: Deactivated successfully. Jan 30 13:59:26.990884 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:59:26.992161 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:59:26.993959 systemd-logind[1563]: Removed session 5. Jan 30 13:59:27.027847 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 38950 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:27.029607 sshd[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:27.034450 systemd-logind[1563]: New session 6 of user core. Jan 30 13:59:27.040634 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:59:27.100215 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:59:27.100575 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:59:27.105687 sudo[1759]: pam_unix(sudo:session): session closed for user root Jan 30 13:59:27.112518 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:59:27.112818 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:59:27.144026 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 30 13:59:27.147947 auditctl[1762]: No rules Jan 30 13:59:27.148768 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:59:27.149169 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:59:27.157661 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:59:27.201228 augenrules[1781]: No rules Jan 30 13:59:27.202021 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:59:27.203914 sudo[1758]: pam_unix(sudo:session): session closed for user root Jan 30 13:59:27.209599 sshd[1751]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:27.214626 systemd[1]: sshd@5-164.92.85.159:22-147.75.109.163:38950.service: Deactivated successfully. Jan 30 13:59:27.217526 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:59:27.224630 systemd[1]: Started sshd@6-164.92.85.159:22-147.75.109.163:38952.service - OpenSSH per-connection server daemon (147.75.109.163:38952). Jan 30 13:59:27.225129 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:59:27.226630 systemd-logind[1563]: Removed session 6. Jan 30 13:59:27.267214 sshd[1790]: Accepted publickey for core from 147.75.109.163 port 38952 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:59:27.268993 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:59:27.274462 systemd-logind[1563]: New session 7 of user core. Jan 30 13:59:27.287675 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:59:27.348413 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:59:27.348800 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:59:27.750557 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:59:27.752024 (dockerd)[1811]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:59:28.170598 dockerd[1811]: time="2025-01-30T13:59:28.170269695Z" level=info msg="Starting up" Jan 30 13:59:28.275991 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3776989857-merged.mount: Deactivated successfully. Jan 30 13:59:28.328167 systemd[1]: var-lib-docker-metacopy\x2dcheck2304625179-merged.mount: Deactivated successfully. Jan 30 13:59:28.347477 dockerd[1811]: time="2025-01-30T13:59:28.347202140Z" level=info msg="Loading containers: start." Jan 30 13:59:28.465262 kernel: Initializing XFRM netlink socket Jan 30 13:59:28.554063 systemd-networkd[1226]: docker0: Link UP Jan 30 13:59:28.572831 dockerd[1811]: time="2025-01-30T13:59:28.572417440Z" level=info msg="Loading containers: done." 
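dockerd has now created the docker0 bridge and its XFRM netlink socket; the entries that follow show it settling on the overlay2 storage driver (with a warning that native overlay diffing is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR) and publishing the API on /run/docker.sock. Two quick post-start checks, sketched on the assumption that the docker CLI is installed:

    # Sketch: verify the daemon that just logged "Loading containers: done."
    docker info --format '{{.Driver}} {{.ServerVersion}}'   # expect overlay2 and 26.1.0
    ip link show docker0                                    # the bridge brought up above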
Jan 30 13:59:28.593752 dockerd[1811]: time="2025-01-30T13:59:28.593682607Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:59:28.593964 dockerd[1811]: time="2025-01-30T13:59:28.593846981Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:59:28.594040 dockerd[1811]: time="2025-01-30T13:59:28.594001752Z" level=info msg="Daemon has completed initialization" Jan 30 13:59:28.637065 dockerd[1811]: time="2025-01-30T13:59:28.636843731Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:59:28.637211 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:59:29.583351 containerd[1597]: time="2025-01-30T13:59:29.583061476Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:59:30.097079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317561128.mount: Deactivated successfully. Jan 30 13:59:31.301953 containerd[1597]: time="2025-01-30T13:59:31.301895940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:31.303333 containerd[1597]: time="2025-01-30T13:59:31.303247717Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 13:59:31.304008 containerd[1597]: time="2025-01-30T13:59:31.303956302Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:31.306732 containerd[1597]: time="2025-01-30T13:59:31.306687870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:31.308074 containerd[1597]: time="2025-01-30T13:59:31.307877747Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.724775625s" Jan 30 13:59:31.308074 containerd[1597]: time="2025-01-30T13:59:31.307921098Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:59:31.339280 containerd[1597]: time="2025-01-30T13:59:31.339229032Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:59:32.858292 containerd[1597]: time="2025-01-30T13:59:32.858092963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:32.859516 containerd[1597]: time="2025-01-30T13:59:32.859457457Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 13:59:32.860169 containerd[1597]: time="2025-01-30T13:59:32.859869503Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:32.863060 containerd[1597]: time="2025-01-30T13:59:32.863000090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:32.864259 containerd[1597]: time="2025-01-30T13:59:32.864170760Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.524889591s" Jan 30 13:59:32.864259 containerd[1597]: time="2025-01-30T13:59:32.864211473Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:59:32.896251 containerd[1597]: time="2025-01-30T13:59:32.896167638Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:59:34.014676 containerd[1597]: time="2025-01-30T13:59:34.014626025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:34.016360 containerd[1597]: time="2025-01-30T13:59:34.016304047Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 13:59:34.017071 containerd[1597]: time="2025-01-30T13:59:34.016577992Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:34.019935 containerd[1597]: time="2025-01-30T13:59:34.019616470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:34.020965 containerd[1597]: time="2025-01-30T13:59:34.020790365Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.124577755s" Jan 30 13:59:34.020965 containerd[1597]: time="2025-01-30T13:59:34.020834698Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:59:34.047398 containerd[1597]: time="2025-01-30T13:59:34.047356743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:59:35.149697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849517924.mount: Deactivated successfully. Jan 30 13:59:35.150785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:59:35.158508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:35.328606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:59:35.341768 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:59:35.414670 kubelet[2058]: E0130 13:59:35.413998 2058 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:59:35.418760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:59:35.419005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:59:35.702759 containerd[1597]: time="2025-01-30T13:59:35.702595749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:35.704098 containerd[1597]: time="2025-01-30T13:59:35.704039604Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:59:35.704981 containerd[1597]: time="2025-01-30T13:59:35.704925492Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:35.707632 containerd[1597]: time="2025-01-30T13:59:35.707544641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:35.708586 containerd[1597]: time="2025-01-30T13:59:35.708415066Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.661017176s" Jan 30 13:59:35.708586 containerd[1597]: time="2025-01-30T13:59:35.708458706Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:59:35.746880 containerd[1597]: time="2025-01-30T13:59:35.746539757Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:59:35.748175 systemd-resolved[1483]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 30 13:59:36.291283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060337564.mount: Deactivated successfully. 
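This is the second kubelet exit with status 1 for the same reason as at 13:59:24: the unit points kubelet at /var/lib/kubelet/config.yaml, a file that is normally written by kubeadm during "kubeadm init" or "kubeadm join", so the service crash-loops (the restart counter above is at 1) until cluster bootstrap actually runs. Confirming that on a comparable host, as a sketch:

    # Sketch: why kubelet keeps failing before kubeadm has run
    systemctl status kubelet --no-pager     # status=1/FAILURE, scheduled restart
    ls -l /var/lib/kubelet/config.yaml      # missing until kubeadm init/join writes it
    journalctl -u kubelet -n 5 --no-pager   # the "failed to load Kubelet config file" error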
Jan 30 13:59:37.096155 containerd[1597]: time="2025-01-30T13:59:37.096082913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:37.097644 containerd[1597]: time="2025-01-30T13:59:37.097577528Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:59:37.098349 containerd[1597]: time="2025-01-30T13:59:37.097997261Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:37.101267 containerd[1597]: time="2025-01-30T13:59:37.100829078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:37.102438 containerd[1597]: time="2025-01-30T13:59:37.102284754Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.355702701s" Jan 30 13:59:37.102438 containerd[1597]: time="2025-01-30T13:59:37.102330707Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:59:37.132501 containerd[1597]: time="2025-01-30T13:59:37.132452453Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:59:37.535710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1243926140.mount: Deactivated successfully. 
Jan 30 13:59:37.541905 containerd[1597]: time="2025-01-30T13:59:37.540761954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:37.541905 containerd[1597]: time="2025-01-30T13:59:37.541682199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:59:37.541905 containerd[1597]: time="2025-01-30T13:59:37.541846430Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:37.544043 containerd[1597]: time="2025-01-30T13:59:37.544003927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:37.544973 containerd[1597]: time="2025-01-30T13:59:37.544937042Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 412.441364ms" Jan 30 13:59:37.545067 containerd[1597]: time="2025-01-30T13:59:37.544978254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:59:37.580139 containerd[1597]: time="2025-01-30T13:59:37.580100548Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:59:38.069201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924873240.mount: Deactivated successfully. Jan 30 13:59:38.850520 systemd-resolved[1483]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 30 13:59:39.736142 containerd[1597]: time="2025-01-30T13:59:39.735646646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:39.737186 containerd[1597]: time="2025-01-30T13:59:39.737113858Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:59:39.737891 containerd[1597]: time="2025-01-30T13:59:39.737445908Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:39.741447 containerd[1597]: time="2025-01-30T13:59:39.741366170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:39.743404 containerd[1597]: time="2025-01-30T13:59:39.743161653Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.162816511s" Jan 30 13:59:39.743404 containerd[1597]: time="2025-01-30T13:59:39.743224676Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:59:43.127530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:43.134572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:43.164744 systemd[1]: Reloading requested from client PID 2235 ('systemctl') (unit session-7.scope)... Jan 30 13:59:43.164782 systemd[1]: Reloading... Jan 30 13:59:43.295272 zram_generator::config[2275]: No configuration found. Jan 30 13:59:43.448885 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:59:43.525321 systemd[1]: Reloading finished in 359 ms. Jan 30 13:59:43.573046 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:59:43.573193 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:59:43.573689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:43.577559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:43.715533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:43.718089 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:59:43.778719 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:59:43.778719 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:59:43.778719 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:59:43.782110 kubelet[2337]: I0130 13:59:43.782011 2337 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:59:44.203430 kubelet[2337]: I0130 13:59:44.202974 2337 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:59:44.203430 kubelet[2337]: I0130 13:59:44.203008 2337 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:59:44.203430 kubelet[2337]: I0130 13:59:44.203290 2337 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:59:44.221022 kubelet[2337]: I0130 13:59:44.220704 2337 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:59:44.221337 kubelet[2337]: E0130 13:59:44.221315 2337 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://164.92.85.159:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:44.235578 kubelet[2337]: I0130 13:59:44.235543 2337 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:59:44.237388 kubelet[2337]: I0130 13:59:44.237315 2337 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:59:44.237571 kubelet[2337]: I0130 13:59:44.237382 2337 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-9-9df89b74d7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:59:44.237671 kubelet[2337]: I0130 13:59:44.237580 2337 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 30 13:59:44.237671 kubelet[2337]: I0130 13:59:44.237592 2337 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:59:44.237771 kubelet[2337]: I0130 13:59:44.237757 2337 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:59:44.238573 kubelet[2337]: I0130 13:59:44.238554 2337 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:59:44.238573 kubelet[2337]: I0130 13:59:44.238577 2337 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:59:44.238674 kubelet[2337]: I0130 13:59:44.238612 2337 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:59:44.238674 kubelet[2337]: I0130 13:59:44.238635 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:59:44.241676 kubelet[2337]: W0130 13:59:44.241465 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.85.159:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:44.241676 kubelet[2337]: E0130 13:59:44.241518 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.85.159:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:44.241676 kubelet[2337]: W0130 13:59:44.241572 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.85.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-9-9df89b74d7&limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:44.241676 kubelet[2337]: E0130 13:59:44.241607 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.85.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-9-9df89b74d7&limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:44.241894 kubelet[2337]: I0130 13:59:44.241780 2337 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:59:44.244082 kubelet[2337]: I0130 13:59:44.243307 2337 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:59:44.244082 kubelet[2337]: W0130 13:59:44.243380 2337 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:59:44.244227 kubelet[2337]: I0130 13:59:44.244213 2337 server.go:1264] "Started kubelet" Jan 30 13:59:44.255421 kubelet[2337]: E0130 13:59:44.255165 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.92.85.159:6443/api/v1/namespaces/default/events\": dial tcp 164.92.85.159:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-9-9df89b74d7.181f7d2635442bbf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-9-9df89b74d7,UID:ci-4081.3.0-9-9df89b74d7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-9-9df89b74d7,},FirstTimestamp:2025-01-30 13:59:44.244190143 +0000 UTC m=+0.522078570,LastTimestamp:2025-01-30 13:59:44.244190143 +0000 UTC m=+0.522078570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-9-9df89b74d7,}" Jan 30 13:59:44.257348 kubelet[2337]: I0130 13:59:44.255787 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:59:44.257348 kubelet[2337]: I0130 13:59:44.256297 2337 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:59:44.257348 kubelet[2337]: I0130 13:59:44.256353 2337 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:59:44.259764 kubelet[2337]: I0130 13:59:44.259730 2337 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:59:44.260172 kubelet[2337]: I0130 13:59:44.260149 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:59:44.267438 kubelet[2337]: I0130 13:59:44.267396 2337 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:59:44.268196 kubelet[2337]: I0130 13:59:44.268167 2337 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:59:44.268362 kubelet[2337]: I0130 13:59:44.268344 2337 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:59:44.269377 kubelet[2337]: W0130 13:59:44.269324 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.85.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:44.269483 kubelet[2337]: E0130 13:59:44.269387 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.85.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:44.269483 kubelet[2337]: E0130 13:59:44.269472 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.85.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-9-9df89b74d7?timeout=10s\": dial tcp 164.92.85.159:6443: connect: connection refused" interval="200ms" Jan 30 13:59:44.269688 kubelet[2337]: I0130 13:59:44.269672 2337 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:59:44.269756 kubelet[2337]: I0130 13:59:44.269742 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory 
Jan 30 13:59:44.272196 kubelet[2337]: I0130 13:59:44.272172 2337 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:59:44.285591 kubelet[2337]: E0130 13:59:44.284386 2337 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:59:44.289277 kubelet[2337]: I0130 13:59:44.284851 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:59:44.302216 kubelet[2337]: I0130 13:59:44.302181 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:59:44.302543 kubelet[2337]: I0130 13:59:44.302523 2337 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:59:44.302642 kubelet[2337]: I0130 13:59:44.302633 2337 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:59:44.302785 kubelet[2337]: E0130 13:59:44.302758 2337 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:59:44.307840 kubelet[2337]: W0130 13:59:44.307796 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.85.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:44.307840 kubelet[2337]: E0130 13:59:44.307840 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.85.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:44.309613 kubelet[2337]: I0130 13:59:44.309461 2337 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:59:44.309613 kubelet[2337]: I0130 13:59:44.309482 2337 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:59:44.309613 kubelet[2337]: I0130 13:59:44.309501 2337 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:59:44.311027 kubelet[2337]: I0130 13:59:44.310984 2337 policy_none.go:49] "None policy: Start" Jan 30 13:59:44.312038 kubelet[2337]: I0130 13:59:44.312019 2337 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:59:44.312181 kubelet[2337]: I0130 13:59:44.312048 2337 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:59:44.322268 kubelet[2337]: I0130 13:59:44.321921 2337 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:59:44.324461 kubelet[2337]: I0130 13:59:44.324301 2337 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:59:44.324660 kubelet[2337]: I0130 13:59:44.324640 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:59:44.326084 kubelet[2337]: E0130 13:59:44.326039 2337 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:44.369492 kubelet[2337]: I0130 13:59:44.369423 2337 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.369978 kubelet[2337]: E0130 13:59:44.369938 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.85.159:6443/api/v1/nodes\": dial tcp 
164.92.85.159:6443: connect: connection refused" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.404153 kubelet[2337]: I0130 13:59:44.403254 2337 topology_manager.go:215] "Topology Admit Handler" podUID="b975b3094c512980958309fefeb26256" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.404486 kubelet[2337]: I0130 13:59:44.404425 2337 topology_manager.go:215] "Topology Admit Handler" podUID="8467671c20b5859c7cf6205a4823d31c" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.405976 kubelet[2337]: I0130 13:59:44.405304 2337 topology_manager.go:215] "Topology Admit Handler" podUID="5554adb80b1ae9610807c70b0b8e16c8" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.470370 kubelet[2337]: E0130 13:59:44.470185 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.85.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-9-9df89b74d7?timeout=10s\": dial tcp 164.92.85.159:6443: connect: connection refused" interval="400ms" Jan 30 13:59:44.569788 kubelet[2337]: I0130 13:59:44.569728 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b975b3094c512980958309fefeb26256-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-9-9df89b74d7\" (UID: \"b975b3094c512980958309fefeb26256\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.569788 kubelet[2337]: I0130 13:59:44.569775 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b975b3094c512980958309fefeb26256-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-9-9df89b74d7\" (UID: \"b975b3094c512980958309fefeb26256\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.569788 kubelet[2337]: I0130 13:59:44.569799 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8467671c20b5859c7cf6205a4823d31c-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-9-9df89b74d7\" (UID: \"8467671c20b5859c7cf6205a4823d31c\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.570015 kubelet[2337]: I0130 13:59:44.569817 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8467671c20b5859c7cf6205a4823d31c-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-9-9df89b74d7\" (UID: \"8467671c20b5859c7cf6205a4823d31c\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.570015 kubelet[2337]: I0130 13:59:44.569835 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8467671c20b5859c7cf6205a4823d31c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-9-9df89b74d7\" (UID: \"8467671c20b5859c7cf6205a4823d31c\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.570015 kubelet[2337]: I0130 13:59:44.569851 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b975b3094c512980958309fefeb26256-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-9-9df89b74d7\" (UID: \"b975b3094c512980958309fefeb26256\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.570015 kubelet[2337]: I0130 13:59:44.569884 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8467671c20b5859c7cf6205a4823d31c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-9-9df89b74d7\" (UID: \"8467671c20b5859c7cf6205a4823d31c\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.570015 kubelet[2337]: I0130 13:59:44.569898 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8467671c20b5859c7cf6205a4823d31c-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-9-9df89b74d7\" (UID: \"8467671c20b5859c7cf6205a4823d31c\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.570141 kubelet[2337]: I0130 13:59:44.569913 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5554adb80b1ae9610807c70b0b8e16c8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-9-9df89b74d7\" (UID: \"5554adb80b1ae9610807c70b0b8e16c8\") " pod="kube-system/kube-scheduler-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.571081 kubelet[2337]: I0130 13:59:44.571046 2337 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.571494 kubelet[2337]: E0130 13:59:44.571467 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.85.159:6443/api/v1/nodes\": dial tcp 164.92.85.159:6443: connect: connection refused" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.709458 kubelet[2337]: E0130 13:59:44.709393 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:44.710512 containerd[1597]: time="2025-01-30T13:59:44.710219665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-9-9df89b74d7,Uid:b975b3094c512980958309fefeb26256,Namespace:kube-system,Attempt:0,}" Jan 30 13:59:44.712033 kubelet[2337]: E0130 13:59:44.711407 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:44.712033 kubelet[2337]: E0130 13:59:44.711761 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:44.713052 systemd-resolved[1483]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Jan 30 13:59:44.713979 containerd[1597]: time="2025-01-30T13:59:44.713824209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-9-9df89b74d7,Uid:5554adb80b1ae9610807c70b0b8e16c8,Namespace:kube-system,Attempt:0,}" Jan 30 13:59:44.714193 containerd[1597]: time="2025-01-30T13:59:44.714169422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-9-9df89b74d7,Uid:8467671c20b5859c7cf6205a4823d31c,Namespace:kube-system,Attempt:0,}" Jan 30 13:59:44.871583 kubelet[2337]: E0130 13:59:44.870870 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.85.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-9-9df89b74d7?timeout=10s\": dial tcp 164.92.85.159:6443: connect: connection refused" interval="800ms" Jan 30 13:59:44.972606 kubelet[2337]: I0130 13:59:44.972558 2337 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:44.972944 kubelet[2337]: E0130 13:59:44.972904 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.85.159:6443/api/v1/nodes\": dial tcp 164.92.85.159:6443: connect: connection refused" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:45.121295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163981499.mount: Deactivated successfully. Jan 30 13:59:45.125048 containerd[1597]: time="2025-01-30T13:59:45.124911044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:59:45.125931 containerd[1597]: time="2025-01-30T13:59:45.125765996Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:59:45.128113 containerd[1597]: time="2025-01-30T13:59:45.126517474Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:59:45.128113 containerd[1597]: time="2025-01-30T13:59:45.127268283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:59:45.128113 containerd[1597]: time="2025-01-30T13:59:45.127627104Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:59:45.128113 containerd[1597]: time="2025-01-30T13:59:45.128073389Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:59:45.128322 containerd[1597]: time="2025-01-30T13:59:45.128301710Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:59:45.131901 containerd[1597]: time="2025-01-30T13:59:45.131860365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:59:45.133328 containerd[1597]: time="2025-01-30T13:59:45.133280665Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.938705ms" Jan 30 13:59:45.137216 containerd[1597]: time="2025-01-30T13:59:45.136735450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.823373ms" Jan 30 13:59:45.139924 containerd[1597]: time="2025-01-30T13:59:45.139888631Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 425.668284ms" Jan 30 13:59:45.180002 kubelet[2337]: W0130 13:59:45.179926 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.92.85.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:45.180002 kubelet[2337]: E0130 13:59:45.179972 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://164.92.85.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:45.231606 kubelet[2337]: W0130 13:59:45.231539 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.92.85.159:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:45.231917 kubelet[2337]: E0130 13:59:45.231897 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://164.92.85.159:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:45.282890 containerd[1597]: time="2025-01-30T13:59:45.282690199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:59:45.282890 containerd[1597]: time="2025-01-30T13:59:45.282835186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:59:45.282890 containerd[1597]: time="2025-01-30T13:59:45.282850790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:45.284834 containerd[1597]: time="2025-01-30T13:59:45.284477480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:45.291566 containerd[1597]: time="2025-01-30T13:59:45.291196660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:59:45.291566 containerd[1597]: time="2025-01-30T13:59:45.291406304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:59:45.291566 containerd[1597]: time="2025-01-30T13:59:45.291478858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:45.294036 containerd[1597]: time="2025-01-30T13:59:45.293936397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:59:45.294163 containerd[1597]: time="2025-01-30T13:59:45.294133295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:59:45.294194 containerd[1597]: time="2025-01-30T13:59:45.294164867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:45.294644 containerd[1597]: time="2025-01-30T13:59:45.292055821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:45.295324 containerd[1597]: time="2025-01-30T13:59:45.295258619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:45.389125 containerd[1597]: time="2025-01-30T13:59:45.388407333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-9-9df89b74d7,Uid:8467671c20b5859c7cf6205a4823d31c,Namespace:kube-system,Attempt:0,} returns sandbox id \"480b9b2b6cd1470bca072d5429fe9540c5b3dbad7ab816026d04dc26edb10c19\"" Jan 30 13:59:45.396565 kubelet[2337]: E0130 13:59:45.396529 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:45.402496 containerd[1597]: time="2025-01-30T13:59:45.402448364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-9-9df89b74d7,Uid:b975b3094c512980958309fefeb26256,Namespace:kube-system,Attempt:0,} returns sandbox id \"660f799923b3e142f37890a17fbfb6df8116c12ae2f8a1070eb9c8f1de3700e1\"" Jan 30 13:59:45.403871 kubelet[2337]: E0130 13:59:45.403839 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:45.405582 containerd[1597]: time="2025-01-30T13:59:45.405543911Z" level=info msg="CreateContainer within sandbox \"480b9b2b6cd1470bca072d5429fe9540c5b3dbad7ab816026d04dc26edb10c19\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:59:45.408569 containerd[1597]: time="2025-01-30T13:59:45.408473655Z" level=info msg="CreateContainer within sandbox \"660f799923b3e142f37890a17fbfb6df8116c12ae2f8a1070eb9c8f1de3700e1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:59:45.414432 containerd[1597]: time="2025-01-30T13:59:45.414385780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-9-9df89b74d7,Uid:5554adb80b1ae9610807c70b0b8e16c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d46056913fb09f6981bcd6741e406cf203dd5446a237adfc369dec3b555f4c3\"" Jan 30 13:59:45.416140 kubelet[2337]: E0130 13:59:45.416029 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:45.418957 containerd[1597]: time="2025-01-30T13:59:45.418797556Z" level=info msg="CreateContainer within sandbox \"5d46056913fb09f6981bcd6741e406cf203dd5446a237adfc369dec3b555f4c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:59:45.427038 containerd[1597]: time="2025-01-30T13:59:45.426950616Z" level=info msg="CreateContainer within sandbox \"480b9b2b6cd1470bca072d5429fe9540c5b3dbad7ab816026d04dc26edb10c19\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c92f5b720f86d57a53761722389f707d1be186768f8e42f25f6e54c4e11940f8\"" Jan 30 13:59:45.430394 containerd[1597]: time="2025-01-30T13:59:45.429987552Z" level=info msg="StartContainer for \"c92f5b720f86d57a53761722389f707d1be186768f8e42f25f6e54c4e11940f8\"" Jan 30 13:59:45.431600 containerd[1597]: time="2025-01-30T13:59:45.431549787Z" level=info msg="CreateContainer within sandbox \"660f799923b3e142f37890a17fbfb6df8116c12ae2f8a1070eb9c8f1de3700e1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f74948b3792a5c7e42b0c3914c38c299252063af32f749cb383c5bc458d0ce98\"" Jan 30 13:59:45.433892 containerd[1597]: time="2025-01-30T13:59:45.433862501Z" level=info msg="StartContainer for \"f74948b3792a5c7e42b0c3914c38c299252063af32f749cb383c5bc458d0ce98\"" Jan 30 13:59:45.437910 containerd[1597]: time="2025-01-30T13:59:45.437852348Z" level=info msg="CreateContainer within sandbox \"5d46056913fb09f6981bcd6741e406cf203dd5446a237adfc369dec3b555f4c3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f826e7ac9785b2302aa59cacf399374fc743401877e32c9ed262a0d25d547c57\"" Jan 30 13:59:45.438728 containerd[1597]: time="2025-01-30T13:59:45.438691699Z" level=info msg="StartContainer for \"f826e7ac9785b2302aa59cacf399374fc743401877e32c9ed262a0d25d547c57\"" Jan 30 13:59:45.538007 kubelet[2337]: W0130 13:59:45.537079 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.92.85.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:45.538836 kubelet[2337]: E0130 13:59:45.538639 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://164.92.85.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:45.558025 containerd[1597]: time="2025-01-30T13:59:45.557787885Z" level=info msg="StartContainer for \"f74948b3792a5c7e42b0c3914c38c299252063af32f749cb383c5bc458d0ce98\" returns successfully" Jan 30 13:59:45.581022 containerd[1597]: time="2025-01-30T13:59:45.579730641Z" level=info msg="StartContainer for \"c92f5b720f86d57a53761722389f707d1be186768f8e42f25f6e54c4e11940f8\" returns successfully" Jan 30 13:59:45.589620 containerd[1597]: time="2025-01-30T13:59:45.589219655Z" level=info msg="StartContainer for \"f826e7ac9785b2302aa59cacf399374fc743401877e32c9ed262a0d25d547c57\" returns successfully" Jan 30 13:59:45.672526 kubelet[2337]: E0130 13:59:45.672372 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.92.85.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-9-9df89b74d7?timeout=10s\": dial tcp 164.92.85.159:6443: connect: connection refused" interval="1.6s" Jan 30 13:59:45.774223 
kubelet[2337]: I0130 13:59:45.774180 2337 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:45.775295 kubelet[2337]: E0130 13:59:45.774506 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://164.92.85.159:6443/api/v1/nodes\": dial tcp 164.92.85.159:6443: connect: connection refused" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:45.778943 kubelet[2337]: W0130 13:59:45.778868 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.92.85.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-9-9df89b74d7&limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:45.778943 kubelet[2337]: E0130 13:59:45.778942 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://164.92.85.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-9-9df89b74d7&limit=500&resourceVersion=0": dial tcp 164.92.85.159:6443: connect: connection refused Jan 30 13:59:46.330922 kubelet[2337]: E0130 13:59:46.330875 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:46.336294 kubelet[2337]: E0130 13:59:46.336261 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:46.342521 kubelet[2337]: E0130 13:59:46.341597 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:47.343004 kubelet[2337]: E0130 13:59:47.342968 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:47.378408 kubelet[2337]: I0130 13:59:47.378372 2337 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:47.499269 kubelet[2337]: I0130 13:59:47.497980 2337 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:47.511457 kubelet[2337]: E0130 13:59:47.510742 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:47.611559 kubelet[2337]: E0130 13:59:47.611120 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:47.711760 kubelet[2337]: E0130 13:59:47.711714 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:47.812046 kubelet[2337]: E0130 13:59:47.811969 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:47.912707 kubelet[2337]: E0130 13:59:47.912531 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:48.013276 kubelet[2337]: E0130 13:59:48.013199 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 
13:59:48.113960 kubelet[2337]: E0130 13:59:48.113905 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:48.214797 kubelet[2337]: E0130 13:59:48.214654 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:48.315828 kubelet[2337]: E0130 13:59:48.315743 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:48.416287 kubelet[2337]: E0130 13:59:48.416226 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:48.516508 kubelet[2337]: E0130 13:59:48.516361 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:48.617101 kubelet[2337]: E0130 13:59:48.617023 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:48.718149 kubelet[2337]: E0130 13:59:48.718078 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:48.818605 kubelet[2337]: E0130 13:59:48.818476 2337 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-9-9df89b74d7\" not found" Jan 30 13:59:49.244215 kubelet[2337]: I0130 13:59:49.244067 2337 apiserver.go:52] "Watching apiserver" Jan 30 13:59:49.269302 kubelet[2337]: I0130 13:59:49.269184 2337 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:59:49.282435 kubelet[2337]: W0130 13:59:49.281895 2337 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:59:49.282617 kubelet[2337]: E0130 13:59:49.282600 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:49.345808 kubelet[2337]: E0130 13:59:49.345748 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:49.666610 systemd[1]: Reloading requested from client PID 2609 ('systemctl') (unit session-7.scope)... Jan 30 13:59:49.666627 systemd[1]: Reloading... Jan 30 13:59:49.756299 zram_generator::config[2649]: No configuration found. Jan 30 13:59:49.909008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:59:49.995274 systemd[1]: Reloading finished in 328 ms. Jan 30 13:59:50.038360 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:50.050811 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:59:50.051118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:50.060628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:50.183716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:59:50.194738 (kubelet)[2709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:59:50.256481 kubelet[2709]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:59:50.257313 kubelet[2709]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:59:50.257313 kubelet[2709]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:59:50.257313 kubelet[2709]: I0130 13:59:50.256747 2709 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:59:50.265373 kubelet[2709]: I0130 13:59:50.265330 2709 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:59:50.265373 kubelet[2709]: I0130 13:59:50.265367 2709 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:59:50.265686 kubelet[2709]: I0130 13:59:50.265665 2709 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:59:50.269373 kubelet[2709]: I0130 13:59:50.269332 2709 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:59:50.270958 kubelet[2709]: I0130 13:59:50.270741 2709 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:59:50.282068 kubelet[2709]: I0130 13:59:50.282031 2709 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:59:50.282716 kubelet[2709]: I0130 13:59:50.282661 2709 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:59:50.282947 kubelet[2709]: I0130 13:59:50.282714 2709 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-9-9df89b74d7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:59:50.283091 kubelet[2709]: I0130 13:59:50.282966 2709 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:59:50.283091 kubelet[2709]: I0130 13:59:50.282982 2709 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:59:50.283091 kubelet[2709]: I0130 13:59:50.283037 2709 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:59:50.283272 kubelet[2709]: I0130 13:59:50.283190 2709 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:59:50.283711 kubelet[2709]: I0130 13:59:50.283208 2709 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:59:50.283772 kubelet[2709]: I0130 13:59:50.283740 2709 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:59:50.283772 kubelet[2709]: I0130 13:59:50.283767 2709 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:59:50.285963 kubelet[2709]: I0130 13:59:50.285534 2709 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:59:50.285963 kubelet[2709]: I0130 13:59:50.285724 2709 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:59:50.286417 kubelet[2709]: I0130 13:59:50.286394 2709 server.go:1264] "Started kubelet" Jan 30 13:59:50.289584 kubelet[2709]: I0130 13:59:50.288588 2709 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:59:50.292944 kubelet[2709]: I0130 13:59:50.292504 2709 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:59:50.296022 kubelet[2709]: I0130 13:59:50.295996 2709 server.go:455] "Adding 
debug handlers to kubelet server" Jan 30 13:59:50.297358 kubelet[2709]: I0130 13:59:50.297205 2709 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:59:50.299944 kubelet[2709]: I0130 13:59:50.298360 2709 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:59:50.302463 kubelet[2709]: I0130 13:59:50.302442 2709 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:59:50.309961 kubelet[2709]: I0130 13:59:50.309921 2709 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:59:50.310329 kubelet[2709]: I0130 13:59:50.310317 2709 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:59:50.314704 kubelet[2709]: I0130 13:59:50.314659 2709 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:59:50.314806 kubelet[2709]: I0130 13:59:50.314751 2709 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:59:50.320518 kubelet[2709]: E0130 13:59:50.320479 2709 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:59:50.322614 kubelet[2709]: I0130 13:59:50.322520 2709 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:59:50.324710 kubelet[2709]: I0130 13:59:50.324553 2709 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:59:50.327460 kubelet[2709]: I0130 13:59:50.327092 2709 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:59:50.327460 kubelet[2709]: I0130 13:59:50.327129 2709 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:59:50.327460 kubelet[2709]: I0130 13:59:50.327152 2709 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:59:50.327460 kubelet[2709]: E0130 13:59:50.327199 2709 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:59:50.406633 kubelet[2709]: I0130 13:59:50.406598 2709 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.412131 kubelet[2709]: I0130 13:59:50.411663 2709 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:59:50.412131 kubelet[2709]: I0130 13:59:50.411689 2709 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:59:50.412131 kubelet[2709]: I0130 13:59:50.411715 2709 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:59:50.412131 kubelet[2709]: I0130 13:59:50.411937 2709 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:59:50.412131 kubelet[2709]: I0130 13:59:50.411957 2709 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:59:50.412131 kubelet[2709]: I0130 13:59:50.412010 2709 policy_none.go:49] "None policy: Start" Jan 30 13:59:50.414735 kubelet[2709]: I0130 13:59:50.413308 2709 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:59:50.414735 kubelet[2709]: I0130 13:59:50.413341 2709 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:59:50.414735 kubelet[2709]: I0130 13:59:50.413523 2709 state_mem.go:75] "Updated machine memory state" Jan 30 13:59:50.417329 kubelet[2709]: I0130 13:59:50.417146 2709 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:59:50.417425 kubelet[2709]: I0130 13:59:50.417386 2709 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:59:50.418299 kubelet[2709]: I0130 13:59:50.417507 2709 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:59:50.423020 kubelet[2709]: I0130 13:59:50.422973 2709 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.424124 kubelet[2709]: I0130 13:59:50.423281 2709 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.428768 kubelet[2709]: I0130 13:59:50.427442 2709 topology_manager.go:215] "Topology Admit Handler" podUID="5554adb80b1ae9610807c70b0b8e16c8" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.428768 kubelet[2709]: I0130 13:59:50.427968 2709 topology_manager.go:215] "Topology Admit Handler" podUID="b975b3094c512980958309fefeb26256" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.428768 kubelet[2709]: I0130 13:59:50.428029 2709 topology_manager.go:215] "Topology Admit Handler" podUID="8467671c20b5859c7cf6205a4823d31c" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.442453 kubelet[2709]: W0130 13:59:50.442363 2709 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:59:50.446499 kubelet[2709]: W0130 13:59:50.446208 2709 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:59:50.447978 kubelet[2709]: W0130 13:59:50.447763 2709 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:59:50.447978 kubelet[2709]: E0130 13:59:50.447847 2709 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-9-9df89b74d7\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.613851 kubelet[2709]: I0130 13:59:50.613397 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5554adb80b1ae9610807c70b0b8e16c8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-9-9df89b74d7\" (UID: \"5554adb80b1ae9610807c70b0b8e16c8\") " pod="kube-system/kube-scheduler-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.615493 kubelet[2709]: I0130 13:59:50.614130 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b975b3094c512980958309fefeb26256-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-9-9df89b74d7\" (UID: \"b975b3094c512980958309fefeb26256\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.615493 kubelet[2709]: I0130 13:59:50.614187 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b975b3094c512980958309fefeb26256-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-9-9df89b74d7\" (UID: \"b975b3094c512980958309fefeb26256\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.615493 kubelet[2709]: I0130 13:59:50.614220 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8467671c20b5859c7cf6205a4823d31c-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-9-9df89b74d7\" (UID: \"8467671c20b5859c7cf6205a4823d31c\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.615493 kubelet[2709]: I0130 13:59:50.614274 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8467671c20b5859c7cf6205a4823d31c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-9-9df89b74d7\" (UID: \"8467671c20b5859c7cf6205a4823d31c\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.615806 kubelet[2709]: I0130 13:59:50.615496 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b975b3094c512980958309fefeb26256-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-9-9df89b74d7\" (UID: \"b975b3094c512980958309fefeb26256\") " pod="kube-system/kube-apiserver-ci-4081.3.0-9-9df89b74d7" Jan 30 13:59:50.615806 kubelet[2709]: I0130 13:59:50.615563 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8467671c20b5859c7cf6205a4823d31c-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-9-9df89b74d7\" (UID: \"8467671c20b5859c7cf6205a4823d31c\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.0-9-9df89b74d7"
Jan 30 13:59:50.615806 kubelet[2709]: I0130 13:59:50.615590 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8467671c20b5859c7cf6205a4823d31c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-9-9df89b74d7\" (UID: \"8467671c20b5859c7cf6205a4823d31c\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-9df89b74d7"
Jan 30 13:59:50.615806 kubelet[2709]: I0130 13:59:50.615639 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8467671c20b5859c7cf6205a4823d31c-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-9-9df89b74d7\" (UID: \"8467671c20b5859c7cf6205a4823d31c\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-9-9df89b74d7"
Jan 30 13:59:50.680061 sudo[2739]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 30 13:59:50.680502 sudo[2739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 30 13:59:50.743873 kubelet[2709]: E0130 13:59:50.743831 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:59:50.748204 kubelet[2709]: E0130 13:59:50.748140 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:59:50.750308 kubelet[2709]: E0130 13:59:50.749949 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:59:51.285732 sudo[2739]: pam_unix(sudo:session): session closed for user root
Jan 30 13:59:51.289020 kubelet[2709]: I0130 13:59:51.287149 2709 apiserver.go:52] "Watching apiserver"
Jan 30 13:59:51.310909 kubelet[2709]: I0130 13:59:51.310747 2709 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:59:51.369853 kubelet[2709]: E0130 13:59:51.369815 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:59:51.372910 kubelet[2709]: E0130 13:59:51.372589 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:59:51.373669 kubelet[2709]: E0130 13:59:51.373631 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:59:51.404907 kubelet[2709]: I0130 13:59:51.404821 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-9-9df89b74d7" podStartSLOduration=1.404764971 podStartE2EDuration="1.404764971s" podCreationTimestamp="2025-01-30 13:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:59:51.402610312 +0000 UTC m=+1.201103700" watchObservedRunningTime="2025-01-30 13:59:51.404764971 +0000 UTC m=+1.203258327"
Jan 30 13:59:51.438337 kubelet[2709]: I0130 13:59:51.438276 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-9-9df89b74d7" podStartSLOduration=1.43825673 podStartE2EDuration="1.43825673s" podCreationTimestamp="2025-01-30 13:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:59:51.42037799 +0000 UTC m=+1.218871350" watchObservedRunningTime="2025-01-30 13:59:51.43825673 +0000 UTC m=+1.236750097"
Jan 30 13:59:51.438337 kubelet[2709]: I0130 13:59:51.438345 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-9-9df89b74d7" podStartSLOduration=2.438342165 podStartE2EDuration="2.438342165s" podCreationTimestamp="2025-01-30 13:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:59:51.436418428 +0000 UTC m=+1.234911784" watchObservedRunningTime="2025-01-30 13:59:51.438342165 +0000 UTC m=+1.236835533"
Jan 30 13:59:52.370030 kubelet[2709]: E0130 13:59:52.369961 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:59:52.553571 kubelet[2709]: E0130 13:59:52.553447 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:59:52.915116 sudo[1794]: pam_unix(sudo:session): session closed for user root
Jan 30 13:59:52.919512 sshd[1790]: pam_unix(sshd:session): session closed for user core
Jan 30 13:59:52.926123 systemd[1]: sshd@6-164.92.85.159:22-147.75.109.163:38952.service: Deactivated successfully.
Jan 30 13:59:52.930444 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:59:52.931092 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:59:52.934677 systemd-logind[1563]: Removed session 7.
Jan 30 13:59:53.742121 kubelet[2709]: E0130 13:59:53.742076 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:59:56.832943 kubelet[2709]: E0130 13:59:56.832875 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 13:59:57.378520 kubelet[2709]: E0130 13:59:57.378475 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:02.572222 kubelet[2709]: E0130 14:00:02.571523 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:03.758346 kubelet[2709]: E0130 14:00:03.758251 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:04.421399 kubelet[2709]: E0130 14:00:04.421360 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:05.269475 kubelet[2709]: I0130 14:00:05.269381 2709 topology_manager.go:215] "Topology Admit Handler" podUID="acd30e0c-3c62-4798-a279-8398ac8e4373" podNamespace="kube-system" podName="cilium-operator-599987898-96sgh"
Jan 30 14:00:05.316408 kubelet[2709]: I0130 14:00:05.315851 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll9kc\" (UniqueName: \"kubernetes.io/projected/acd30e0c-3c62-4798-a279-8398ac8e4373-kube-api-access-ll9kc\") pod \"cilium-operator-599987898-96sgh\" (UID: \"acd30e0c-3c62-4798-a279-8398ac8e4373\") " pod="kube-system/cilium-operator-599987898-96sgh"
Jan 30 14:00:05.316635 kubelet[2709]: I0130 14:00:05.316464 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acd30e0c-3c62-4798-a279-8398ac8e4373-cilium-config-path\") pod \"cilium-operator-599987898-96sgh\" (UID: \"acd30e0c-3c62-4798-a279-8398ac8e4373\") " pod="kube-system/cilium-operator-599987898-96sgh"
Jan 30 14:00:05.338591 kubelet[2709]: I0130 14:00:05.337851 2709 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 14:00:05.339704 containerd[1597]: time="2025-01-30T14:00:05.339635022Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 14:00:05.342199 kubelet[2709]: I0130 14:00:05.340952 2709 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 14:00:05.463075 kubelet[2709]: I0130 14:00:05.462798 2709 topology_manager.go:215] "Topology Admit Handler" podUID="0b59539c-1f82-49a7-90d6-c5aa6f53206f" podNamespace="kube-system" podName="cilium-7gv6p"
Jan 30 14:00:05.491243 kubelet[2709]: I0130 14:00:05.490354 2709 topology_manager.go:215] "Topology Admit Handler" podUID="aa2b7193-5711-4fe4-8fc0-446c03089e49" podNamespace="kube-system" podName="kube-proxy-22clk"
Jan 30 14:00:05.519897 kubelet[2709]: I0130 14:00:05.517872 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b59539c-1f82-49a7-90d6-c5aa6f53206f-clustermesh-secrets\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.519897 kubelet[2709]: I0130 14:00:05.517934 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-host-proc-sys-net\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.519897 kubelet[2709]: I0130 14:00:05.517969 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm6kb\" (UniqueName: \"kubernetes.io/projected/aa2b7193-5711-4fe4-8fc0-446c03089e49-kube-api-access-xm6kb\") pod \"kube-proxy-22clk\" (UID: \"aa2b7193-5711-4fe4-8fc0-446c03089e49\") " pod="kube-system/kube-proxy-22clk"
Jan 30 14:00:05.519897 kubelet[2709]: I0130 14:00:05.518004 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa2b7193-5711-4fe4-8fc0-446c03089e49-kube-proxy\") pod \"kube-proxy-22clk\" (UID: \"aa2b7193-5711-4fe4-8fc0-446c03089e49\") " pod="kube-system/kube-proxy-22clk"
Jan 30 14:00:05.519897 kubelet[2709]: I0130 14:00:05.518033 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cni-path\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.520370 kubelet[2709]: I0130 14:00:05.518061 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa2b7193-5711-4fe4-8fc0-446c03089e49-lib-modules\") pod \"kube-proxy-22clk\" (UID: \"aa2b7193-5711-4fe4-8fc0-446c03089e49\") " pod="kube-system/kube-proxy-22clk"
Jan 30 14:00:05.520370 kubelet[2709]: I0130 14:00:05.518086 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-run\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.520370 kubelet[2709]: I0130 14:00:05.518113 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-bpf-maps\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.520370 kubelet[2709]: I0130 14:00:05.518139 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa2b7193-5711-4fe4-8fc0-446c03089e49-xtables-lock\") pod \"kube-proxy-22clk\" (UID: \"aa2b7193-5711-4fe4-8fc0-446c03089e49\") " pod="kube-system/kube-proxy-22clk"
Jan 30 14:00:05.520370 kubelet[2709]: I0130 14:00:05.518198 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-cgroup\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.520370 kubelet[2709]: I0130 14:00:05.518262 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-xtables-lock\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.520647 kubelet[2709]: I0130 14:00:05.518292 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-config-path\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.520647 kubelet[2709]: I0130 14:00:05.518315 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-host-proc-sys-kernel\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.520647 kubelet[2709]: I0130 14:00:05.518342 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-hostproc\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.520647 kubelet[2709]: I0130 14:00:05.518364 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b59539c-1f82-49a7-90d6-c5aa6f53206f-hubble-tls\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.520647 kubelet[2709]: I0130 14:00:05.518385 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hkw\" (UniqueName: \"kubernetes.io/projected/0b59539c-1f82-49a7-90d6-c5aa6f53206f-kube-api-access-w6hkw\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.520647 kubelet[2709]: I0130 14:00:05.518413 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-etc-cni-netd\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.525288 kubelet[2709]: I0130 14:00:05.518438 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-lib-modules\") pod \"cilium-7gv6p\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " pod="kube-system/cilium-7gv6p"
Jan 30 14:00:05.587299 kubelet[2709]: E0130 14:00:05.585904 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:05.588730 containerd[1597]: time="2025-01-30T14:00:05.588651316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-96sgh,Uid:acd30e0c-3c62-4798-a279-8398ac8e4373,Namespace:kube-system,Attempt:0,}"
Jan 30 14:00:05.704377 containerd[1597]: time="2025-01-30T14:00:05.703415650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:00:05.706444 containerd[1597]: time="2025-01-30T14:00:05.706313537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:00:05.706738 containerd[1597]: time="2025-01-30T14:00:05.706510421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:00:05.708367 containerd[1597]: time="2025-01-30T14:00:05.706802161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:00:05.770431 kubelet[2709]: E0130 14:00:05.769162 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:05.770604 containerd[1597]: time="2025-01-30T14:00:05.770534165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7gv6p,Uid:0b59539c-1f82-49a7-90d6-c5aa6f53206f,Namespace:kube-system,Attempt:0,}"
Jan 30 14:00:05.804707 containerd[1597]: time="2025-01-30T14:00:05.804644074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-96sgh,Uid:acd30e0c-3c62-4798-a279-8398ac8e4373,Namespace:kube-system,Attempt:0,} returns sandbox id \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\""
Jan 30 14:00:05.806259 kubelet[2709]: E0130 14:00:05.805926 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:05.808192 kubelet[2709]: E0130 14:00:05.808092 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:05.810729 containerd[1597]: time="2025-01-30T14:00:05.810617024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-22clk,Uid:aa2b7193-5711-4fe4-8fc0-446c03089e49,Namespace:kube-system,Attempt:0,}"
Jan 30 14:00:05.818181 containerd[1597]: time="2025-01-30T14:00:05.817895599Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 30 14:00:05.844072 containerd[1597]: time="2025-01-30T14:00:05.840915385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:00:05.844072 containerd[1597]: time="2025-01-30T14:00:05.843707757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:00:05.844072 containerd[1597]: time="2025-01-30T14:00:05.843734147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:00:05.844072 containerd[1597]: time="2025-01-30T14:00:05.843891461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:00:05.857846 containerd[1597]: time="2025-01-30T14:00:05.857157718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:00:05.857846 containerd[1597]: time="2025-01-30T14:00:05.857267082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:00:05.857846 containerd[1597]: time="2025-01-30T14:00:05.857294068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:00:05.857846 containerd[1597]: time="2025-01-30T14:00:05.857618040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:00:05.936660 containerd[1597]: time="2025-01-30T14:00:05.936524501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7gv6p,Uid:0b59539c-1f82-49a7-90d6-c5aa6f53206f,Namespace:kube-system,Attempt:0,} returns sandbox id \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\""
Jan 30 14:00:05.939137 kubelet[2709]: E0130 14:00:05.938765 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:05.950054 containerd[1597]: time="2025-01-30T14:00:05.949894380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-22clk,Uid:aa2b7193-5711-4fe4-8fc0-446c03089e49,Namespace:kube-system,Attempt:0,} returns sandbox id \"89bcc2b21746772858bf62aace3f5c3f4401d63aecf584a28439a3ecfb64cdfb\""
Jan 30 14:00:05.951116 kubelet[2709]: E0130 14:00:05.951073 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:05.969802 containerd[1597]: time="2025-01-30T14:00:05.969615574Z" level=info msg="CreateContainer within sandbox \"89bcc2b21746772858bf62aace3f5c3f4401d63aecf584a28439a3ecfb64cdfb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 14:00:05.994558 containerd[1597]: time="2025-01-30T14:00:05.994485346Z" level=info msg="CreateContainer within sandbox \"89bcc2b21746772858bf62aace3f5c3f4401d63aecf584a28439a3ecfb64cdfb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f6a1fe6660179ed6557f40b1217272d25bd1d2e14cbfe5a5aa3c4b5fd7d7ed6\""
Jan 30 14:00:05.997326 containerd[1597]: time="2025-01-30T14:00:05.995632668Z" level=info msg="StartContainer for \"1f6a1fe6660179ed6557f40b1217272d25bd1d2e14cbfe5a5aa3c4b5fd7d7ed6\""
Jan 30 14:00:06.097789 containerd[1597]: time="2025-01-30T14:00:06.096647816Z" level=info msg="StartContainer for \"1f6a1fe6660179ed6557f40b1217272d25bd1d2e14cbfe5a5aa3c4b5fd7d7ed6\" returns successfully"
Jan 30 14:00:06.432174 kubelet[2709]: E0130 14:00:06.432122 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:06.463052 kubelet[2709]: I0130 14:00:06.460047 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-22clk" podStartSLOduration=1.460019658 podStartE2EDuration="1.460019658s" podCreationTimestamp="2025-01-30 14:00:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:00:06.459708671 +0000 UTC m=+16.258202029" watchObservedRunningTime="2025-01-30 14:00:06.460019658 +0000 UTC m=+16.258513023"
Jan 30 14:00:07.385601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692917790.mount: Deactivated successfully.
Jan 30 14:00:08.051452 update_engine[1567]: I20250130 14:00:08.050315 1567 update_attempter.cc:509] Updating boot flags...
Jan 30 14:00:08.129273 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3082)
Jan 30 14:00:08.253273 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3082)
Jan 30 14:00:08.724697 containerd[1597]: time="2025-01-30T14:00:08.724599565Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 30 14:00:08.727291 containerd[1597]: time="2025-01-30T14:00:08.727197479Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:00:08.729276 containerd[1597]: time="2025-01-30T14:00:08.728752238Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.910793644s"
Jan 30 14:00:08.729276 containerd[1597]: time="2025-01-30T14:00:08.728808326Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 30 14:00:08.729572 containerd[1597]: time="2025-01-30T14:00:08.729534006Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:00:08.731044 containerd[1597]: time="2025-01-30T14:00:08.731007889Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 30 14:00:08.738897 containerd[1597]: time="2025-01-30T14:00:08.738628769Z" level=info msg="CreateContainer within sandbox \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 30 14:00:08.764813 containerd[1597]: time="2025-01-30T14:00:08.764620053Z" level=info msg="CreateContainer within sandbox \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e\""
Jan 30 14:00:08.767657 containerd[1597]: time="2025-01-30T14:00:08.767134713Z" level=info msg="StartContainer for \"66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e\""
Jan 30 14:00:08.810335 systemd[1]: run-containerd-runc-k8s.io-66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e-runc.sKLeOT.mount: Deactivated successfully.
Jan 30 14:00:08.847443 containerd[1597]: time="2025-01-30T14:00:08.847354143Z" level=info msg="StartContainer for \"66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e\" returns successfully"
Jan 30 14:00:09.471117 kubelet[2709]: E0130 14:00:09.471061 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:09.633581 kubelet[2709]: I0130 14:00:09.633482 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-96sgh" podStartSLOduration=1.710604507 podStartE2EDuration="4.633454703s" podCreationTimestamp="2025-01-30 14:00:05 +0000 UTC" firstStartedPulling="2025-01-30 14:00:05.807939217 +0000 UTC m=+15.606432552" lastFinishedPulling="2025-01-30 14:00:08.730789396 +0000 UTC m=+18.529282748" observedRunningTime="2025-01-30 14:00:09.630901607 +0000 UTC m=+19.429394964" watchObservedRunningTime="2025-01-30 14:00:09.633454703 +0000 UTC m=+19.431948074"
Jan 30 14:00:10.473843 kubelet[2709]: E0130 14:00:10.472928 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:13.701562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2075156985.mount: Deactivated successfully.
Jan 30 14:00:16.036990 containerd[1597]: time="2025-01-30T14:00:16.036818519Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:00:16.037907 containerd[1597]: time="2025-01-30T14:00:16.037386515Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jan 30 14:00:16.039019 containerd[1597]: time="2025-01-30T14:00:16.038976408Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:00:16.041711 containerd[1597]: time="2025-01-30T14:00:16.041648813Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.310601099s"
Jan 30 14:00:16.041711 containerd[1597]: time="2025-01-30T14:00:16.041694726Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 30 14:00:16.045307 containerd[1597]: time="2025-01-30T14:00:16.045266161Z" level=info msg="CreateContainer within sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 14:00:16.139273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248886557.mount: Deactivated successfully.
Jan 30 14:00:16.145903 containerd[1597]: time="2025-01-30T14:00:16.145857448Z" level=info msg="CreateContainer within sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c784453d9e328e2924efa80f368fa4a024108910a00c4c507af852d4e1b4e3c6\""
Jan 30 14:00:16.147910 containerd[1597]: time="2025-01-30T14:00:16.146945625Z" level=info msg="StartContainer for \"c784453d9e328e2924efa80f368fa4a024108910a00c4c507af852d4e1b4e3c6\""
Jan 30 14:00:16.357296 containerd[1597]: time="2025-01-30T14:00:16.357110944Z" level=info msg="StartContainer for \"c784453d9e328e2924efa80f368fa4a024108910a00c4c507af852d4e1b4e3c6\" returns successfully"
Jan 30 14:00:16.482492 containerd[1597]: time="2025-01-30T14:00:16.462790658Z" level=info msg="shim disconnected" id=c784453d9e328e2924efa80f368fa4a024108910a00c4c507af852d4e1b4e3c6 namespace=k8s.io
Jan 30 14:00:16.482732 containerd[1597]: time="2025-01-30T14:00:16.482513429Z" level=warning msg="cleaning up after shim disconnected" id=c784453d9e328e2924efa80f368fa4a024108910a00c4c507af852d4e1b4e3c6 namespace=k8s.io
Jan 30 14:00:16.482732 containerd[1597]: time="2025-01-30T14:00:16.482533391Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:00:16.492628 kubelet[2709]: E0130 14:00:16.492519 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:17.132800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c784453d9e328e2924efa80f368fa4a024108910a00c4c507af852d4e1b4e3c6-rootfs.mount: Deactivated successfully.
Jan 30 14:00:17.492656 kubelet[2709]: E0130 14:00:17.492621 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:17.495404 containerd[1597]: time="2025-01-30T14:00:17.495309768Z" level=info msg="CreateContainer within sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 14:00:17.529427 containerd[1597]: time="2025-01-30T14:00:17.529332410Z" level=info msg="CreateContainer within sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aa0876ec9222922ecaf0255093882ebc7a57c95531a5ed9b7544cba3bb34a780\""
Jan 30 14:00:17.532216 containerd[1597]: time="2025-01-30T14:00:17.532125791Z" level=info msg="StartContainer for \"aa0876ec9222922ecaf0255093882ebc7a57c95531a5ed9b7544cba3bb34a780\""
Jan 30 14:00:17.604413 containerd[1597]: time="2025-01-30T14:00:17.604348569Z" level=info msg="StartContainer for \"aa0876ec9222922ecaf0255093882ebc7a57c95531a5ed9b7544cba3bb34a780\" returns successfully"
Jan 30 14:00:17.618888 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 14:00:17.619770 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:00:17.619853 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:00:17.627835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:00:17.656191 containerd[1597]: time="2025-01-30T14:00:17.656117665Z" level=info msg="shim disconnected" id=aa0876ec9222922ecaf0255093882ebc7a57c95531a5ed9b7544cba3bb34a780 namespace=k8s.io
Jan 30 14:00:17.656498 containerd[1597]: time="2025-01-30T14:00:17.656466732Z" level=warning msg="cleaning up after shim disconnected" id=aa0876ec9222922ecaf0255093882ebc7a57c95531a5ed9b7544cba3bb34a780 namespace=k8s.io
Jan 30 14:00:17.656596 containerd[1597]: time="2025-01-30T14:00:17.656581467Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:00:17.668447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:00:18.133071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa0876ec9222922ecaf0255093882ebc7a57c95531a5ed9b7544cba3bb34a780-rootfs.mount: Deactivated successfully.
Jan 30 14:00:18.496713 kubelet[2709]: E0130 14:00:18.496647 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:18.510788 containerd[1597]: time="2025-01-30T14:00:18.510713501Z" level=info msg="CreateContainer within sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 14:00:18.534494 containerd[1597]: time="2025-01-30T14:00:18.533226278Z" level=info msg="CreateContainer within sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fbbbb2881896baac1be68dc82652e906447e0cbbe0918a74dac17c3f7cd8ed94\""
Jan 30 14:00:18.535216 containerd[1597]: time="2025-01-30T14:00:18.535092787Z" level=info msg="StartContainer for \"fbbbb2881896baac1be68dc82652e906447e0cbbe0918a74dac17c3f7cd8ed94\""
Jan 30 14:00:18.622673 containerd[1597]: time="2025-01-30T14:00:18.622521579Z" level=info msg="StartContainer for \"fbbbb2881896baac1be68dc82652e906447e0cbbe0918a74dac17c3f7cd8ed94\" returns successfully"
Jan 30 14:00:18.658569 containerd[1597]: time="2025-01-30T14:00:18.658325864Z" level=info msg="shim disconnected" id=fbbbb2881896baac1be68dc82652e906447e0cbbe0918a74dac17c3f7cd8ed94 namespace=k8s.io
Jan 30 14:00:18.658569 containerd[1597]: time="2025-01-30T14:00:18.658396540Z" level=warning msg="cleaning up after shim disconnected" id=fbbbb2881896baac1be68dc82652e906447e0cbbe0918a74dac17c3f7cd8ed94 namespace=k8s.io
Jan 30 14:00:18.658569 containerd[1597]: time="2025-01-30T14:00:18.658409062Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:00:19.132712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbbbb2881896baac1be68dc82652e906447e0cbbe0918a74dac17c3f7cd8ed94-rootfs.mount: Deactivated successfully.
Jan 30 14:00:19.501993 kubelet[2709]: E0130 14:00:19.501804 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:19.507057 containerd[1597]: time="2025-01-30T14:00:19.505811539Z" level=info msg="CreateContainer within sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 14:00:19.527288 containerd[1597]: time="2025-01-30T14:00:19.526920843Z" level=info msg="CreateContainer within sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4bb76d00f79ee98ed206e1d47a38ad7fbd2534ad26321b320dbefb3357efdbe5\""
Jan 30 14:00:19.532805 containerd[1597]: time="2025-01-30T14:00:19.529974520Z" level=info msg="StartContainer for \"4bb76d00f79ee98ed206e1d47a38ad7fbd2534ad26321b320dbefb3357efdbe5\""
Jan 30 14:00:19.612916 containerd[1597]: time="2025-01-30T14:00:19.612856956Z" level=info msg="StartContainer for \"4bb76d00f79ee98ed206e1d47a38ad7fbd2534ad26321b320dbefb3357efdbe5\" returns successfully"
Jan 30 14:00:19.641317 containerd[1597]: time="2025-01-30T14:00:19.640982882Z" level=info msg="shim disconnected" id=4bb76d00f79ee98ed206e1d47a38ad7fbd2534ad26321b320dbefb3357efdbe5 namespace=k8s.io
Jan 30 14:00:19.641317 containerd[1597]: time="2025-01-30T14:00:19.641086172Z" level=warning msg="cleaning up after shim disconnected" id=4bb76d00f79ee98ed206e1d47a38ad7fbd2534ad26321b320dbefb3357efdbe5 namespace=k8s.io
Jan 30 14:00:19.641317 containerd[1597]: time="2025-01-30T14:00:19.641098998Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:00:20.133590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bb76d00f79ee98ed206e1d47a38ad7fbd2534ad26321b320dbefb3357efdbe5-rootfs.mount: Deactivated successfully.
Jan 30 14:00:20.505778 kubelet[2709]: E0130 14:00:20.505728 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:20.512696 containerd[1597]: time="2025-01-30T14:00:20.511134037Z" level=info msg="CreateContainer within sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 14:00:20.551405 containerd[1597]: time="2025-01-30T14:00:20.551360019Z" level=info msg="CreateContainer within sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718\""
Jan 30 14:00:20.554061 containerd[1597]: time="2025-01-30T14:00:20.553417708Z" level=info msg="StartContainer for \"f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718\""
Jan 30 14:00:20.625506 containerd[1597]: time="2025-01-30T14:00:20.625446206Z" level=info msg="StartContainer for \"f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718\" returns successfully"
Jan 30 14:00:20.864748 kubelet[2709]: I0130 14:00:20.864610 2709 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 30 14:00:20.943367 kubelet[2709]: I0130 14:00:20.941196 2709 topology_manager.go:215] "Topology Admit Handler" podUID="fd459557-3582-4316-a7c1-9b3a8c627c51" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b22xb"
Jan 30 14:00:20.943367 kubelet[2709]: I0130 14:00:20.941541 2709 topology_manager.go:215] "Topology Admit Handler" podUID="f15676f9-64ac-4845-8496-94230fe08576" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xklv9"
Jan 30 14:00:21.059760 kubelet[2709]: I0130 14:00:21.059692 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f15676f9-64ac-4845-8496-94230fe08576-config-volume\") pod \"coredns-7db6d8ff4d-xklv9\" (UID: \"f15676f9-64ac-4845-8496-94230fe08576\") " pod="kube-system/coredns-7db6d8ff4d-xklv9"
Jan 30 14:00:21.060332 kubelet[2709]: I0130 14:00:21.060282 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s456\" (UniqueName: \"kubernetes.io/projected/f15676f9-64ac-4845-8496-94230fe08576-kube-api-access-5s456\") pod \"coredns-7db6d8ff4d-xklv9\" (UID: \"f15676f9-64ac-4845-8496-94230fe08576\") " pod="kube-system/coredns-7db6d8ff4d-xklv9"
Jan 30 14:00:21.060676 kubelet[2709]: I0130 14:00:21.060503 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zkw\" (UniqueName: \"kubernetes.io/projected/fd459557-3582-4316-a7c1-9b3a8c627c51-kube-api-access-l9zkw\") pod \"coredns-7db6d8ff4d-b22xb\" (UID: \"fd459557-3582-4316-a7c1-9b3a8c627c51\") " pod="kube-system/coredns-7db6d8ff4d-b22xb"
Jan 30 14:00:21.060676 kubelet[2709]: I0130 14:00:21.060627 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd459557-3582-4316-a7c1-9b3a8c627c51-config-volume\") pod \"coredns-7db6d8ff4d-b22xb\" (UID: \"fd459557-3582-4316-a7c1-9b3a8c627c51\") " pod="kube-system/coredns-7db6d8ff4d-b22xb"
Jan 30 14:00:21.262707 kubelet[2709]: E0130 14:00:21.262571 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:21.262707 kubelet[2709]: E0130 14:00:21.262675 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:21.267784 containerd[1597]: time="2025-01-30T14:00:21.267533547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xklv9,Uid:f15676f9-64ac-4845-8496-94230fe08576,Namespace:kube-system,Attempt:0,}"
Jan 30 14:00:21.278751 containerd[1597]: time="2025-01-30T14:00:21.278673351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b22xb,Uid:fd459557-3582-4316-a7c1-9b3a8c627c51,Namespace:kube-system,Attempt:0,}"
Jan 30 14:00:21.514461 kubelet[2709]: E0130 14:00:21.514139 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:21.535795 kubelet[2709]: I0130 14:00:21.535695 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7gv6p" podStartSLOduration=6.434561163 podStartE2EDuration="16.5356729s" podCreationTimestamp="2025-01-30 14:00:05 +0000 UTC" firstStartedPulling="2025-01-30 14:00:05.941334114 +0000 UTC m=+15.739827459" lastFinishedPulling="2025-01-30 14:00:16.04244584 +0000 UTC m=+25.840939196" observedRunningTime="2025-01-30 14:00:21.535225081 +0000 UTC m=+31.333718438" watchObservedRunningTime="2025-01-30 14:00:21.5356729 +0000 UTC m=+31.334166273"
Jan 30 14:00:22.514113 kubelet[2709]: E0130 14:00:22.514075 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:22.880584 systemd-networkd[1226]: cilium_host: Link UP
Jan 30 14:00:22.880751 systemd-networkd[1226]: cilium_net: Link UP
Jan 30 14:00:22.880757 systemd-networkd[1226]: cilium_net: Gained carrier
Jan 30 14:00:22.880948 systemd-networkd[1226]: cilium_host: Gained carrier
Jan 30 14:00:23.041120 systemd-networkd[1226]: cilium_vxlan: Link UP
Jan 30 14:00:23.041131 systemd-networkd[1226]: cilium_vxlan: Gained carrier
Jan 30 14:00:23.218467 systemd-networkd[1226]: cilium_net: Gained IPv6LL
Jan 30 14:00:23.346459 systemd-networkd[1226]: cilium_host: Gained IPv6LL
Jan 30 14:00:23.425781 kernel: NET: Registered PF_ALG protocol family
Jan 30 14:00:23.518029 kubelet[2709]: E0130 14:00:23.517780 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:24.269313 systemd-networkd[1226]: lxc_health: Link UP
Jan 30 14:00:24.274271 systemd-networkd[1226]: lxc_health: Gained carrier
Jan 30 14:00:24.876952 systemd-networkd[1226]: lxc259ac39d0ee0: Link UP
Jan 30 14:00:24.878719 systemd-networkd[1226]: lxcf3a73b1e6494: Link UP
Jan 30 14:00:24.885267 kernel: eth0: renamed from tmp6dd6a
Jan 30 14:00:24.888682 kernel: eth0: renamed from tmpb5a29
Jan 30 14:00:24.896081 systemd-networkd[1226]: lxcf3a73b1e6494: Gained carrier
Jan 30 14:00:24.899863 systemd-networkd[1226]: lxc259ac39d0ee0: Gained carrier
Jan 30 14:00:24.994541 systemd-networkd[1226]: cilium_vxlan: Gained IPv6LL
Jan 30 14:00:25.634495 systemd-networkd[1226]: lxc_health: Gained IPv6LL
Jan 30 14:00:25.773273 kubelet[2709]: E0130 14:00:25.771955 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:26.020191 systemd-networkd[1226]: lxc259ac39d0ee0: Gained IPv6LL
Jan 30 14:00:26.526912 kubelet[2709]: E0130 14:00:26.526866 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:26.786439 systemd-networkd[1226]: lxcf3a73b1e6494: Gained IPv6LL
Jan 30 14:00:27.529363 kubelet[2709]: E0130 14:00:27.529116 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:29.665056 containerd[1597]: time="2025-01-30T14:00:29.664946701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:00:29.665056 containerd[1597]: time="2025-01-30T14:00:29.665001082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:00:29.665056 containerd[1597]: time="2025-01-30T14:00:29.665011788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:00:29.669472 containerd[1597]: time="2025-01-30T14:00:29.665104963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:00:29.730979 containerd[1597]: time="2025-01-30T14:00:29.729353780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:00:29.730979 containerd[1597]: time="2025-01-30T14:00:29.729439318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:00:29.730979 containerd[1597]: time="2025-01-30T14:00:29.729451882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:00:29.730979 containerd[1597]: time="2025-01-30T14:00:29.729563211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:00:29.787163 containerd[1597]: time="2025-01-30T14:00:29.786946300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xklv9,Uid:f15676f9-64ac-4845-8496-94230fe08576,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dd6a9831b34837b0906789b4ea1ebffc078c09599254cecf79a6b69176acee6\""
Jan 30 14:00:29.789620 kubelet[2709]: E0130 14:00:29.788800 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:29.792654 containerd[1597]: time="2025-01-30T14:00:29.792169684Z" level=info msg="CreateContainer within sandbox \"6dd6a9831b34837b0906789b4ea1ebffc078c09599254cecf79a6b69176acee6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 14:00:29.832559 containerd[1597]: time="2025-01-30T14:00:29.832511717Z" level=info msg="CreateContainer within sandbox \"6dd6a9831b34837b0906789b4ea1ebffc078c09599254cecf79a6b69176acee6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6e34f55eadf443f96b2c89af8bdf113534c46d3336ac70f11a1200967ffff756\""
Jan 30 14:00:29.833786 containerd[1597]: time="2025-01-30T14:00:29.833408834Z" level=info msg="StartContainer for \"6e34f55eadf443f96b2c89af8bdf113534c46d3336ac70f11a1200967ffff756\""
Jan 30 14:00:29.876490 containerd[1597]: time="2025-01-30T14:00:29.876338325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-b22xb,Uid:fd459557-3582-4316-a7c1-9b3a8c627c51,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5a29f002cba3f750ad7c427673efd6d9458f6430d5510d4ad9fe33eac9915c2\""
Jan 30 14:00:29.878458 kubelet[2709]: E0130 14:00:29.877397 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:29.883436 containerd[1597]: time="2025-01-30T14:00:29.883368986Z" level=info msg="CreateContainer within sandbox \"b5a29f002cba3f750ad7c427673efd6d9458f6430d5510d4ad9fe33eac9915c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 14:00:29.901731 containerd[1597]: time="2025-01-30T14:00:29.901565676Z" level=info msg="CreateContainer within sandbox \"b5a29f002cba3f750ad7c427673efd6d9458f6430d5510d4ad9fe33eac9915c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad71d3e64a60310ace0e25723a8ec4fefea48de0f9b1699e933d831a20295a46\""
Jan 30 14:00:29.905291 containerd[1597]: time="2025-01-30T14:00:29.904109192Z" level=info msg="StartContainer for \"ad71d3e64a60310ace0e25723a8ec4fefea48de0f9b1699e933d831a20295a46\""
Jan 30 14:00:29.931046 containerd[1597]: time="2025-01-30T14:00:29.930048425Z" level=info msg="StartContainer for \"6e34f55eadf443f96b2c89af8bdf113534c46d3336ac70f11a1200967ffff756\" returns successfully"
Jan 30 14:00:29.984474 containerd[1597]: time="2025-01-30T14:00:29.984172882Z" level=info msg="StartContainer for \"ad71d3e64a60310ace0e25723a8ec4fefea48de0f9b1699e933d831a20295a46\" returns successfully"
Jan 30 14:00:30.550557 kubelet[2709]: E0130 14:00:30.550100 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:30.551998 kubelet[2709]: E0130 14:00:30.551609 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:30.567798 kubelet[2709]: I0130 14:00:30.565323 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-b22xb" podStartSLOduration=25.565301399 podStartE2EDuration="25.565301399s" podCreationTimestamp="2025-01-30 14:00:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:00:30.565106217 +0000 UTC m=+40.363599573" watchObservedRunningTime="2025-01-30 14:00:30.565301399 +0000 UTC m=+40.363794755"
Jan 30 14:00:30.582296 kubelet[2709]: I0130 14:00:30.581799 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xklv9" podStartSLOduration=25.581777305 podStartE2EDuration="25.581777305s" podCreationTimestamp="2025-01-30 14:00:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:00:30.581471791 +0000 UTC m=+40.379965162" watchObservedRunningTime="2025-01-30 14:00:30.581777305 +0000 UTC m=+40.380270655"
Jan 30 14:00:30.673001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150964794.mount: Deactivated successfully.
Jan 30 14:00:31.555030 kubelet[2709]: E0130 14:00:31.554578 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:31.555030 kubelet[2709]: E0130 14:00:31.554681 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:32.556825 kubelet[2709]: E0130 14:00:32.556473 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:32.556825 kubelet[2709]: E0130 14:00:32.556749 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:39.532629 systemd[1]: Started sshd@7-164.92.85.159:22-147.75.109.163:54590.service - OpenSSH per-connection server daemon (147.75.109.163:54590).
Jan 30 14:00:39.629559 sshd[4089]: Accepted publickey for core from 147.75.109.163 port 54590 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:00:39.631481 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:39.647265 systemd-logind[1563]: New session 8 of user core.
Jan 30 14:00:39.651728 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 14:00:40.209500 sshd[4089]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:40.213471 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit.
Jan 30 14:00:40.217605 systemd[1]: sshd@7-164.92.85.159:22-147.75.109.163:54590.service: Deactivated successfully.
Jan 30 14:00:40.222635 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 14:00:40.224321 systemd-logind[1563]: Removed session 8.
Jan 30 14:00:45.219990 systemd[1]: Started sshd@8-164.92.85.159:22-147.75.109.163:54600.service - OpenSSH per-connection server daemon (147.75.109.163:54600).
Jan 30 14:00:45.267305 sshd[4104]: Accepted publickey for core from 147.75.109.163 port 54600 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:00:45.268994 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:45.274851 systemd-logind[1563]: New session 9 of user core.
Jan 30 14:00:45.283879 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 14:00:45.446498 sshd[4104]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:45.451616 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit.
Jan 30 14:00:45.451932 systemd[1]: sshd@8-164.92.85.159:22-147.75.109.163:54600.service: Deactivated successfully.
Jan 30 14:00:45.459249 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 14:00:45.460809 systemd-logind[1563]: Removed session 9.
Jan 30 14:00:50.459736 systemd[1]: Started sshd@9-164.92.85.159:22-147.75.109.163:55762.service - OpenSSH per-connection server daemon (147.75.109.163:55762).
Jan 30 14:00:50.533511 sshd[4121]: Accepted publickey for core from 147.75.109.163 port 55762 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:00:50.535058 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:50.540192 systemd-logind[1563]: New session 10 of user core.
Jan 30 14:00:50.548698 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 14:00:50.679486 sshd[4121]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:50.683300 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit.
Jan 30 14:00:50.683955 systemd[1]: sshd@9-164.92.85.159:22-147.75.109.163:55762.service: Deactivated successfully.
Jan 30 14:00:50.689997 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 14:00:50.693062 systemd-logind[1563]: Removed session 10.
Jan 30 14:00:55.690273 systemd[1]: Started sshd@10-164.92.85.159:22-147.75.109.163:55764.service - OpenSSH per-connection server daemon (147.75.109.163:55764).
Jan 30 14:00:55.731823 sshd[4136]: Accepted publickey for core from 147.75.109.163 port 55764 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:00:55.732451 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:55.737564 systemd-logind[1563]: New session 11 of user core.
Jan 30 14:00:55.745748 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 14:00:55.888207 sshd[4136]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:55.894113 systemd[1]: sshd@10-164.92.85.159:22-147.75.109.163:55764.service: Deactivated successfully.
Jan 30 14:00:55.903180 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 14:00:55.905039 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit.
Jan 30 14:00:55.918691 systemd[1]: Started sshd@11-164.92.85.159:22-147.75.109.163:55776.service - OpenSSH per-connection server daemon (147.75.109.163:55776).
Jan 30 14:00:55.921554 systemd-logind[1563]: Removed session 11.
Jan 30 14:00:55.978007 sshd[4151]: Accepted publickey for core from 147.75.109.163 port 55776 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:00:55.980300 sshd[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:55.987708 systemd-logind[1563]: New session 12 of user core.
Jan 30 14:00:55.991820 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 14:00:56.172615 sshd[4151]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:56.186705 systemd[1]: Started sshd@12-164.92.85.159:22-147.75.109.163:55784.service - OpenSSH per-connection server daemon (147.75.109.163:55784).
Jan 30 14:00:56.187244 systemd[1]: sshd@11-164.92.85.159:22-147.75.109.163:55776.service: Deactivated successfully.
Jan 30 14:00:56.194533 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 14:00:56.198499 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit.
Jan 30 14:00:56.203858 systemd-logind[1563]: Removed session 12.
Jan 30 14:00:56.249981 sshd[4159]: Accepted publickey for core from 147.75.109.163 port 55784 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:00:56.251441 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:00:56.259537 systemd-logind[1563]: New session 13 of user core.
Jan 30 14:00:56.266780 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 14:00:56.397916 sshd[4159]: pam_unix(sshd:session): session closed for user core
Jan 30 14:00:56.403931 systemd[1]: sshd@12-164.92.85.159:22-147.75.109.163:55784.service: Deactivated successfully.
Jan 30 14:00:56.404500 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit.
Jan 30 14:00:56.409956 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 14:00:56.411732 systemd-logind[1563]: Removed session 13.
Jan 30 14:01:01.410984 systemd[1]: Started sshd@13-164.92.85.159:22-147.75.109.163:50320.service - OpenSSH per-connection server daemon (147.75.109.163:50320).
Jan 30 14:01:01.453287 sshd[4175]: Accepted publickey for core from 147.75.109.163 port 50320 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:01.456220 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:01.463327 systemd-logind[1563]: New session 14 of user core.
Jan 30 14:01:01.468789 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 14:01:01.632440 sshd[4175]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:01.642042 systemd[1]: sshd@13-164.92.85.159:22-147.75.109.163:50320.service: Deactivated successfully.
Jan 30 14:01:01.648678 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 14:01:01.650979 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit.
Jan 30 14:01:01.653011 systemd-logind[1563]: Removed session 14.
Jan 30 14:01:06.646624 systemd[1]: Started sshd@14-164.92.85.159:22-147.75.109.163:50334.service - OpenSSH per-connection server daemon (147.75.109.163:50334).
Jan 30 14:01:06.689738 sshd[4190]: Accepted publickey for core from 147.75.109.163 port 50334 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:06.690556 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:06.698102 systemd-logind[1563]: New session 15 of user core.
Jan 30 14:01:06.706181 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 14:01:06.843482 sshd[4190]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:06.847456 systemd[1]: sshd@14-164.92.85.159:22-147.75.109.163:50334.service: Deactivated successfully.
Jan 30 14:01:06.851907 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 14:01:06.853099 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit.
Jan 30 14:01:06.854486 systemd-logind[1563]: Removed session 15.
Jan 30 14:01:11.853657 systemd[1]: Started sshd@15-164.92.85.159:22-147.75.109.163:56368.service - OpenSSH per-connection server daemon (147.75.109.163:56368).
Jan 30 14:01:11.901374 sshd[4203]: Accepted publickey for core from 147.75.109.163 port 56368 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:11.903652 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:11.911411 systemd-logind[1563]: New session 16 of user core.
Jan 30 14:01:11.916668 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 14:01:12.054479 sshd[4203]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:12.059684 systemd[1]: sshd@15-164.92.85.159:22-147.75.109.163:56368.service: Deactivated successfully.
Jan 30 14:01:12.065361 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit.
Jan 30 14:01:12.070743 systemd[1]: Started sshd@16-164.92.85.159:22-147.75.109.163:56382.service - OpenSSH per-connection server daemon (147.75.109.163:56382).
Jan 30 14:01:12.071564 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 14:01:12.075121 systemd-logind[1563]: Removed session 16.
Jan 30 14:01:12.120830 sshd[4217]: Accepted publickey for core from 147.75.109.163 port 56382 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:12.123436 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:12.129369 systemd-logind[1563]: New session 17 of user core.
Jan 30 14:01:12.137779 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 14:01:12.472714 sshd[4217]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:12.481991 systemd[1]: Started sshd@17-164.92.85.159:22-147.75.109.163:56388.service - OpenSSH per-connection server daemon (147.75.109.163:56388).
Jan 30 14:01:12.485210 systemd[1]: sshd@16-164.92.85.159:22-147.75.109.163:56382.service: Deactivated successfully.
Jan 30 14:01:12.492101 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 14:01:12.495439 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit.
Jan 30 14:01:12.499170 systemd-logind[1563]: Removed session 17.
Jan 30 14:01:12.548764 sshd[4226]: Accepted publickey for core from 147.75.109.163 port 56388 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:12.551708 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:12.560265 systemd-logind[1563]: New session 18 of user core.
Jan 30 14:01:12.569682 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 14:01:14.415755 sshd[4226]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:14.432831 systemd[1]: Started sshd@18-164.92.85.159:22-147.75.109.163:56400.service - OpenSSH per-connection server daemon (147.75.109.163:56400).
Jan 30 14:01:14.433527 systemd[1]: sshd@17-164.92.85.159:22-147.75.109.163:56388.service: Deactivated successfully.
Jan 30 14:01:14.446580 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit.
Jan 30 14:01:14.447457 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 14:01:14.451100 systemd-logind[1563]: Removed session 18.
Jan 30 14:01:14.497156 sshd[4245]: Accepted publickey for core from 147.75.109.163 port 56400 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:14.499326 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:14.508621 systemd-logind[1563]: New session 19 of user core.
Jan 30 14:01:14.519885 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 14:01:14.852228 sshd[4245]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:14.871686 systemd[1]: Started sshd@19-164.92.85.159:22-147.75.109.163:56408.service - OpenSSH per-connection server daemon (147.75.109.163:56408).
Jan 30 14:01:14.874373 systemd[1]: sshd@18-164.92.85.159:22-147.75.109.163:56400.service: Deactivated successfully.
Jan 30 14:01:14.881898 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 14:01:14.884141 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit.
Jan 30 14:01:14.887178 systemd-logind[1563]: Removed session 19.
Jan 30 14:01:14.919323 sshd[4257]: Accepted publickey for core from 147.75.109.163 port 56408 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:14.921636 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:14.927041 systemd-logind[1563]: New session 20 of user core.
Jan 30 14:01:14.936997 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 14:01:15.078982 sshd[4257]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:15.083182 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit.
Jan 30 14:01:15.083885 systemd[1]: sshd@19-164.92.85.159:22-147.75.109.163:56408.service: Deactivated successfully.
Jan 30 14:01:15.089347 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 14:01:15.092531 systemd-logind[1563]: Removed session 20.
Jan 30 14:01:16.329381 kubelet[2709]: E0130 14:01:16.329230 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:20.090931 systemd[1]: Started sshd@20-164.92.85.159:22-147.75.109.163:44064.service - OpenSSH per-connection server daemon (147.75.109.163:44064).
Jan 30 14:01:20.145279 sshd[4273]: Accepted publickey for core from 147.75.109.163 port 44064 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:20.146544 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:20.151914 systemd-logind[1563]: New session 21 of user core.
Jan 30 14:01:20.168940 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 14:01:20.300054 sshd[4273]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:20.303385 systemd[1]: sshd@20-164.92.85.159:22-147.75.109.163:44064.service: Deactivated successfully.
Jan 30 14:01:20.308711 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit.
Jan 30 14:01:20.309515 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 14:01:20.310925 systemd-logind[1563]: Removed session 21.
Jan 30 14:01:23.329886 kubelet[2709]: E0130 14:01:23.329288 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:25.311603 systemd[1]: Started sshd@21-164.92.85.159:22-147.75.109.163:44066.service - OpenSSH per-connection server daemon (147.75.109.163:44066).
Jan 30 14:01:25.359751 sshd[4290]: Accepted publickey for core from 147.75.109.163 port 44066 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:25.362169 sshd[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:25.368314 systemd-logind[1563]: New session 22 of user core.
Jan 30 14:01:25.374612 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 14:01:25.507576 sshd[4290]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:25.511421 systemd[1]: sshd@21-164.92.85.159:22-147.75.109.163:44066.service: Deactivated successfully.
Jan 30 14:01:25.517906 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit.
Jan 30 14:01:25.518900 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 14:01:25.521308 systemd-logind[1563]: Removed session 22.
Jan 30 14:01:26.329534 kubelet[2709]: E0130 14:01:26.329462 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:29.328511 kubelet[2709]: E0130 14:01:29.328078 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:29.328511 kubelet[2709]: E0130 14:01:29.328355 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:30.530447 systemd[1]: Started sshd@22-164.92.85.159:22-147.75.109.163:45206.service - OpenSSH per-connection server daemon (147.75.109.163:45206).
Jan 30 14:01:30.628766 sshd[4304]: Accepted publickey for core from 147.75.109.163 port 45206 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:30.632354 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:30.639579 systemd-logind[1563]: New session 23 of user core.
Jan 30 14:01:30.648192 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 14:01:30.812224 sshd[4304]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:30.819999 systemd[1]: sshd@22-164.92.85.159:22-147.75.109.163:45206.service: Deactivated successfully.
Jan 30 14:01:30.822429 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit.
Jan 30 14:01:30.828042 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 14:01:30.829464 systemd-logind[1563]: Removed session 23.
Jan 30 14:01:35.822576 systemd[1]: Started sshd@23-164.92.85.159:22-147.75.109.163:45208.service - OpenSSH per-connection server daemon (147.75.109.163:45208).
Jan 30 14:01:35.867089 sshd[4318]: Accepted publickey for core from 147.75.109.163 port 45208 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:35.869702 sshd[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:35.875446 systemd-logind[1563]: New session 24 of user core. Jan 30 14:01:35.885685 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 14:01:36.020681 sshd[4318]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:36.033989 systemd[1]: Started sshd@24-164.92.85.159:22-147.75.109.163:45222.service - OpenSSH per-connection server daemon (147.75.109.163:45222). Jan 30 14:01:36.035194 systemd[1]: sshd@23-164.92.85.159:22-147.75.109.163:45208.service: Deactivated successfully. Jan 30 14:01:36.038992 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 14:01:36.044621 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit. Jan 30 14:01:36.046546 systemd-logind[1563]: Removed session 24. Jan 30 14:01:36.098795 sshd[4330]: Accepted publickey for core from 147.75.109.163 port 45222 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:36.103886 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:36.109920 systemd-logind[1563]: New session 25 of user core. Jan 30 14:01:36.121721 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 14:01:37.567135 containerd[1597]: time="2025-01-30T14:01:37.567017025Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:01:37.576280 containerd[1597]: time="2025-01-30T14:01:37.576208704Z" level=info msg="StopContainer for \"f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718\" with timeout 2 (s)" Jan 30 14:01:37.576550 containerd[1597]: time="2025-01-30T14:01:37.576217437Z" level=info msg="StopContainer for \"66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e\" with timeout 30 (s)" Jan 30 14:01:37.578784 containerd[1597]: time="2025-01-30T14:01:37.578745046Z" level=info msg="Stop container \"f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718\" with signal terminated" Jan 30 14:01:37.579137 containerd[1597]: time="2025-01-30T14:01:37.578987397Z" level=info msg="Stop container \"66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e\" with signal terminated" Jan 30 14:01:37.591778 systemd-networkd[1226]: lxc_health: Link DOWN Jan 30 14:01:37.591785 systemd-networkd[1226]: lxc_health: Lost carrier Jan 30 14:01:37.646627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718-rootfs.mount: Deactivated successfully. Jan 30 14:01:37.655885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e-rootfs.mount: Deactivated successfully. 
Jan 30 14:01:37.660354 containerd[1597]: time="2025-01-30T14:01:37.660230252Z" level=info msg="shim disconnected" id=66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e namespace=k8s.io Jan 30 14:01:37.660909 containerd[1597]: time="2025-01-30T14:01:37.660335378Z" level=warning msg="cleaning up after shim disconnected" id=66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e namespace=k8s.io Jan 30 14:01:37.660909 containerd[1597]: time="2025-01-30T14:01:37.660656936Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:37.660909 containerd[1597]: time="2025-01-30T14:01:37.660551827Z" level=info msg="shim disconnected" id=f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718 namespace=k8s.io Jan 30 14:01:37.660909 containerd[1597]: time="2025-01-30T14:01:37.660755863Z" level=warning msg="cleaning up after shim disconnected" id=f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718 namespace=k8s.io Jan 30 14:01:37.660909 containerd[1597]: time="2025-01-30T14:01:37.660762558Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:37.681743 containerd[1597]: time="2025-01-30T14:01:37.681589239Z" level=info msg="StopContainer for \"66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e\" returns successfully" Jan 30 14:01:37.688719 containerd[1597]: time="2025-01-30T14:01:37.688553428Z" level=info msg="StopContainer for \"f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718\" returns successfully" Jan 30 14:01:37.691436 containerd[1597]: time="2025-01-30T14:01:37.689106378Z" level=info msg="StopPodSandbox for \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\"" Jan 30 14:01:37.691436 containerd[1597]: time="2025-01-30T14:01:37.689157556Z" level=info msg="Container to stop \"66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:37.691644 containerd[1597]: time="2025-01-30T14:01:37.691596345Z" level=info msg="StopPodSandbox for \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\"" Jan 30 14:01:37.691709 containerd[1597]: time="2025-01-30T14:01:37.691655653Z" level=info msg="Container to stop \"aa0876ec9222922ecaf0255093882ebc7a57c95531a5ed9b7544cba3bb34a780\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:37.691709 containerd[1597]: time="2025-01-30T14:01:37.691667903Z" level=info msg="Container to stop \"fbbbb2881896baac1be68dc82652e906447e0cbbe0918a74dac17c3f7cd8ed94\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:37.691709 containerd[1597]: time="2025-01-30T14:01:37.691677805Z" level=info msg="Container to stop \"4bb76d00f79ee98ed206e1d47a38ad7fbd2534ad26321b320dbefb3357efdbe5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:37.691709 containerd[1597]: time="2025-01-30T14:01:37.691689555Z" level=info msg="Container to stop \"c784453d9e328e2924efa80f368fa4a024108910a00c4c507af852d4e1b4e3c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:37.691709 containerd[1597]: time="2025-01-30T14:01:37.691700686Z" level=info msg="Container to stop \"f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:37.694129 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78-shm.mount: Deactivated successfully. Jan 30 14:01:37.700166 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a-shm.mount: Deactivated successfully. Jan 30 14:01:37.770311 containerd[1597]: time="2025-01-30T14:01:37.770221656Z" level=info msg="shim disconnected" id=aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78 namespace=k8s.io Jan 30 14:01:37.770311 containerd[1597]: time="2025-01-30T14:01:37.770304107Z" level=warning msg="cleaning up after shim disconnected" id=aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78 namespace=k8s.io Jan 30 14:01:37.770311 containerd[1597]: time="2025-01-30T14:01:37.770313997Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:37.773871 containerd[1597]: time="2025-01-30T14:01:37.773734170Z" level=info msg="shim disconnected" id=53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a namespace=k8s.io Jan 30 14:01:37.773871 containerd[1597]: time="2025-01-30T14:01:37.773787994Z" level=warning msg="cleaning up after shim disconnected" id=53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a namespace=k8s.io Jan 30 14:01:37.773871 containerd[1597]: time="2025-01-30T14:01:37.773797569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:37.797509 containerd[1597]: time="2025-01-30T14:01:37.796290073Z" level=info msg="TearDown network for sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" successfully" Jan 30 14:01:37.797509 containerd[1597]: time="2025-01-30T14:01:37.796371934Z" level=info msg="StopPodSandbox for \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" returns successfully" Jan 30 14:01:37.797798 containerd[1597]: time="2025-01-30T14:01:37.797769037Z" level=info msg="TearDown network for sandbox \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\" successfully" Jan 30 14:01:37.797862 containerd[1597]: time="2025-01-30T14:01:37.797850659Z" level=info msg="StopPodSandbox for \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\" returns successfully" Jan 30 14:01:37.934318 kubelet[2709]: I0130 14:01:37.933923 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-bpf-maps\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.934318 kubelet[2709]: I0130 14:01:37.933972 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-host-proc-sys-kernel\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.934318 kubelet[2709]: I0130 14:01:37.933999 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6hkw\" (UniqueName: \"kubernetes.io/projected/0b59539c-1f82-49a7-90d6-c5aa6f53206f-kube-api-access-w6hkw\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.934318 kubelet[2709]: I0130 14:01:37.934018 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-config-path\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.934318 kubelet[2709]: I0130 14:01:37.934035 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b59539c-1f82-49a7-90d6-c5aa6f53206f-hubble-tls\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.934318 kubelet[2709]: I0130 14:01:37.934053 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-host-proc-sys-net\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.935015 kubelet[2709]: I0130 14:01:37.934068 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-hostproc\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.935015 kubelet[2709]: I0130 14:01:37.934081 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-cgroup\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.935015 kubelet[2709]: I0130 14:01:37.934099 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acd30e0c-3c62-4798-a279-8398ac8e4373-cilium-config-path\") pod \"acd30e0c-3c62-4798-a279-8398ac8e4373\" (UID: \"acd30e0c-3c62-4798-a279-8398ac8e4373\") " Jan 30 14:01:37.935015 kubelet[2709]: I0130 14:01:37.934117 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b59539c-1f82-49a7-90d6-c5aa6f53206f-clustermesh-secrets\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.935015 kubelet[2709]: I0130 14:01:37.934130 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cni-path\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.935015 kubelet[2709]: I0130 14:01:37.934145 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-etc-cni-netd\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.935183 kubelet[2709]: I0130 14:01:37.934162 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll9kc\" (UniqueName: \"kubernetes.io/projected/acd30e0c-3c62-4798-a279-8398ac8e4373-kube-api-access-ll9kc\") pod \"acd30e0c-3c62-4798-a279-8398ac8e4373\" (UID: \"acd30e0c-3c62-4798-a279-8398ac8e4373\") " Jan 30 14:01:37.935183 kubelet[2709]: I0130 14:01:37.934177 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-run\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.935183 kubelet[2709]: I0130 14:01:37.934192 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-xtables-lock\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.935183 kubelet[2709]: I0130 14:01:37.934205 2709 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-lib-modules\") pod \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\" (UID: \"0b59539c-1f82-49a7-90d6-c5aa6f53206f\") " Jan 30 14:01:37.935787 kubelet[2709]: I0130 14:01:37.934346 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:37.937112 kubelet[2709]: I0130 14:01:37.934484 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:37.937222 kubelet[2709]: I0130 14:01:37.937212 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:37.937315 kubelet[2709]: I0130 14:01:37.937232 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:37.940268 kubelet[2709]: I0130 14:01:37.938871 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acd30e0c-3c62-4798-a279-8398ac8e4373-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "acd30e0c-3c62-4798-a279-8398ac8e4373" (UID: "acd30e0c-3c62-4798-a279-8398ac8e4373"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:01:37.941922 kubelet[2709]: I0130 14:01:37.941850 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b59539c-1f82-49a7-90d6-c5aa6f53206f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:01:37.943137 kubelet[2709]: I0130 14:01:37.943059 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cni-path" (OuterVolumeSpecName: "cni-path") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:37.943137 kubelet[2709]: I0130 14:01:37.943123 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:37.946350 kubelet[2709]: I0130 14:01:37.946303 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:37.946472 kubelet[2709]: I0130 14:01:37.946367 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:37.949621 kubelet[2709]: I0130 14:01:37.949566 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:37.949621 kubelet[2709]: I0130 14:01:37.949621 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-hostproc" (OuterVolumeSpecName: "hostproc") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:37.951350 kubelet[2709]: I0130 14:01:37.951309 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b59539c-1f82-49a7-90d6-c5aa6f53206f-kube-api-access-w6hkw" (OuterVolumeSpecName: "kube-api-access-w6hkw") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "kube-api-access-w6hkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:01:37.951451 kubelet[2709]: I0130 14:01:37.951367 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd30e0c-3c62-4798-a279-8398ac8e4373-kube-api-access-ll9kc" (OuterVolumeSpecName: "kube-api-access-ll9kc") pod "acd30e0c-3c62-4798-a279-8398ac8e4373" (UID: "acd30e0c-3c62-4798-a279-8398ac8e4373"). InnerVolumeSpecName "kube-api-access-ll9kc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:01:37.951451 kubelet[2709]: I0130 14:01:37.951389 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b59539c-1f82-49a7-90d6-c5aa6f53206f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:01:37.951961 kubelet[2709]: I0130 14:01:37.951859 2709 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b59539c-1f82-49a7-90d6-c5aa6f53206f" (UID: "0b59539c-1f82-49a7-90d6-c5aa6f53206f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:01:38.034834 kubelet[2709]: I0130 14:01:38.034746 2709 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-bpf-maps\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.034834 kubelet[2709]: I0130 14:01:38.034791 2709 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-host-proc-sys-kernel\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.034834 kubelet[2709]: I0130 14:01:38.034804 2709 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w6hkw\" (UniqueName: \"kubernetes.io/projected/0b59539c-1f82-49a7-90d6-c5aa6f53206f-kube-api-access-w6hkw\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.034834 kubelet[2709]: I0130 14:01:38.034815 2709 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-config-path\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.034834 kubelet[2709]: I0130 14:01:38.034830 2709 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b59539c-1f82-49a7-90d6-c5aa6f53206f-hubble-tls\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.034834 kubelet[2709]: I0130 14:01:38.034842 2709 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-host-proc-sys-net\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.034834 kubelet[2709]: I0130 14:01:38.034857 2709 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-hostproc\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.034834 kubelet[2709]: I0130 14:01:38.034865 2709 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-cgroup\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.035310 kubelet[2709]: I0130 14:01:38.034876 2709 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acd30e0c-3c62-4798-a279-8398ac8e4373-cilium-config-path\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.035310 
kubelet[2709]: I0130 14:01:38.034885 2709 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b59539c-1f82-49a7-90d6-c5aa6f53206f-clustermesh-secrets\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.035310 kubelet[2709]: I0130 14:01:38.034896 2709 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cni-path\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.035310 kubelet[2709]: I0130 14:01:38.034912 2709 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-etc-cni-netd\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.035310 kubelet[2709]: I0130 14:01:38.034923 2709 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-xtables-lock\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.035310 kubelet[2709]: I0130 14:01:38.034933 2709 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ll9kc\" (UniqueName: \"kubernetes.io/projected/acd30e0c-3c62-4798-a279-8398ac8e4373-kube-api-access-ll9kc\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.035310 kubelet[2709]: I0130 14:01:38.034942 2709 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-cilium-run\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.035310 kubelet[2709]: I0130 14:01:38.034954 2709 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b59539c-1f82-49a7-90d6-c5aa6f53206f-lib-modules\") on node \"ci-4081.3.0-9-9df89b74d7\" DevicePath \"\"" Jan 30 14:01:38.545792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a-rootfs.mount: Deactivated successfully. Jan 30 14:01:38.545992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78-rootfs.mount: Deactivated successfully. Jan 30 14:01:38.546095 systemd[1]: var-lib-kubelet-pods-0b59539c\x2d1f82\x2d49a7\x2d90d6\x2dc5aa6f53206f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw6hkw.mount: Deactivated successfully. Jan 30 14:01:38.546190 systemd[1]: var-lib-kubelet-pods-0b59539c\x2d1f82\x2d49a7\x2d90d6\x2dc5aa6f53206f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 14:01:38.546317 systemd[1]: var-lib-kubelet-pods-0b59539c\x2d1f82\x2d49a7\x2d90d6\x2dc5aa6f53206f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 14:01:38.546668 systemd[1]: var-lib-kubelet-pods-acd30e0c\x2d3c62\x2d4798\x2da279\x2d8398ac8e4373-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dll9kc.mount: Deactivated successfully. 
Jan 30 14:01:38.724452 kubelet[2709]: I0130 14:01:38.723756 2709 scope.go:117] "RemoveContainer" containerID="66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e" Jan 30 14:01:38.727325 containerd[1597]: time="2025-01-30T14:01:38.727277974Z" level=info msg="RemoveContainer for \"66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e\"" Jan 30 14:01:38.740292 containerd[1597]: time="2025-01-30T14:01:38.738828016Z" level=info msg="RemoveContainer for \"66a4cdb1590b7e5612de0f95218197b4510ffbf2513d3dd81205a558a2e2a70e\" returns successfully" Jan 30 14:01:38.751525 kubelet[2709]: I0130 14:01:38.751481 2709 scope.go:117] "RemoveContainer" containerID="f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718" Jan 30 14:01:38.774343 containerd[1597]: time="2025-01-30T14:01:38.774299407Z" level=info msg="RemoveContainer for \"f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718\"" Jan 30 14:01:38.778027 containerd[1597]: time="2025-01-30T14:01:38.777965086Z" level=info msg="RemoveContainer for \"f9cac2ce19534155c4628e3eac476336521d986f0cc3cdf0d28ebcd7b5e11718\" returns successfully" Jan 30 14:01:38.778729 kubelet[2709]: I0130 14:01:38.778691 2709 scope.go:117] "RemoveContainer" containerID="4bb76d00f79ee98ed206e1d47a38ad7fbd2534ad26321b320dbefb3357efdbe5" Jan 30 14:01:38.782694 containerd[1597]: time="2025-01-30T14:01:38.782601888Z" level=info msg="RemoveContainer for \"4bb76d00f79ee98ed206e1d47a38ad7fbd2534ad26321b320dbefb3357efdbe5\"" Jan 30 14:01:38.790767 containerd[1597]: time="2025-01-30T14:01:38.790698557Z" level=info msg="RemoveContainer for \"4bb76d00f79ee98ed206e1d47a38ad7fbd2534ad26321b320dbefb3357efdbe5\" returns successfully" Jan 30 14:01:38.791674 kubelet[2709]: I0130 14:01:38.791645 2709 scope.go:117] "RemoveContainer" containerID="fbbbb2881896baac1be68dc82652e906447e0cbbe0918a74dac17c3f7cd8ed94" Jan 30 14:01:38.793584 containerd[1597]: time="2025-01-30T14:01:38.793449674Z" level=info msg="RemoveContainer for \"fbbbb2881896baac1be68dc82652e906447e0cbbe0918a74dac17c3f7cd8ed94\"" Jan 30 14:01:38.797269 containerd[1597]: time="2025-01-30T14:01:38.796185178Z" level=info msg="RemoveContainer for \"fbbbb2881896baac1be68dc82652e906447e0cbbe0918a74dac17c3f7cd8ed94\" returns successfully" Jan 30 14:01:38.797396 kubelet[2709]: I0130 14:01:38.796524 2709 scope.go:117] "RemoveContainer" containerID="aa0876ec9222922ecaf0255093882ebc7a57c95531a5ed9b7544cba3bb34a780" Jan 30 14:01:38.799376 containerd[1597]: time="2025-01-30T14:01:38.799206935Z" level=info msg="RemoveContainer for \"aa0876ec9222922ecaf0255093882ebc7a57c95531a5ed9b7544cba3bb34a780\"" Jan 30 14:01:38.803177 containerd[1597]: time="2025-01-30T14:01:38.803102482Z" level=info msg="RemoveContainer for \"aa0876ec9222922ecaf0255093882ebc7a57c95531a5ed9b7544cba3bb34a780\" returns successfully" Jan 30 14:01:38.803874 kubelet[2709]: I0130 14:01:38.803518 2709 scope.go:117] "RemoveContainer" containerID="c784453d9e328e2924efa80f368fa4a024108910a00c4c507af852d4e1b4e3c6" Jan 30 14:01:38.805518 containerd[1597]: time="2025-01-30T14:01:38.805451735Z" level=info msg="RemoveContainer for \"c784453d9e328e2924efa80f368fa4a024108910a00c4c507af852d4e1b4e3c6\"" Jan 30 14:01:38.808707 containerd[1597]: time="2025-01-30T14:01:38.808556078Z" level=info msg="RemoveContainer for \"c784453d9e328e2924efa80f368fa4a024108910a00c4c507af852d4e1b4e3c6\" returns successfully" Jan 30 14:01:39.328289 kubelet[2709]: E0130 14:01:39.328099 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:01:39.451153 sshd[4330]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:39.465733 systemd[1]: Started sshd@25-164.92.85.159:22-147.75.109.163:59652.service - OpenSSH per-connection server daemon (147.75.109.163:59652). Jan 30 14:01:39.467424 systemd[1]: sshd@24-164.92.85.159:22-147.75.109.163:45222.service: Deactivated successfully. Jan 30 14:01:39.469997 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 14:01:39.474072 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit. Jan 30 14:01:39.476428 systemd-logind[1563]: Removed session 25. Jan 30 14:01:39.510883 sshd[4494]: Accepted publickey for core from 147.75.109.163 port 59652 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:39.513381 sshd[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:39.519345 systemd-logind[1563]: New session 26 of user core. Jan 30 14:01:39.529660 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 14:01:40.169220 sshd[4494]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:40.177604 systemd[1]: Started sshd@26-164.92.85.159:22-147.75.109.163:59668.service - OpenSSH per-connection server daemon (147.75.109.163:59668). Jan 30 14:01:40.178208 systemd[1]: sshd@25-164.92.85.159:22-147.75.109.163:59652.service: Deactivated successfully. Jan 30 14:01:40.191104 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 14:01:40.196366 systemd-logind[1563]: Session 26 logged out. Waiting for processes to exit. Jan 30 14:01:40.203671 systemd-logind[1563]: Removed session 26. Jan 30 14:01:40.231159 kubelet[2709]: I0130 14:01:40.222939 2709 topology_manager.go:215] "Topology Admit Handler" podUID="a24fb7a3-7a5f-49a1-bc0e-6df1206317b1" podNamespace="kube-system" podName="cilium-ldwvq" Jan 30 14:01:40.234005 kubelet[2709]: E0130 14:01:40.233085 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b59539c-1f82-49a7-90d6-c5aa6f53206f" containerName="mount-bpf-fs" Jan 30 14:01:40.234005 kubelet[2709]: E0130 14:01:40.233325 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b59539c-1f82-49a7-90d6-c5aa6f53206f" containerName="cilium-agent" Jan 30 14:01:40.234883 kubelet[2709]: E0130 14:01:40.234270 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="acd30e0c-3c62-4798-a279-8398ac8e4373" containerName="cilium-operator" Jan 30 14:01:40.234883 kubelet[2709]: E0130 14:01:40.234291 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b59539c-1f82-49a7-90d6-c5aa6f53206f" containerName="mount-cgroup" Jan 30 14:01:40.234883 kubelet[2709]: E0130 14:01:40.234355 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b59539c-1f82-49a7-90d6-c5aa6f53206f" containerName="apply-sysctl-overwrites" Jan 30 14:01:40.234883 kubelet[2709]: E0130 14:01:40.234364 2709 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b59539c-1f82-49a7-90d6-c5aa6f53206f" containerName="clean-cilium-state" Jan 30 14:01:40.249645 kubelet[2709]: I0130 14:01:40.234411 2709 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd30e0c-3c62-4798-a279-8398ac8e4373" containerName="cilium-operator" Jan 30 14:01:40.249645 kubelet[2709]: I0130 14:01:40.249656 2709 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b59539c-1f82-49a7-90d6-c5aa6f53206f" 
containerName="cilium-agent" Jan 30 14:01:40.264269 sshd[4507]: Accepted publickey for core from 147.75.109.163 port 59668 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:40.272211 sshd[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:40.295322 systemd-logind[1563]: New session 27 of user core. Jan 30 14:01:40.301682 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 14:01:40.332931 kubelet[2709]: I0130 14:01:40.332280 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b59539c-1f82-49a7-90d6-c5aa6f53206f" path="/var/lib/kubelet/pods/0b59539c-1f82-49a7-90d6-c5aa6f53206f/volumes" Jan 30 14:01:40.333440 kubelet[2709]: I0130 14:01:40.333185 2709 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd30e0c-3c62-4798-a279-8398ac8e4373" path="/var/lib/kubelet/pods/acd30e0c-3c62-4798-a279-8398ac8e4373/volumes" Jan 30 14:01:40.354489 kubelet[2709]: I0130 14:01:40.354433 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-etc-cni-netd\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.354489 kubelet[2709]: I0130 14:01:40.354493 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-cilium-ipsec-secrets\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.354848 kubelet[2709]: I0130 14:01:40.354516 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq7ch\" (UniqueName: \"kubernetes.io/projected/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-kube-api-access-rq7ch\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.354848 kubelet[2709]: I0130 14:01:40.354535 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-cni-path\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.354848 kubelet[2709]: I0130 14:01:40.354552 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-clustermesh-secrets\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.354848 kubelet[2709]: I0130 14:01:40.354567 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-hubble-tls\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.354848 kubelet[2709]: I0130 14:01:40.354589 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-hostproc\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " 
pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.354848 kubelet[2709]: I0130 14:01:40.354605 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-cilium-cgroup\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.355129 kubelet[2709]: I0130 14:01:40.354624 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-cilium-config-path\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.355129 kubelet[2709]: I0130 14:01:40.354659 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-host-proc-sys-net\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.355129 kubelet[2709]: I0130 14:01:40.354681 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-host-proc-sys-kernel\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.355129 kubelet[2709]: I0130 14:01:40.354725 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-cilium-run\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.355129 kubelet[2709]: I0130 14:01:40.354745 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-xtables-lock\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.355129 kubelet[2709]: I0130 14:01:40.354779 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-bpf-maps\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.355346 kubelet[2709]: I0130 14:01:40.354801 2709 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a24fb7a3-7a5f-49a1-bc0e-6df1206317b1-lib-modules\") pod \"cilium-ldwvq\" (UID: \"a24fb7a3-7a5f-49a1-bc0e-6df1206317b1\") " pod="kube-system/cilium-ldwvq" Jan 30 14:01:40.369100 sshd[4507]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:40.375122 systemd[1]: sshd@26-164.92.85.159:22-147.75.109.163:59668.service: Deactivated successfully. Jan 30 14:01:40.378125 systemd-logind[1563]: Session 27 logged out. Waiting for processes to exit. Jan 30 14:01:40.385742 systemd[1]: Started sshd@27-164.92.85.159:22-147.75.109.163:59678.service - OpenSSH per-connection server daemon (147.75.109.163:59678). 
Jan 30 14:01:40.386177 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 14:01:40.389096 systemd-logind[1563]: Removed session 27. Jan 30 14:01:40.429724 sshd[4519]: Accepted publickey for core from 147.75.109.163 port 59678 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:40.431482 sshd[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:40.439302 systemd-logind[1563]: New session 28 of user core. Jan 30 14:01:40.442646 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 30 14:01:40.466319 kubelet[2709]: E0130 14:01:40.456877 2709 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 14:01:40.592274 kubelet[2709]: E0130 14:01:40.589587 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:01:40.595455 containerd[1597]: time="2025-01-30T14:01:40.593143680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ldwvq,Uid:a24fb7a3-7a5f-49a1-bc0e-6df1206317b1,Namespace:kube-system,Attempt:0,}" Jan 30 14:01:40.629271 containerd[1597]: time="2025-01-30T14:01:40.628808604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:01:40.630503 containerd[1597]: time="2025-01-30T14:01:40.630221640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:01:40.630503 containerd[1597]: time="2025-01-30T14:01:40.630295268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:40.630503 containerd[1597]: time="2025-01-30T14:01:40.630430980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:01:40.684470 containerd[1597]: time="2025-01-30T14:01:40.684344813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ldwvq,Uid:a24fb7a3-7a5f-49a1-bc0e-6df1206317b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb3142016b14a62f031d1f4083d98737089340959b10b3a747829ac770c4372d\"" Jan 30 14:01:40.685421 kubelet[2709]: E0130 14:01:40.685394 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:01:40.694583 containerd[1597]: time="2025-01-30T14:01:40.694494092Z" level=info msg="CreateContainer within sandbox \"cb3142016b14a62f031d1f4083d98737089340959b10b3a747829ac770c4372d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:01:40.710857 containerd[1597]: time="2025-01-30T14:01:40.710794131Z" level=info msg="CreateContainer within sandbox \"cb3142016b14a62f031d1f4083d98737089340959b10b3a747829ac770c4372d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f4f41a8f02409f3fc77233e5ed56a6844a42a7a86096917196c28be3e9af7dd\"" Jan 30 14:01:40.712132 containerd[1597]: time="2025-01-30T14:01:40.711560238Z" level=info msg="StartContainer for \"5f4f41a8f02409f3fc77233e5ed56a6844a42a7a86096917196c28be3e9af7dd\"" Jan 30 14:01:40.783332 containerd[1597]: time="2025-01-30T14:01:40.783261797Z" level=info msg="StartContainer for \"5f4f41a8f02409f3fc77233e5ed56a6844a42a7a86096917196c28be3e9af7dd\" returns successfully" Jan 30 14:01:40.850675 containerd[1597]: time="2025-01-30T14:01:40.850550458Z" level=info msg="shim disconnected" id=5f4f41a8f02409f3fc77233e5ed56a6844a42a7a86096917196c28be3e9af7dd namespace=k8s.io Jan 30 14:01:40.851025 containerd[1597]: time="2025-01-30T14:01:40.850818817Z" level=warning msg="cleaning up after shim disconnected" id=5f4f41a8f02409f3fc77233e5ed56a6844a42a7a86096917196c28be3e9af7dd namespace=k8s.io Jan 30 14:01:40.851025 containerd[1597]: time="2025-01-30T14:01:40.850834823Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:41.743488 kubelet[2709]: E0130 14:01:41.743443 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:01:41.749116 containerd[1597]: time="2025-01-30T14:01:41.749030031Z" level=info msg="CreateContainer within sandbox \"cb3142016b14a62f031d1f4083d98737089340959b10b3a747829ac770c4372d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:01:41.766207 containerd[1597]: time="2025-01-30T14:01:41.765716938Z" level=info msg="CreateContainer within sandbox \"cb3142016b14a62f031d1f4083d98737089340959b10b3a747829ac770c4372d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c26f455a99fe4c7d90b2047649771c8f4ce9782543272dda4acd22f4d5f6da8a\"" Jan 30 14:01:41.769724 containerd[1597]: time="2025-01-30T14:01:41.768316128Z" level=info msg="StartContainer for \"c26f455a99fe4c7d90b2047649771c8f4ce9782543272dda4acd22f4d5f6da8a\"" Jan 30 14:01:41.842998 containerd[1597]: time="2025-01-30T14:01:41.842913381Z" level=info msg="StartContainer for \"c26f455a99fe4c7d90b2047649771c8f4ce9782543272dda4acd22f4d5f6da8a\" returns successfully" Jan 30 14:01:41.876841 containerd[1597]: time="2025-01-30T14:01:41.876657254Z" level=info msg="shim disconnected" 
id=c26f455a99fe4c7d90b2047649771c8f4ce9782543272dda4acd22f4d5f6da8a namespace=k8s.io Jan 30 14:01:41.876841 containerd[1597]: time="2025-01-30T14:01:41.876830111Z" level=warning msg="cleaning up after shim disconnected" id=c26f455a99fe4c7d90b2047649771c8f4ce9782543272dda4acd22f4d5f6da8a namespace=k8s.io Jan 30 14:01:41.877223 containerd[1597]: time="2025-01-30T14:01:41.876843713Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:42.462714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c26f455a99fe4c7d90b2047649771c8f4ce9782543272dda4acd22f4d5f6da8a-rootfs.mount: Deactivated successfully. Jan 30 14:01:42.749277 kubelet[2709]: E0130 14:01:42.749035 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:01:42.757263 kubelet[2709]: I0130 14:01:42.756041 2709 setters.go:580] "Node became not ready" node="ci-4081.3.0-9-9df89b74d7" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T14:01:42Z","lastTransitionTime":"2025-01-30T14:01:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 14:01:42.758409 containerd[1597]: time="2025-01-30T14:01:42.758363080Z" level=info msg="CreateContainer within sandbox \"cb3142016b14a62f031d1f4083d98737089340959b10b3a747829ac770c4372d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 14:01:42.792866 containerd[1597]: time="2025-01-30T14:01:42.792751810Z" level=info msg="CreateContainer within sandbox \"cb3142016b14a62f031d1f4083d98737089340959b10b3a747829ac770c4372d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b03305e7e02cc534ccedcc74bb5c4202b99d6baceee62ead700a6711e547b47\"" Jan 30 14:01:42.793812 containerd[1597]: time="2025-01-30T14:01:42.793780311Z" level=info msg="StartContainer for \"8b03305e7e02cc534ccedcc74bb5c4202b99d6baceee62ead700a6711e547b47\"" Jan 30 14:01:42.865739 containerd[1597]: time="2025-01-30T14:01:42.865674965Z" level=info msg="StartContainer for \"8b03305e7e02cc534ccedcc74bb5c4202b99d6baceee62ead700a6711e547b47\" returns successfully" Jan 30 14:01:42.906463 containerd[1597]: time="2025-01-30T14:01:42.906387350Z" level=info msg="shim disconnected" id=8b03305e7e02cc534ccedcc74bb5c4202b99d6baceee62ead700a6711e547b47 namespace=k8s.io Jan 30 14:01:42.906919 containerd[1597]: time="2025-01-30T14:01:42.906704104Z" level=warning msg="cleaning up after shim disconnected" id=8b03305e7e02cc534ccedcc74bb5c4202b99d6baceee62ead700a6711e547b47 namespace=k8s.io Jan 30 14:01:42.906919 containerd[1597]: time="2025-01-30T14:01:42.906725005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:43.461881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b03305e7e02cc534ccedcc74bb5c4202b99d6baceee62ead700a6711e547b47-rootfs.mount: Deactivated successfully. 
Jan 30 14:01:43.758373 kubelet[2709]: E0130 14:01:43.755327 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:01:43.771206 containerd[1597]: time="2025-01-30T14:01:43.770686942Z" level=info msg="CreateContainer within sandbox \"cb3142016b14a62f031d1f4083d98737089340959b10b3a747829ac770c4372d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 14:01:43.793855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768021235.mount: Deactivated successfully. Jan 30 14:01:43.796034 containerd[1597]: time="2025-01-30T14:01:43.795400067Z" level=info msg="CreateContainer within sandbox \"cb3142016b14a62f031d1f4083d98737089340959b10b3a747829ac770c4372d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7b90f0b4ba5039222293d30b6f4617611f994278e74caccccb4eebafc9eb1134\"" Jan 30 14:01:43.797073 containerd[1597]: time="2025-01-30T14:01:43.797018009Z" level=info msg="StartContainer for \"7b90f0b4ba5039222293d30b6f4617611f994278e74caccccb4eebafc9eb1134\"" Jan 30 14:01:43.873394 containerd[1597]: time="2025-01-30T14:01:43.873308817Z" level=info msg="StartContainer for \"7b90f0b4ba5039222293d30b6f4617611f994278e74caccccb4eebafc9eb1134\" returns successfully" Jan 30 14:01:43.901052 containerd[1597]: time="2025-01-30T14:01:43.900963482Z" level=info msg="shim disconnected" id=7b90f0b4ba5039222293d30b6f4617611f994278e74caccccb4eebafc9eb1134 namespace=k8s.io Jan 30 14:01:43.901488 containerd[1597]: time="2025-01-30T14:01:43.901372323Z" level=warning msg="cleaning up after shim disconnected" id=7b90f0b4ba5039222293d30b6f4617611f994278e74caccccb4eebafc9eb1134 namespace=k8s.io Jan 30 14:01:43.901488 containerd[1597]: time="2025-01-30T14:01:43.901394396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:43.917628 containerd[1597]: time="2025-01-30T14:01:43.917261409Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:01:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 14:01:44.462081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b90f0b4ba5039222293d30b6f4617611f994278e74caccccb4eebafc9eb1134-rootfs.mount: Deactivated successfully. 
Jan 30 14:01:44.763915 kubelet[2709]: E0130 14:01:44.763786 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:44.773552 containerd[1597]: time="2025-01-30T14:01:44.772759937Z" level=info msg="CreateContainer within sandbox \"cb3142016b14a62f031d1f4083d98737089340959b10b3a747829ac770c4372d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 14:01:44.799029 containerd[1597]: time="2025-01-30T14:01:44.797929717Z" level=info msg="CreateContainer within sandbox \"cb3142016b14a62f031d1f4083d98737089340959b10b3a747829ac770c4372d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a55dd475bf81670190412b39d26ba97ff1329e2399d502fe037dea54fd2dafd7\""
Jan 30 14:01:44.800593 containerd[1597]: time="2025-01-30T14:01:44.799225974Z" level=info msg="StartContainer for \"a55dd475bf81670190412b39d26ba97ff1329e2399d502fe037dea54fd2dafd7\""
Jan 30 14:01:44.883517 containerd[1597]: time="2025-01-30T14:01:44.883441719Z" level=info msg="StartContainer for \"a55dd475bf81670190412b39d26ba97ff1329e2399d502fe037dea54fd2dafd7\" returns successfully"
Jan 30 14:01:45.323375 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 14:01:45.329380 kubelet[2709]: E0130 14:01:45.328337 2709 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-xklv9" podUID="f15676f9-64ac-4845-8496-94230fe08576"
Jan 30 14:01:45.768699 kubelet[2709]: E0130 14:01:45.768601 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:46.773719 kubelet[2709]: E0130 14:01:46.773266 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:47.065957 systemd[1]: run-containerd-runc-k8s.io-a55dd475bf81670190412b39d26ba97ff1329e2399d502fe037dea54fd2dafd7-runc.DBswRw.mount: Deactivated successfully.
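The pod_workers.go error above is the pod-level face of the NotReady condition logged by setters.go at 14:01:42: until a CNI plugin initializes, kubelet reports NetworkReady=false and skips syncing pods that need pod networking (here coredns). A small Python sketch, assuming nothing beyond the condition JSON already shown in this log, of how such a condition object can be inspected; the helper name is mine:

    import json

    # Condition object with the exact shape logged by kubelet's setters.go above.
    condition = json.loads(
        '{"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-01-30T14:01:42Z",'
        '"lastTransitionTime":"2025-01-30T14:01:42Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false '
        'reason:NetworkPluginNotReady message:Network plugin returns error: '
        'cni plugin not initialized"}'
    )

    def node_is_ready(cond: dict) -> bool:
        # Normal workloads schedule only while the Ready condition is "True".
        return cond.get("type") == "Ready" and cond.get("status") == "True"

    print(node_is_ready(condition))  # False until the cilium-agent brings CNI up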
Jan 30 14:01:47.329092 kubelet[2709]: E0130 14:01:47.328745 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:47.775213 kubelet[2709]: E0130 14:01:47.775172 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:48.572300 systemd-networkd[1226]: lxc_health: Link UP
Jan 30 14:01:48.577417 systemd-networkd[1226]: lxc_health: Gained carrier
Jan 30 14:01:48.663901 kubelet[2709]: I0130 14:01:48.663837 2709 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ldwvq" podStartSLOduration=8.663816293 podStartE2EDuration="8.663816293s" podCreationTimestamp="2025-01-30 14:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:45.796944909 +0000 UTC m=+115.595438276" watchObservedRunningTime="2025-01-30 14:01:48.663816293 +0000 UTC m=+118.462309658"
Jan 30 14:01:48.779529 kubelet[2709]: E0130 14:01:48.779493 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:49.266647 systemd[1]: run-containerd-runc-k8s.io-a55dd475bf81670190412b39d26ba97ff1329e2399d502fe037dea54fd2dafd7-runc.EK0wnL.mount: Deactivated successfully.
Jan 30 14:01:49.780454 kubelet[2709]: E0130 14:01:49.780336 2709 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:50.327156 containerd[1597]: time="2025-01-30T14:01:50.327089320Z" level=info msg="StopPodSandbox for \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\""
Jan 30 14:01:50.330757 containerd[1597]: time="2025-01-30T14:01:50.327962969Z" level=info msg="TearDown network for sandbox \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\" successfully"
Jan 30 14:01:50.330757 containerd[1597]: time="2025-01-30T14:01:50.328752908Z" level=info msg="StopPodSandbox for \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\" returns successfully"
Jan 30 14:01:50.346160 containerd[1597]: time="2025-01-30T14:01:50.346060671Z" level=info msg="RemovePodSandbox for \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\""
Jan 30 14:01:50.350599 containerd[1597]: time="2025-01-30T14:01:50.350560567Z" level=info msg="Forcibly stopping sandbox \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\""
Jan 30 14:01:50.350960 containerd[1597]: time="2025-01-30T14:01:50.350926044Z" level=info msg="TearDown network for sandbox \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\" successfully"
Jan 30 14:01:50.356150 containerd[1597]: time="2025-01-30T14:01:50.356097296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
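The pod_startup_latency_tracker entry above is internally consistent: with both pull timestamps at the zero value (no image pull contributed), the reported podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. A quick check of that arithmetic, truncated to microseconds since Python's datetime cannot carry the log's nanosecond precision:

    from datetime import datetime, timezone

    # Timestamps copied from the kubelet entry above.
    created = datetime(2025, 1, 30, 14, 1, 40, 0, tzinfo=timezone.utc)
    observed = datetime(2025, 1, 30, 14, 1, 48, 663816, tzinfo=timezone.utc)

    # 14:01:48.663816 - 14:01:40 = 8.663816 s,
    # matching the logged podStartSLOduration=8.663816293.
    print((observed - created).total_seconds())  # 8.663816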
Jan 30 14:01:50.356548 containerd[1597]: time="2025-01-30T14:01:50.356389158Z" level=info msg="RemovePodSandbox \"aaf0e347bc4eb6b047a6196485e0fca0608d98c29cd3e67b38d6aabe9cfa5a78\" returns successfully"
Jan 30 14:01:50.357548 containerd[1597]: time="2025-01-30T14:01:50.357128984Z" level=info msg="StopPodSandbox for \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\""
Jan 30 14:01:50.357548 containerd[1597]: time="2025-01-30T14:01:50.357321520Z" level=info msg="TearDown network for sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" successfully"
Jan 30 14:01:50.357548 containerd[1597]: time="2025-01-30T14:01:50.357341806Z" level=info msg="StopPodSandbox for \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" returns successfully"
Jan 30 14:01:50.358418 containerd[1597]: time="2025-01-30T14:01:50.358271423Z" level=info msg="RemovePodSandbox for \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\""
Jan 30 14:01:50.358418 containerd[1597]: time="2025-01-30T14:01:50.358298964Z" level=info msg="Forcibly stopping sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\""
Jan 30 14:01:50.358418 containerd[1597]: time="2025-01-30T14:01:50.358354595Z" level=info msg="TearDown network for sandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" successfully"
Jan 30 14:01:50.360862 containerd[1597]: time="2025-01-30T14:01:50.360688108Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 14:01:50.360862 containerd[1597]: time="2025-01-30T14:01:50.360773681Z" level=info msg="RemovePodSandbox \"53c4d6fd3bbac7fad67d6ab53c5fa97512f0736be70bfd793d7587bf968c400a\" returns successfully"
Jan 30 14:01:50.438365 systemd-networkd[1226]: lxc_health: Gained IPv6LL
Jan 30 14:01:55.994278 sshd[4519]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:56.002060 systemd[1]: sshd@27-164.92.85.159:22-147.75.109.163:59678.service: Deactivated successfully.
Jan 30 14:01:56.007760 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 14:01:56.008663 systemd-logind[1563]: Session 28 logged out. Waiting for processes to exit.
Jan 30 14:01:56.010684 systemd-logind[1563]: Removed session 28.