Jan 29 11:09:29.967052 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 29 11:09:29.967094 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 29 11:09:29.967121 kernel: BIOS-provided physical RAM map:
Jan 29 11:09:29.967136 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 11:09:29.967150 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 11:09:29.967165 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 11:09:29.967184 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Jan 29 11:09:29.967201 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Jan 29 11:09:29.967216 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:09:29.967232 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 11:09:29.967252 kernel: NX (Execute Disable) protection: active
Jan 29 11:09:29.967267 kernel: APIC: Static calls initialized
Jan 29 11:09:29.967284 kernel: SMBIOS 2.8 present.
Jan 29 11:09:29.967300 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 29 11:09:29.967321 kernel: Hypervisor detected: KVM
Jan 29 11:09:29.967339 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:09:29.967366 kernel: kvm-clock: using sched offset of 4045426065 cycles
Jan 29 11:09:29.967388 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:09:29.967406 kernel: tsc: Detected 2294.608 MHz processor
Jan 29 11:09:29.967424 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:09:29.967443 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:09:29.967461 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Jan 29 11:09:29.967482 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 11:09:29.967505 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:09:29.967531 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:09:29.967553 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Jan 29 11:09:29.967571 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:09:29.967589 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:09:29.967607 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:09:29.967625 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 29 11:09:29.967643 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:09:29.967662 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:09:29.967680 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:09:29.967702 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:09:29.967720 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 29 11:09:29.967738 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 29 11:09:29.967759 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 29 11:09:29.967776 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 29 11:09:29.967794 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 29 11:09:29.967813 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 29 11:09:29.967841 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 29 11:09:29.967863 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 29 11:09:29.968616 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 29 11:09:29.968641 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 29 11:09:29.968660 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 29 11:09:29.968681 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Jan 29 11:09:29.968701 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Jan 29 11:09:29.968726 kernel: Zone ranges:
Jan 29 11:09:29.968745 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:09:29.968764 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Jan 29 11:09:29.968784 kernel: Normal empty
Jan 29 11:09:29.968803 kernel: Movable zone start for each node
Jan 29 11:09:29.968822 kernel: Early memory node ranges
Jan 29 11:09:29.968841 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 11:09:29.968860 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Jan 29 11:09:29.968893 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Jan 29 11:09:29.968912 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:09:29.968935 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 11:09:29.968955 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Jan 29 11:09:29.968974 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:09:29.968994 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:09:29.969017 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:09:29.969037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:09:29.969061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:09:29.969085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:09:29.969109 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:09:29.969140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:09:29.969163 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:09:29.969188 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:09:29.969211 kernel: TSC deadline timer available
Jan 29 11:09:29.969234 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 29 11:09:29.969259 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:09:29.969283 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 29 11:09:29.969319 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:09:29.969343 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:09:29.969373 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 29 11:09:29.969398 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 29 11:09:29.969421 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 29 11:09:29.969444 kernel: pcpu-alloc: [0] 0 1
Jan 29 11:09:29.969468 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 29 11:09:29.969495 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 29 11:09:29.969520 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:09:29.969543 kernel: random: crng init done
Jan 29 11:09:29.969573 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:09:29.969597 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 29 11:09:29.969620 kernel: Fallback order for Node 0: 0
Jan 29 11:09:29.969645 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Jan 29 11:09:29.969669 kernel: Policy zone: DMA32
Jan 29 11:09:29.969694 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:09:29.969718 kernel: Memory: 1969144K/2096600K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 127196K reserved, 0K cma-reserved)
Jan 29 11:09:29.969742 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:09:29.969766 kernel: Kernel/User page tables isolation: enabled
Jan 29 11:09:29.969795 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 29 11:09:29.969818 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:09:29.969842 kernel: Dynamic Preempt: voluntary
Jan 29 11:09:29.969865 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:09:29.969920 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:09:29.969945 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:09:29.969969 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:09:29.969992 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:09:29.970016 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:09:29.970047 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:09:29.970072 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:09:29.970095 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 29 11:09:29.970120 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:09:29.970143 kernel: Console: colour VGA+ 80x25
Jan 29 11:09:29.970167 kernel: printk: console [tty0] enabled
Jan 29 11:09:29.970191 kernel: printk: console [ttyS0] enabled
Jan 29 11:09:29.970208 kernel: ACPI: Core revision 20230628
Jan 29 11:09:29.970223 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:09:29.970249 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:09:29.970270 kernel: x2apic enabled
Jan 29 11:09:29.970289 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:09:29.970309 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:09:29.970328 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 29 11:09:29.970347 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Jan 29 11:09:29.970368 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 29 11:09:29.970387 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 29 11:09:29.970429 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:09:29.970449 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:09:29.970470 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:09:29.970500 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:09:29.970519 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 29 11:09:29.970556 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:09:29.970581 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:09:29.970602 kernel: MDS: Mitigation: Clear CPU buffers
Jan 29 11:09:29.970623 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 29 11:09:29.970653 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:09:29.970673 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:09:29.970695 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:09:29.970717 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:09:29.970742 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 29 11:09:29.970761 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:09:29.970785 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:09:29.970807 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:09:29.970835 kernel: landlock: Up and running.
Jan 29 11:09:29.970857 kernel: SELinux: Initializing.
Jan 29 11:09:29.970985 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 11:09:29.971009 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 29 11:09:29.971035 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 29 11:09:29.971058 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:09:29.971082 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:09:29.971103 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:09:29.971124 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 29 11:09:29.971150 kernel: signal: max sigframe size: 1776
Jan 29 11:09:29.971171 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:09:29.971193 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:09:29.971214 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 29 11:09:29.971235 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:09:29.971255 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:09:29.971276 kernel: .... node #0, CPUs: #1
Jan 29 11:09:29.971296 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:09:29.971319 kernel: smpboot: Max logical packages: 1
Jan 29 11:09:29.971350 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Jan 29 11:09:29.971376 kernel: devtmpfs: initialized
Jan 29 11:09:29.971402 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:09:29.971428 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:09:29.971454 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:09:29.971478 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:09:29.971504 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:09:29.971530 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:09:29.971556 kernel: audit: type=2000 audit(1738148969.571:1): state=initialized audit_enabled=0 res=1
Jan 29 11:09:29.971588 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:09:29.971614 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:09:29.971639 kernel: cpuidle: using governor menu
Jan 29 11:09:29.971665 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:09:29.971691 kernel: dca service started, version 1.12.1
Jan 29 11:09:29.971716 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:09:29.971742 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:09:29.971768 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:09:29.971793 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:09:29.971824 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:09:29.971850 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:09:29.971896 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:09:29.971923 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:09:29.971948 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:09:29.971974 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:09:29.971999 kernel: ACPI: Interpreter enabled
Jan 29 11:09:29.972024 kernel: ACPI: PM: (supports S0 S5)
Jan 29 11:09:29.972049 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:09:29.972081 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:09:29.972106 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:09:29.972131 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 29 11:09:29.972157 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:09:29.972550 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:09:29.972726 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 29 11:09:29.972925 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 29 11:09:29.972962 kernel: acpiphp: Slot [3] registered
Jan 29 11:09:29.972985 kernel: acpiphp: Slot [4] registered
Jan 29 11:09:29.973007 kernel: acpiphp: Slot [5] registered
Jan 29 11:09:29.973028 kernel: acpiphp: Slot [6] registered
Jan 29 11:09:29.973046 kernel: acpiphp: Slot [7] registered
Jan 29 11:09:29.973069 kernel: acpiphp: Slot [8] registered
Jan 29 11:09:29.973089 kernel: acpiphp: Slot [9] registered
Jan 29 11:09:29.973111 kernel: acpiphp: Slot [10] registered
Jan 29 11:09:29.973132 kernel: acpiphp: Slot [11] registered
Jan 29 11:09:29.973158 kernel: acpiphp: Slot [12] registered
Jan 29 11:09:29.973179 kernel: acpiphp: Slot [13] registered
Jan 29 11:09:29.973199 kernel: acpiphp: Slot [14] registered
Jan 29 11:09:29.973220 kernel: acpiphp: Slot [15] registered
Jan 29 11:09:29.973240 kernel: acpiphp: Slot [16] registered
Jan 29 11:09:29.973261 kernel: acpiphp: Slot [17] registered
Jan 29 11:09:29.973281 kernel: acpiphp: Slot [18] registered
Jan 29 11:09:29.973301 kernel: acpiphp: Slot [19] registered
Jan 29 11:09:29.973322 kernel: acpiphp: Slot [20] registered
Jan 29 11:09:29.973342 kernel: acpiphp: Slot [21] registered
Jan 29 11:09:29.973366 kernel: acpiphp: Slot [22] registered
Jan 29 11:09:29.973388 kernel: acpiphp: Slot [23] registered
Jan 29 11:09:29.973408 kernel: acpiphp: Slot [24] registered
Jan 29 11:09:29.973428 kernel: acpiphp: Slot [25] registered
Jan 29 11:09:29.973449 kernel: acpiphp: Slot [26] registered
Jan 29 11:09:29.973469 kernel: acpiphp: Slot [27] registered
Jan 29 11:09:29.973489 kernel: acpiphp: Slot [28] registered
Jan 29 11:09:29.973510 kernel: acpiphp: Slot [29] registered
Jan 29 11:09:29.973530 kernel: acpiphp: Slot [30] registered
Jan 29 11:09:29.973554 kernel: acpiphp: Slot [31] registered
Jan 29 11:09:29.973575 kernel: PCI host bridge to bus 0000:00
Jan 29 11:09:29.973731 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:09:29.974832 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:09:29.975067 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:09:29.975217 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 29 11:09:29.975374 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 29 11:09:29.975532 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:09:29.975740 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 29 11:09:29.975919 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 29 11:09:29.976095 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 29 11:09:29.976303 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 29 11:09:29.976527 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 29 11:09:29.976744 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 29 11:09:29.976992 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 29 11:09:29.977214 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 29 11:09:29.977424 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 29 11:09:29.977607 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 29 11:09:29.977799 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 29 11:09:29.977973 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 29 11:09:29.978129 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 29 11:09:29.978292 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 29 11:09:29.978496 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 29 11:09:29.978736 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 29 11:09:29.979006 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 29 11:09:29.979234 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 29 11:09:29.979457 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:09:29.979688 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:09:29.979901 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 29 11:09:29.980085 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 29 11:09:29.980244 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 29 11:09:29.980403 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:09:29.980637 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 29 11:09:29.980865 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 29 11:09:29.981082 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 29 11:09:29.981298 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 29 11:09:29.981452 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 29 11:09:29.981599 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 29 11:09:29.981745 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 29 11:09:29.981920 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:09:29.982091 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 11:09:29.982255 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 29 11:09:29.983996 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 29 11:09:29.984207 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:09:29.984361 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 29 11:09:29.984511 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 29 11:09:29.984658 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 29 11:09:29.984855 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 29 11:09:29.985045 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 29 11:09:29.985217 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 29 11:09:29.985246 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:09:29.985267 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:09:29.985288 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:09:29.985309 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:09:29.985330 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 29 11:09:29.985356 kernel: iommu: Default domain type: Translated
Jan 29 11:09:29.985377 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:09:29.985398 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:09:29.985419 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:09:29.985439 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 11:09:29.985460 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Jan 29 11:09:29.985610 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 29 11:09:29.985757 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 29 11:09:29.988060 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:09:29.988099 kernel: vgaarb: loaded
Jan 29 11:09:29.988116 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:09:29.988132 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:09:29.988155 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:09:29.988189 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:09:29.988211 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:09:29.988232 kernel: pnp: PnP ACPI init
Jan 29 11:09:29.988252 kernel: pnp: PnP ACPI: found 4 devices
Jan 29 11:09:29.988282 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:09:29.988305 kernel: NET: Registered PF_INET protocol family
Jan 29 11:09:29.988325 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:09:29.988348 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 29 11:09:29.988369 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:09:29.988390 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 29 11:09:29.988412 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 29 11:09:29.988429 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 29 11:09:29.988447 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 11:09:29.988471 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 29 11:09:29.988492 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:09:29.988506 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:09:29.988683 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:09:29.988826 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:09:29.988987 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:09:29.989120 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 29 11:09:29.989248 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 29 11:09:29.989410 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 29 11:09:29.989562 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 29 11:09:29.989589 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 29 11:09:29.989734 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 29182 usecs
Jan 29 11:09:29.989760 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:09:29.989781 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 29 11:09:29.989802 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Jan 29 11:09:29.989823 kernel: Initialise system trusted keyrings
Jan 29 11:09:29.989844 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 29 11:09:29.989869 kernel: Key type asymmetric registered
Jan 29 11:09:29.991031 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:09:29.991044 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:09:29.991055 kernel: io scheduler mq-deadline registered
Jan 29 11:09:29.991065 kernel: io scheduler kyber registered
Jan 29 11:09:29.991074 kernel: io scheduler bfq registered
Jan 29 11:09:29.991084 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:09:29.991095 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 29 11:09:29.991104 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 29 11:09:29.991121 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 29 11:09:29.991130 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:09:29.991140 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:09:29.991150 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:09:29.991160 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:09:29.991169 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:09:29.991318 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 29 11:09:29.991333 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:09:29.991432 kernel: rtc_cmos 00:03: registered as rtc0
Jan 29 11:09:29.991525 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T11:09:29 UTC (1738148969)
Jan 29 11:09:29.991617 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 29 11:09:29.991630 kernel: intel_pstate: CPU model not supported
Jan 29 11:09:29.991639 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:09:29.991649 kernel: Segment Routing with IPv6
Jan 29 11:09:29.991658 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:09:29.991668 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:09:29.991681 kernel: Key type dns_resolver registered
Jan 29 11:09:29.991690 kernel: IPI shorthand broadcast: enabled
Jan 29 11:09:29.991700 kernel: sched_clock: Marking stable (922002325, 166737099)->(1228958545, -140219121)
Jan 29 11:09:29.991710 kernel: registered taskstats version 1
Jan 29 11:09:29.991719 kernel: Loading compiled-in X.509 certificates
Jan 29 11:09:29.991729 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 29 11:09:29.991738 kernel: Key type .fscrypt registered
Jan 29 11:09:29.991747 kernel: Key type fscrypt-provisioning registered
Jan 29 11:09:29.991756 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:09:29.991769 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:09:29.991779 kernel: ima: No architecture policies found
Jan 29 11:09:29.991788 kernel: clk: Disabling unused clocks
Jan 29 11:09:29.991798 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 29 11:09:29.991808 kernel: Write protecting the kernel read-only data: 38912k
Jan 29 11:09:29.991836 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 29 11:09:29.991849 kernel: Run /init as init process
Jan 29 11:09:29.991859 kernel: with arguments:
Jan 29 11:09:29.991869 kernel: /init
Jan 29 11:09:29.991894 kernel: with environment:
Jan 29 11:09:29.991904 kernel: HOME=/
Jan 29 11:09:29.991913 kernel: TERM=linux
Jan 29 11:09:29.991923 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:09:29.991936 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:09:29.991949 systemd[1]: Detected virtualization kvm.
Jan 29 11:09:29.991960 systemd[1]: Detected architecture x86-64.
Jan 29 11:09:29.991973 systemd[1]: Running in initrd.
Jan 29 11:09:29.991986 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:09:29.991996 systemd[1]: Hostname set to <localhost>.
Jan 29 11:09:29.992007 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:09:29.992017 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:09:29.992028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:09:29.992038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:09:29.992049 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:09:29.992059 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:09:29.992072 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:09:29.992083 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:09:29.992095 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:09:29.992106 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:09:29.992117 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:09:29.992127 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:09:29.992137 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:09:29.992151 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:09:29.992161 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:09:29.992175 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:09:29.992186 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:09:29.992197 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:09:29.992210 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:09:29.992221 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:09:29.992231 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:09:29.992242 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:09:29.992252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:09:29.992262 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:09:29.992273 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:09:29.992283 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:09:29.992294 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:09:29.992307 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:09:29.992318 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:09:29.992328 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:09:29.992339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:09:29.992377 systemd-journald[182]: Collecting audit messages is disabled.
Jan 29 11:09:29.992406 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:09:29.992417 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:09:29.992428 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:09:29.992439 systemd-journald[182]: Journal started
Jan 29 11:09:29.992466 systemd-journald[182]: Runtime Journal (/run/log/journal/655f88cbfc3f4c63ab9673e763ce638c) is 4.9M, max 39.3M, 34.4M free.
Jan 29 11:09:29.990268 systemd-modules-load[183]: Inserted module 'overlay'
Jan 29 11:09:30.051188 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:09:30.051233 kernel: Bridge firewalling registered
Jan 29 11:09:30.022268 systemd-modules-load[183]: Inserted module 'br_netfilter'
Jan 29 11:09:30.054265 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:09:30.056107 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:09:30.057024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:09:30.072085 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:09:30.074972 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:09:30.081088 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:09:30.090388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:09:30.101466 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:09:30.103240 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:09:30.105014 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:09:30.106602 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:09:30.111054 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:09:30.115043 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:09:30.119334 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:09:30.142126 dracut-cmdline[214]: dracut-dracut-053
Jan 29 11:09:30.148138 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:09:30.150125 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 29 11:09:30.169190 systemd-resolved[215]: Positive Trust Anchors:
Jan 29 11:09:30.169208 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:09:30.169247 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:09:30.172353 systemd-resolved[215]: Defaulting to hostname 'linux'.
Jan 29 11:09:30.173640 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:09:30.176594 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:09:30.245943 kernel: SCSI subsystem initialized
Jan 29 11:09:30.256921 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:09:30.271909 kernel: iscsi: registered transport (tcp)
Jan 29 11:09:30.295921 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:09:30.296013 kernel: QLogic iSCSI HBA Driver
Jan 29 11:09:30.345925 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:09:30.352117 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:09:30.381235 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:09:30.381314 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:09:30.383221 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:09:30.428938 kernel: raid6: avx2x4 gen() 17337 MB/s
Jan 29 11:09:30.446943 kernel: raid6: avx2x2 gen() 17459 MB/s
Jan 29 11:09:30.465500 kernel: raid6: avx2x1 gen() 13171 MB/s
Jan 29 11:09:30.465593 kernel: raid6: using algorithm avx2x2 gen() 17459 MB/s
Jan 29 11:09:30.484832 kernel: raid6: .... xor() 18126 MB/s, rmw enabled
Jan 29 11:09:30.484974 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 11:09:30.509926 kernel: xor: automatically using best checksumming function avx
Jan 29 11:09:30.673919 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:09:30.688830 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:09:30.694080 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:09:30.728867 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 29 11:09:30.736242 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:09:30.746101 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:09:30.764447 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 29 11:09:30.805009 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:09:30.811114 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:09:30.898424 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:09:30.905199 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:09:30.940536 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:09:30.944814 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:09:30.946821 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:09:30.948239 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:09:30.957117 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:09:30.979850 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:09:31.010910 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Jan 29 11:09:31.097408 kernel: scsi host0: Virtio SCSI HBA
Jan 29 11:09:31.097586 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 29 11:09:31.101045 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 11:09:31.101079 kernel: libata version 3.00 loaded.
Jan 29 11:09:31.101105 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:09:31.101131 kernel: GPT:9289727 != 125829119
Jan 29 11:09:31.101164 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:09:31.101189 kernel: GPT:9289727 != 125829119
Jan 29 11:09:31.101214 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:09:31.101239 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:09:31.101265 kernel: ata_piix 0000:00:01.1: version 2.13
Jan 29 11:09:31.130233 kernel: scsi host1: ata_piix
Jan 29 11:09:31.130431 kernel: scsi host2: ata_piix
Jan 29 11:09:31.130621 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Jan 29 11:09:31.130659 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Jan 29 11:09:31.130685 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Jan 29 11:09:31.130849 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 11:09:31.131953 kernel: ACPI: bus type USB registered
Jan 29 11:09:31.131986 kernel: usbcore: registered new interface driver usbfs
Jan 29 11:09:31.132012 kernel: usbcore: registered new interface driver hub
Jan 29 11:09:31.132038 kernel: usbcore: registered new device driver usb
Jan 29 11:09:31.132064 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB)
Jan 29 11:09:31.132241 kernel: AES CTR mode by8 optimization enabled
Jan 29 11:09:31.056722 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:09:31.059015 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:09:31.059830 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:09:31.060487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:09:31.060732 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:09:31.061631 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:09:31.067237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:09:31.171079 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:09:31.178118 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:09:31.215233 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:09:31.300306 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 29 11:09:31.314445 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 29 11:09:31.314672 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 29 11:09:31.314800 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (450)
Jan 29 11:09:31.314814 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Jan 29 11:09:31.315109 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (454)
Jan 29 11:09:31.315124 kernel: hub 1-0:1.0: USB hub found
Jan 29 11:09:31.315270 kernel: hub 1-0:1.0: 2 ports detected
Jan 29 11:09:31.313781 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:09:31.321913 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:09:31.330417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:09:31.334810 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:09:31.335466 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:09:31.343106 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:09:31.359746 disk-uuid[549]: Primary Header is updated.
Jan 29 11:09:31.359746 disk-uuid[549]: Secondary Entries is updated.
Jan 29 11:09:31.359746 disk-uuid[549]: Secondary Header is updated.
Jan 29 11:09:31.366969 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:09:31.371903 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:09:32.378151 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:09:32.378253 disk-uuid[550]: The operation has completed successfully.
Jan 29 11:09:32.439500 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:09:32.439665 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:09:32.455171 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:09:32.471280 sh[561]: Success
Jan 29 11:09:32.489262 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 11:09:32.567829 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:09:32.582431 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:09:32.584037 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:09:32.609362 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 29 11:09:32.609444 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:09:32.613696 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:09:32.613775 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:09:32.616660 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:09:32.626126 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:09:32.627539 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:09:32.634127 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:09:32.637516 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:09:32.655586 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 29 11:09:32.655661 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:09:32.655689 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:09:32.661917 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:09:32.677557 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:09:32.680205 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 29 11:09:32.690396 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:09:32.697670 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:09:32.827207 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:09:32.835200 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:09:32.844307 ignition[651]: Ignition 2.20.0
Jan 29 11:09:32.844320 ignition[651]: Stage: fetch-offline
Jan 29 11:09:32.847134 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:09:32.844358 ignition[651]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:09:32.844367 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:09:32.844477 ignition[651]: parsed url from cmdline: ""
Jan 29 11:09:32.844481 ignition[651]: no config URL provided
Jan 29 11:09:32.844486 ignition[651]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:09:32.844495 ignition[651]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:09:32.844501 ignition[651]: failed to fetch config: resource requires networking
Jan 29 11:09:32.844692 ignition[651]: Ignition finished successfully
Jan 29 11:09:32.870018 systemd-networkd[751]: lo: Link UP
Jan 29 11:09:32.870033 systemd-networkd[751]: lo: Gained carrier
Jan 29 11:09:32.872496 systemd-networkd[751]: Enumeration completed
Jan 29 11:09:32.872626 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:09:32.873418 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 29 11:09:32.873422 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Jan 29 11:09:32.874359 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:09:32.874364 systemd-networkd[751]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:09:32.875220 systemd-networkd[751]: eth0: Link UP
Jan 29 11:09:32.875224 systemd-networkd[751]: eth0: Gained carrier
Jan 29 11:09:32.875233 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Jan 29 11:09:32.875659 systemd[1]: Reached target network.target - Network.
Jan 29 11:09:32.880206 systemd-networkd[751]: eth1: Link UP
Jan 29 11:09:32.880211 systemd-networkd[751]: eth1: Gained carrier
Jan 29 11:09:32.880223 systemd-networkd[751]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:09:32.885091 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 11:09:32.891028 systemd-networkd[751]: eth1: DHCPv4 address 10.124.0.11/20 acquired from 169.254.169.253
Jan 29 11:09:32.898013 systemd-networkd[751]: eth0: DHCPv4 address 143.110.233.113/20, gateway 143.110.224.1 acquired from 169.254.169.253
Jan 29 11:09:32.905751 ignition[755]: Ignition 2.20.0
Jan 29 11:09:32.905764 ignition[755]: Stage: fetch
Jan 29 11:09:32.906008 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:09:32.906020 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:09:32.906127 ignition[755]: parsed url from cmdline: ""
Jan 29 11:09:32.906131 ignition[755]: no config URL provided
Jan 29 11:09:32.906137 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:09:32.906146 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:09:32.906170 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Jan 29 11:09:32.922002 ignition[755]: GET result: OK
Jan 29 11:09:32.922159 ignition[755]: parsing config with SHA512: f16d9af05e370b68830d6d961b193d5e7448581f52bef3f01801ebed35e6f3fc4d0ade7b2db5d5d8ec95745600a68139c99dba189fe33115defd803ece6a4dda
Jan 29 11:09:32.930033 unknown[755]: fetched base config from "system"
Jan 29 11:09:32.930052 unknown[755]: fetched base config from "system"
Jan 29 11:09:32.931647 ignition[755]: fetch: fetch complete
Jan 29 11:09:32.930063 unknown[755]: fetched user config from "digitalocean"
Jan 29 11:09:32.931658 ignition[755]: fetch: fetch passed
Jan 29 11:09:32.933861 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 11:09:32.931767 ignition[755]: Ignition finished successfully
Jan 29 11:09:32.941154 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:09:32.982177 ignition[763]: Ignition 2.20.0
Jan 29 11:09:32.982200 ignition[763]: Stage: kargs
Jan 29 11:09:32.982581 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:09:32.982605 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:09:32.985840 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:09:32.984125 ignition[763]: kargs: kargs passed
Jan 29 11:09:32.984210 ignition[763]: Ignition finished successfully
Jan 29 11:09:32.997187 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:09:33.014796 ignition[769]: Ignition 2.20.0
Jan 29 11:09:33.014813 ignition[769]: Stage: disks
Jan 29 11:09:33.016123 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:09:33.016139 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:09:33.017605 ignition[769]: disks: disks passed
Jan 29 11:09:33.018867 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:09:33.017665 ignition[769]: Ignition finished successfully
Jan 29 11:09:33.024519 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:09:33.025678 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:09:33.026831 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:09:33.028183 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:09:33.029543 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:09:33.036096 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:09:33.055211 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:09:33.058671 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:09:33.063038 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:09:33.175901 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 29 11:09:33.176509 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:09:33.177603 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:09:33.183992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:09:33.196113 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:09:33.200101 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Jan 29 11:09:33.204067 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 11:09:33.204855 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:09:33.204908 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:09:33.209369 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:09:33.217905 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (785)
Jan 29 11:09:33.224984 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 29 11:09:33.225050 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:09:33.225064 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:09:33.230005 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:09:33.233564 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:09:33.238081 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:09:33.291296 coreos-metadata[788]: Jan 29 11:09:33.291 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 29 11:09:33.299677 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:09:33.305763 coreos-metadata[788]: Jan 29 11:09:33.305 INFO Fetch successful
Jan 29 11:09:33.311973 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:09:33.313317 coreos-metadata[787]: Jan 29 11:09:33.310 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 29 11:09:33.315976 coreos-metadata[788]: Jan 29 11:09:33.312 INFO wrote hostname ci-4186.1.0-f-d3e806da58 to /sysroot/etc/hostname
Jan 29 11:09:33.314747 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:09:33.319100 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:09:33.322977 coreos-metadata[787]: Jan 29 11:09:33.322 INFO Fetch successful
Jan 29 11:09:33.324383 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:09:33.335454 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Jan 29 11:09:33.335556 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Jan 29 11:09:33.435863 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:09:33.448110 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:09:33.453367 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:09:33.463890 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 29 11:09:33.492182 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:09:33.499469 ignition[907]: INFO : Ignition 2.20.0
Jan 29 11:09:33.499469 ignition[907]: INFO : Stage: mount
Jan 29 11:09:33.500959 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:09:33.500959 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:09:33.503216 ignition[907]: INFO : mount: mount passed
Jan 29 11:09:33.503216 ignition[907]: INFO : Ignition finished successfully
Jan 29 11:09:33.502485 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:09:33.511071 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:09:33.607439 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:09:33.614186 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:09:33.628909 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (918)
Jan 29 11:09:33.634228 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 29 11:09:33.634290 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:09:33.634304 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:09:33.641134 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:09:33.642718 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:09:33.675200 ignition[934]: INFO : Ignition 2.20.0
Jan 29 11:09:33.676298 ignition[934]: INFO : Stage: files
Jan 29 11:09:33.678243 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:09:33.678243 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:09:33.678243 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:09:33.680677 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:09:33.681562 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:09:33.686262 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:09:33.688015 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:09:33.688015 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:09:33.686805 unknown[934]: wrote ssh authorized keys file for user: core
Jan 29 11:09:33.692722 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:09:33.692722 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 11:09:33.835808 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:09:33.945799 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 11:09:33.945799 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:09:33.949189 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 11:09:34.400721 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 11:09:34.575158 systemd-networkd[751]: eth0: Gained IPv6LL
Jan 29 11:09:34.633895 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 11:09:34.633895 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 11:09:34.636205 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:09:34.636205 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:09:34.636205 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 11:09:34.636205 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:09:34.636205 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:09:34.636205 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:09:34.636205 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:09:34.636205 ignition[934]: INFO : files: files passed
Jan 29 11:09:34.636205 ignition[934]: INFO : Ignition finished successfully
Jan 29 11:09:34.637729 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:09:34.640331 systemd-networkd[751]: eth1: Gained IPv6LL
Jan 29 11:09:34.649076 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:09:34.653550 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:09:34.654402 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:09:34.654536 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:09:34.678343 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:09:34.678343 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:09:34.680960 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:09:34.682984 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:09:34.684175 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:09:34.691125 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:09:34.729847 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:09:34.730045 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:09:34.731674 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:09:34.732771 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:09:34.734249 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:09:34.747159 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:09:34.764922 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:09:34.771160 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:09:34.786413 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:09:34.787289 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:09:34.788616 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:09:34.789920 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:09:34.790153 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:09:34.791454 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:09:34.792325 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:09:34.793559 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:09:34.794771 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:09:34.795805 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:09:34.797040 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:09:34.798316 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:09:34.799688 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:09:34.800734 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:09:34.801940 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:09:34.803110 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:09:34.803329 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:09:34.804534 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:09:34.805275 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:09:34.806381 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:09:34.806544 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:09:34.807693 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:09:34.807814 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:09:34.809822 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:09:34.809955 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:09:34.810716 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:09:34.810815 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:09:34.811775 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 11:09:34.811871 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:09:34.821510 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:09:34.822129 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:09:34.822296 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:09:34.824090 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:09:34.826210 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:09:34.826404 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:09:34.829082 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:09:34.829234 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:09:34.839327 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:09:34.839426 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:09:34.854697 ignition[988]: INFO : Ignition 2.20.0
Jan 29 11:09:34.856051 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:09:34.861965 ignition[988]: INFO : Stage: umount
Jan 29 11:09:34.861965 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:09:34.861965 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 29 11:09:34.861965 ignition[988]: INFO : umount: umount passed
Jan 29 11:09:34.861965 ignition[988]: INFO : Ignition finished successfully
Jan 29 11:09:34.862367 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:09:34.862559 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:09:34.864947 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:09:34.865109 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:09:34.877631 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:09:34.877727 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:09:34.878411 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 11:09:34.878472 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 11:09:34.879577 systemd[1]: Stopped target network.target - Network.
Jan 29 11:09:34.880665 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:09:34.880734 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:09:34.881840 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:09:34.882922 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:09:34.886942 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:09:34.888269 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:09:34.889346 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:09:34.890672 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:09:34.890724 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:09:34.892083 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:09:34.892122 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:09:34.893110 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:09:34.893163 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:09:34.894164 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:09:34.894206 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:09:34.926647 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:09:34.927905 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:09:34.931940 systemd-networkd[751]: eth0: DHCPv6 lease lost
Jan 29 11:09:34.935439 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:09:34.935788 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:09:34.936987 systemd-networkd[751]: eth1: DHCPv6 lease lost
Jan 29 11:09:34.939441 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:09:34.940011 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:09:34.942055 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:09:34.942167 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:09:34.952455 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:09:34.953058 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:09:34.953123 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:09:34.953790 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:09:34.953832 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:09:34.955056 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:09:34.955106 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:09:34.956281 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:09:34.956325 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:09:34.958365 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:09:34.991284 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:09:34.991423 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:09:34.995237 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:09:34.995398 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:09:34.996560 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:09:34.996783 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:09:34.999796 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:09:35.000009 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:09:35.001557 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:09:35.001598 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:09:35.002768 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:09:35.002816 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:09:35.004738 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:09:35.004793 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:09:35.006039 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:09:35.006108 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:09:35.007301 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:09:35.007364 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:09:35.014038 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:09:35.014673 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:09:35.014730 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:09:35.018384 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:09:35.018443 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:09:35.023017 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:09:35.023135 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:09:35.024609 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:09:35.030071 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:09:35.040907 systemd[1]: Switching root.
Jan 29 11:09:35.134924 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:09:35.135013 systemd-journald[182]: Journal stopped
Jan 29 11:09:36.504297 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:09:36.504376 kernel: SELinux: policy capability open_perms=1
Jan 29 11:09:36.504391 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:09:36.504404 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:09:36.504422 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:09:36.504439 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:09:36.504457 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:09:36.504470 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:09:36.504487 kernel: audit: type=1403 audit(1738148975.341:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:09:36.504506 systemd[1]: Successfully loaded SELinux policy in 58.272ms.
Jan 29 11:09:36.504534 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.491ms.
Jan 29 11:09:36.504551 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:09:36.504565 systemd[1]: Detected virtualization kvm.
Jan 29 11:09:36.504578 systemd[1]: Detected architecture x86-64.
Jan 29 11:09:36.504594 systemd[1]: Detected first boot.
Jan 29 11:09:36.504608 systemd[1]: Hostname set to <ci-4186.1.0-f-d3e806da58>.
Jan 29 11:09:36.504622 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:09:36.504636 zram_generator::config[1030]: No configuration found.
Jan 29 11:09:36.504650 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:09:36.504663 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:09:36.504676 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:09:36.504690 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:09:36.504711 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:09:36.504724 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:09:36.504738 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:09:36.504751 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:09:36.504764 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:09:36.504778 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:09:36.504791 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:09:36.504805 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:09:36.504821 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:09:36.504835 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:09:36.504848 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:09:36.504862 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:09:36.507658 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:09:36.507695 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:09:36.507710 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 11:09:36.507724 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:09:36.507739 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:09:36.507759 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:09:36.507774 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:09:36.507788 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:09:36.507801 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:09:36.507815 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:09:36.507829 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:09:36.507843 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:09:36.507860 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:09:36.507884 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:09:36.507898 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:09:36.507912 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:09:36.507925 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:09:36.507938 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:09:36.507953 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:09:36.507966 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:09:36.507979 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:09:36.507996 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:09:36.508009 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:09:36.508023 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:09:36.508037 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:09:36.508051 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:09:36.508064 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:09:36.508078 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:09:36.508091 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:09:36.508107 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:09:36.508121 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:09:36.508135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:09:36.508149 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:09:36.508163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:09:36.508177 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:09:36.508190 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:09:36.508204 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:09:36.508218 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:09:36.508235 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:09:36.508249 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:09:36.508262 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:09:36.508277 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:09:36.508291 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:09:36.508304 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:09:36.508318 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:09:36.508331 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:09:36.508345 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:09:36.508362 systemd[1]: Stopped verity-setup.service.
Jan 29 11:09:36.508376 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 11:09:36.508389 kernel: fuse: init (API version 7.39)
Jan 29 11:09:36.508404 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:09:36.508417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:09:36.508431 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:09:36.508444 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:09:36.508458 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:09:36.508474 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:09:36.508488 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:09:36.508501 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:09:36.508515 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:09:36.508529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:09:36.508545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:09:36.508558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:09:36.508572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:09:36.508586 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:09:36.508600 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:09:36.508616 kernel: loop: module loaded
Jan 29 11:09:36.508629 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:09:36.508673 systemd-journald[1106]: Collecting audit messages is disabled.
Jan 29 11:09:36.508701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:09:36.508715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:09:36.508729 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:09:36.508743 systemd-journald[1106]: Journal started
Jan 29 11:09:36.508777 systemd-journald[1106]: Runtime Journal (/run/log/journal/655f88cbfc3f4c63ab9673e763ce638c) is 4.9M, max 39.3M, 34.4M free.
Jan 29 11:09:36.124747 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:09:36.142498 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:09:36.142951 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:09:36.513534 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:09:36.512966 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:09:36.515034 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:09:36.527853 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:09:36.538991 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:09:36.558155 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:09:36.561012 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:09:36.561061 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:09:36.564479 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:09:36.571765 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:09:36.571919 kernel: ACPI: bus type drm_connector registered
Jan 29 11:09:36.574071 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:09:36.574829 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:09:36.582079 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:09:36.587403 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:09:36.589109 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:09:36.597146 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:09:36.597827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:09:36.599027 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:09:36.603052 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:09:36.608148 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:09:36.612759 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:09:36.614086 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:09:36.615975 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:09:36.617240 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:09:36.619222 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:09:36.622003 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:09:36.646172 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:09:36.659480 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:09:36.662603 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:09:36.672947 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:09:36.686064 systemd-journald[1106]: Time spent on flushing to /var/log/journal/655f88cbfc3f4c63ab9673e763ce638c is 73.244ms for 991 entries.
Jan 29 11:09:36.686064 systemd-journald[1106]: System Journal (/var/log/journal/655f88cbfc3f4c63ab9673e763ce638c) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:09:36.785665 systemd-journald[1106]: Received client request to flush runtime journal.
Jan 29 11:09:36.785728 kernel: loop0: detected capacity change from 0 to 210664
Jan 29 11:09:36.785755 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:09:36.712899 udevadm[1156]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 11:09:36.754838 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:09:36.762600 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:09:36.763368 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:09:36.788189 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:09:36.797280 kernel: loop1: detected capacity change from 0 to 8
Jan 29 11:09:36.805685 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:09:36.817033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:09:36.828225 kernel: loop2: detected capacity change from 0 to 138184
Jan 29 11:09:36.869530 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jan 29 11:09:36.869553 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jan 29 11:09:36.881141 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:09:36.909959 kernel: loop3: detected capacity change from 0 to 141000
Jan 29 11:09:36.986211 kernel: loop4: detected capacity change from 0 to 210664
Jan 29 11:09:37.007135 kernel: loop5: detected capacity change from 0 to 8
Jan 29 11:09:37.019707 kernel: loop6: detected capacity change from 0 to 138184
Jan 29 11:09:37.049051 kernel: loop7: detected capacity change from 0 to 141000
Jan 29 11:09:37.070964 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 29 11:09:37.072566 (sd-merge)[1176]: Merged extensions into '/usr'.
Jan 29 11:09:37.085872 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:09:37.085904 systemd[1]: Reloading...
Jan 29 11:09:37.267567 ldconfig[1144]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:09:37.270052 zram_generator::config[1202]: No configuration found.
Jan 29 11:09:37.415547 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:09:37.468780 systemd[1]: Reloading finished in 382 ms.
Jan 29 11:09:37.495728 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:09:37.500153 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:09:37.509068 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:09:37.512175 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:09:37.534055 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:09:37.534105 systemd[1]: Reloading...
Jan 29 11:09:37.576015 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:09:37.576347 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:09:37.579485 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:09:37.579917 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 29 11:09:37.579999 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 29 11:09:37.587598 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:09:37.587614 systemd-tmpfiles[1246]: Skipping /boot
Jan 29 11:09:37.610864 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:09:37.611930 systemd-tmpfiles[1246]: Skipping /boot
Jan 29 11:09:37.666900 zram_generator::config[1272]: No configuration found.
Jan 29 11:09:37.809921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:09:37.876380 systemd[1]: Reloading finished in 341 ms.
Jan 29 11:09:37.892198 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:09:37.893482 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:09:37.909100 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:09:37.913421 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:09:37.919137 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:09:37.921784 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:09:37.927084 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:09:37.937224 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:09:37.946307 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:37.946523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:09:37.955179 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:09:37.965174 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:09:37.972184 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:09:37.972979 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:09:37.981160 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:09:37.982082 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:37.985186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:09:37.985364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 29 11:09:37.994739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:37.995115 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:09:38.005168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:09:38.006201 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:09:38.006340 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:38.008519 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:09:38.010496 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:09:38.013823 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Jan 29 11:09:38.028127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:09:38.030065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:09:38.034609 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:38.035133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:09:38.043155 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:09:38.045451 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:09:38.045716 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 29 11:09:38.046082 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:09:38.046244 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:38.048240 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:09:38.049997 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:09:38.052713 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:09:38.052959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:09:38.055619 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:09:38.064109 systemd[1]: Finished ensure-sysext.service. Jan 29 11:09:38.102135 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:09:38.103148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:09:38.112101 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:09:38.127291 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:09:38.127464 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:09:38.135851 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:09:38.145692 augenrules[1376]: No rules Jan 29 11:09:38.146080 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:09:38.147515 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:09:38.148480 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 29 11:09:38.175929 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1360) Jan 29 11:09:38.199483 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:09:38.202665 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:09:38.207739 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:09:38.296012 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 29 11:09:38.297942 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:38.298104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:09:38.305121 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:09:38.311082 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:09:38.313157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:09:38.313832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:09:38.313901 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:09:38.313918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:09:38.316425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:09:38.316576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 29 11:09:38.348305 systemd-networkd[1361]: lo: Link UP Jan 29 11:09:38.348912 systemd-networkd[1361]: lo: Gained carrier Jan 29 11:09:38.353653 systemd-networkd[1361]: Enumeration completed Jan 29 11:09:38.356249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:09:38.356425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:09:38.357920 kernel: ISO 9660 Extensions: RRIP_1991A Jan 29 11:09:38.357116 systemd-networkd[1361]: eth0: Configuring with /run/systemd/network/10-2e:02:95:f5:78:8e.network. Jan 29 11:09:38.357810 systemd-networkd[1361]: eth1: Configuring with /run/systemd/network/10-8e:85:09:c1:d5:7a.network. Jan 29 11:09:38.360019 systemd-networkd[1361]: eth0: Link UP Jan 29 11:09:38.360028 systemd-networkd[1361]: eth0: Gained carrier Jan 29 11:09:38.361829 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:09:38.364202 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 29 11:09:38.366387 systemd-networkd[1361]: eth1: Link UP Jan 29 11:09:38.366398 systemd-networkd[1361]: eth1: Gained carrier Jan 29 11:09:38.378181 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:09:38.378859 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:09:38.379243 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:09:38.380024 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:09:38.383171 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:09:38.385315 systemd-resolved[1320]: Positive Trust Anchors: Jan 29 11:09:38.385496 systemd-resolved[1320]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:09:38.385536 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:09:38.391112 systemd-resolved[1320]: Using system hostname 'ci-4186.1.0-f-d3e806da58'. Jan 29 11:09:38.393931 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:09:38.395058 systemd[1]: Reached target network.target - Network. Jan 29 11:09:38.396011 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:09:38.404529 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:09:38.407158 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:09:38.444995 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:09:38.473908 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:09:38.476897 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 11:09:38.479486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:09:38.500066 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:09:38.504112 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:09:38.531197 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 29 11:09:39.692780 systemd-resolved[1320]: Clock change detected. Flushing caches. Jan 29 11:09:39.692857 systemd-timesyncd[1362]: Contacted time server 208.69.120.241:123 (0.flatcar.pool.ntp.org). Jan 29 11:09:39.692916 systemd-timesyncd[1362]: Initial clock synchronization to Wed 2025-01-29 11:09:39.692724 UTC. Jan 29 11:09:39.705811 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:09:39.715101 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 11:09:39.715184 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 11:09:39.729104 kernel: Console: switching to colour dummy device 80x25 Jan 29 11:09:39.730493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:09:39.735496 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 11:09:39.735597 kernel: [drm] features: -context_init Jan 29 11:09:39.742099 kernel: [drm] number of scanouts: 1 Jan 29 11:09:39.745099 kernel: [drm] number of cap sets: 0 Jan 29 11:09:39.753793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:09:39.757047 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 11:09:39.755361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:09:39.762382 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:09:39.771327 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 11:09:39.771445 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 11:09:39.781103 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 11:09:39.805034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:09:39.805281 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:09:39.854689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 11:09:39.906870 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:09:39.926785 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:09:39.939456 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:09:39.958355 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:09:39.960137 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:09:39.996450 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:09:39.997877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:09:39.998010 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:09:39.998259 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:09:39.998388 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:09:39.998747 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:09:39.998941 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:09:39.999012 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:09:39.999069 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:09:39.999714 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:09:39.999819 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:09:40.001828 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:09:40.004111 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:09:40.016631 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 29 11:09:40.018977 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:09:40.020701 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:09:40.023356 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:09:40.023941 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:09:40.024558 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:09:40.024623 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:09:40.036266 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:09:40.039336 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:09:40.046389 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:09:40.053351 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:09:40.058280 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:09:40.068455 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:09:40.068988 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:09:40.078307 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:09:40.087194 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:09:40.092945 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:09:40.099332 coreos-metadata[1439]: Jan 29 11:09:40.099 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:09:40.112328 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 29 11:09:40.116618 coreos-metadata[1439]: Jan 29 11:09:40.116 INFO Fetch successful Jan 29 11:09:40.126337 jq[1441]: false Jan 29 11:09:40.117264 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:09:40.119096 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:09:40.119619 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:09:40.128245 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:09:40.134202 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:09:40.137880 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:09:40.154922 extend-filesystems[1444]: Found loop4 Jan 29 11:09:40.154922 extend-filesystems[1444]: Found loop5 Jan 29 11:09:40.154922 extend-filesystems[1444]: Found loop6 Jan 29 11:09:40.154922 extend-filesystems[1444]: Found loop7 Jan 29 11:09:40.154922 extend-filesystems[1444]: Found vda Jan 29 11:09:40.154922 extend-filesystems[1444]: Found vda1 Jan 29 11:09:40.154922 extend-filesystems[1444]: Found vda2 Jan 29 11:09:40.154922 extend-filesystems[1444]: Found vda3 Jan 29 11:09:40.154922 extend-filesystems[1444]: Found usr Jan 29 11:09:40.154922 extend-filesystems[1444]: Found vda4 Jan 29 11:09:40.154922 extend-filesystems[1444]: Found vda6 Jan 29 11:09:40.154922 extend-filesystems[1444]: Found vda7 Jan 29 11:09:40.268127 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 29 11:09:40.146632 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 29 11:09:40.175679 dbus-daemon[1440]: [system] SELinux support is enabled Jan 29 11:09:40.270180 extend-filesystems[1444]: Found vda9 Jan 29 11:09:40.270180 extend-filesystems[1444]: Checking size of /dev/vda9 Jan 29 11:09:40.270180 extend-filesystems[1444]: Resized partition /dev/vda9 Jan 29 11:09:40.147224 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:09:40.301578 update_engine[1450]: I20250129 11:09:40.200335 1450 main.cc:92] Flatcar Update Engine starting Jan 29 11:09:40.301578 update_engine[1450]: I20250129 11:09:40.225332 1450 update_check_scheduler.cc:74] Next update check in 11m43s Jan 29 11:09:40.303900 extend-filesystems[1472]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:09:40.160123 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:09:40.317768 jq[1453]: true Jan 29 11:09:40.160320 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:09:40.180429 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:09:40.318779 tar[1458]: linux-amd64/helm Jan 29 11:09:40.336579 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1359) Jan 29 11:09:40.203604 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:09:40.204243 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:09:40.226007 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:09:40.226045 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 29 11:09:40.233956 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:09:40.234047 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 29 11:09:40.234070 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:09:40.244329 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:09:40.276381 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:09:40.299735 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:09:40.379284 jq[1478]: true Jan 29 11:09:40.384761 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:09:40.398744 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:09:40.414709 systemd-logind[1449]: New seat seat0. Jan 29 11:09:40.416099 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:09:40.416128 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:09:40.416391 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:09:40.515592 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 29 11:09:40.576629 extend-filesystems[1472]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:09:40.576629 extend-filesystems[1472]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 29 11:09:40.576629 extend-filesystems[1472]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. 
Jan 29 11:09:40.584894 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:09:40.578594 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:09:40.585049 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Jan 29 11:09:40.585049 extend-filesystems[1444]: Found vdb Jan 29 11:09:40.578841 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:09:40.584394 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:09:40.599444 systemd[1]: Starting sshkeys.service... Jan 29 11:09:40.604295 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:09:40.646372 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:09:40.651947 systemd-networkd[1361]: eth1: Gained IPv6LL Jan 29 11:09:40.657567 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:09:40.659542 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:09:40.664196 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:09:40.671393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:40.676414 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:09:40.766183 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 29 11:09:40.787829 coreos-metadata[1516]: Jan 29 11:09:40.787 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:09:40.803645 coreos-metadata[1516]: Jan 29 11:09:40.803 INFO Fetch successful Jan 29 11:09:40.831672 unknown[1516]: wrote ssh authorized keys file for user: core Jan 29 11:09:40.885136 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:09:40.886596 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:09:40.890417 systemd[1]: Finished sshkeys.service. Jan 29 11:09:40.917106 containerd[1474]: time="2025-01-29T11:09:40.914039223Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:09:40.947456 containerd[1474]: time="2025-01-29T11:09:40.946090249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:40.948139 containerd[1474]: time="2025-01-29T11:09:40.948041610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:09:40.948139 containerd[1474]: time="2025-01-29T11:09:40.948089602Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:09:40.948139 containerd[1474]: time="2025-01-29T11:09:40.948111111Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:09:40.948368 containerd[1474]: time="2025-01-29T11:09:40.948303442Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:09:40.948368 containerd[1474]: time="2025-01-29T11:09:40.948334362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:09:40.948460 containerd[1474]: time="2025-01-29T11:09:40.948400524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:09:40.948460 containerd[1474]: time="2025-01-29T11:09:40.948413893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:40.948639 containerd[1474]: time="2025-01-29T11:09:40.948616372Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:09:40.948639 containerd[1474]: time="2025-01-29T11:09:40.948637185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:40.948733 containerd[1474]: time="2025-01-29T11:09:40.948651088Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:09:40.948733 containerd[1474]: time="2025-01-29T11:09:40.948659988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:40.948831 containerd[1474]: time="2025-01-29T11:09:40.948760309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:40.949009 containerd[1474]: time="2025-01-29T11:09:40.948981667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:09:40.953281 containerd[1474]: time="2025-01-29T11:09:40.953227111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:09:40.954002 containerd[1474]: time="2025-01-29T11:09:40.953528832Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 11:09:40.954002 containerd[1474]: time="2025-01-29T11:09:40.953732123Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 11:09:40.954002 containerd[1474]: time="2025-01-29T11:09:40.953809544Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 11:09:40.962108 containerd[1474]: time="2025-01-29T11:09:40.961681623Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 11:09:40.962108 containerd[1474]: time="2025-01-29T11:09:40.961765236Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 11:09:40.962108 containerd[1474]: time="2025-01-29T11:09:40.961789342Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 11:09:40.962108 containerd[1474]: time="2025-01-29T11:09:40.961826104Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 11:09:40.962108 containerd[1474]: time="2025-01-29T11:09:40.961849308Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 11:09:40.962377 containerd[1474]: time="2025-01-29T11:09:40.962064425Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 11:09:40.962747 containerd[1474]: time="2025-01-29T11:09:40.962726524Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 11:09:40.962950 containerd[1474]: time="2025-01-29T11:09:40.962931433Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 11:09:40.963025 containerd[1474]: time="2025-01-29T11:09:40.963012050Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 11:09:40.963102 containerd[1474]: time="2025-01-29T11:09:40.963089183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 11:09:40.963167 containerd[1474]: time="2025-01-29T11:09:40.963155819Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 11:09:40.963226 containerd[1474]: time="2025-01-29T11:09:40.963215275Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963288087Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963313028Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963332977Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963350675Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963367194Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963384007Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963409739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963433857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963451548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963468305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963485602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963504827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963520373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.963904 containerd[1474]: time="2025-01-29T11:09:40.963556749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.964507 containerd[1474]: time="2025-01-29T11:09:40.963578472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.964507 containerd[1474]: time="2025-01-29T11:09:40.963599028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.964507 containerd[1474]: time="2025-01-29T11:09:40.963614739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.964507 containerd[1474]: time="2025-01-29T11:09:40.963631895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.964507 containerd[1474]: time="2025-01-29T11:09:40.963648125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.964507 containerd[1474]: time="2025-01-29T11:09:40.963667191Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:09:40.964507 containerd[1474]: time="2025-01-29T11:09:40.963693647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.964507 containerd[1474]: time="2025-01-29T11:09:40.963711049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.964507 containerd[1474]: time="2025-01-29T11:09:40.963725384Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:09:40.967108 containerd[1474]: time="2025-01-29T11:09:40.966209274Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:09:40.967108 containerd[1474]: time="2025-01-29T11:09:40.966254283Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:09:40.967108 containerd[1474]: time="2025-01-29T11:09:40.966271551Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:09:40.967108 containerd[1474]: time="2025-01-29T11:09:40.966290915Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:09:40.967108 containerd[1474]: time="2025-01-29T11:09:40.966306268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.967108 containerd[1474]: time="2025-01-29T11:09:40.966324247Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 11:09:40.967108 containerd[1474]: time="2025-01-29T11:09:40.966339647Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 11:09:40.967108 containerd[1474]: time="2025-01-29T11:09:40.966371241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 11:09:40.970291 containerd[1474]: time="2025-01-29T11:09:40.970065903Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 11:09:40.970955 containerd[1474]: time="2025-01-29T11:09:40.970591608Z" level=info msg="Connect containerd service"
Jan 29 11:09:40.970955 containerd[1474]: time="2025-01-29T11:09:40.970739722Z" level=info msg="using legacy CRI server"
Jan 29 11:09:40.970955 containerd[1474]: time="2025-01-29T11:09:40.970754781Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 11:09:40.971246 containerd[1474]: time="2025-01-29T11:09:40.971226557Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 11:09:40.973115 containerd[1474]: time="2025-01-29T11:09:40.973059516Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:09:40.974100 containerd[1474]: time="2025-01-29T11:09:40.973911115Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 11:09:40.974100 containerd[1474]: time="2025-01-29T11:09:40.973969993Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 11:09:40.974100 containerd[1474]: time="2025-01-29T11:09:40.973985427Z" level=info msg="Start subscribing containerd event"
Jan 29 11:09:40.974100 containerd[1474]: time="2025-01-29T11:09:40.974043847Z" level=info msg="Start recovering state"
Jan 29 11:09:40.974349 containerd[1474]: time="2025-01-29T11:09:40.974326975Z" level=info msg="Start event monitor"
Jan 29 11:09:40.975095 containerd[1474]: time="2025-01-29T11:09:40.974397009Z" level=info msg="Start snapshots syncer"
Jan 29 11:09:40.975095 containerd[1474]: time="2025-01-29T11:09:40.974412139Z" level=info msg="Start cni network conf syncer for default"
Jan 29 11:09:40.975095 containerd[1474]: time="2025-01-29T11:09:40.974428390Z" level=info msg="Start streaming server"
Jan 29 11:09:40.975095 containerd[1474]: time="2025-01-29T11:09:40.974511398Z" level=info msg="containerd successfully booted in 0.062180s"
Jan 29 11:09:40.974628 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 11:09:41.067329 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:09:41.128133 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:09:41.144803 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:09:41.168841 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 11:09:41.169695 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 11:09:41.181200 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 11:09:41.219752 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 11:09:41.235459 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 11:09:41.246479 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 11:09:41.249108 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 11:09:41.346939 tar[1458]: linux-amd64/LICENSE
Jan 29 11:09:41.347790 tar[1458]: linux-amd64/README.md
Jan 29 11:09:41.364988 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 11:09:41.547415 systemd-networkd[1361]: eth0: Gained IPv6LL
Jan 29 11:09:41.854744 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 11:09:41.862275 systemd[1]: Started sshd@0-143.110.233.113:22-139.178.89.65:33900.service - OpenSSH per-connection server daemon (139.178.89.65:33900).
Jan 29 11:09:41.946282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:09:41.955406 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 11:09:41.955564 (kubelet)[1565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:09:41.959137 systemd[1]: Startup finished in 1.074s (kernel) + 5.604s (initrd) + 5.523s (userspace) = 12.203s.
Jan 29 11:09:41.966062 sshd[1558]: Accepted publickey for core from 139.178.89.65 port 33900 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:09:41.969587 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:09:41.984356 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 11:09:41.991637 agetty[1552]: failed to open credentials directory
Jan 29 11:09:41.993384 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 11:09:41.994874 agetty[1553]: failed to open credentials directory
Jan 29 11:09:42.003279 systemd-logind[1449]: New session 1 of user core.
Jan 29 11:09:42.019138 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 11:09:42.026180 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 11:09:42.040997 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 11:09:42.156404 systemd[1572]: Queued start job for default target default.target.
Jan 29 11:09:42.162356 systemd[1572]: Created slice app.slice - User Application Slice.
Jan 29 11:09:42.162400 systemd[1572]: Reached target paths.target - Paths.
Jan 29 11:09:42.162421 systemd[1572]: Reached target timers.target - Timers.
Jan 29 11:09:42.165249 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 11:09:42.182388 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 11:09:42.182717 systemd[1572]: Reached target sockets.target - Sockets.
Jan 29 11:09:42.182749 systemd[1572]: Reached target basic.target - Basic System.
Jan 29 11:09:42.182839 systemd[1572]: Reached target default.target - Main User Target.
Jan 29 11:09:42.182898 systemd[1572]: Startup finished in 133ms.
Jan 29 11:09:42.183309 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 11:09:42.190408 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 11:09:42.268172 systemd[1]: Started sshd@1-143.110.233.113:22-139.178.89.65:33904.service - OpenSSH per-connection server daemon (139.178.89.65:33904).
Jan 29 11:09:42.324313 sshd[1587]: Accepted publickey for core from 139.178.89.65 port 33904 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:09:42.326319 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:09:42.333282 systemd-logind[1449]: New session 2 of user core.
Jan 29 11:09:42.341337 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 11:09:42.406148 sshd[1589]: Connection closed by 139.178.89.65 port 33904
Jan 29 11:09:42.406978 sshd-session[1587]: pam_unix(sshd:session): session closed for user core
Jan 29 11:09:42.419268 systemd[1]: sshd@1-143.110.233.113:22-139.178.89.65:33904.service: Deactivated successfully.
Jan 29 11:09:42.423345 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 11:09:42.428901 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit.
Jan 29 11:09:42.435098 systemd[1]: Started sshd@2-143.110.233.113:22-139.178.89.65:33916.service - OpenSSH per-connection server daemon (139.178.89.65:33916).
Jan 29 11:09:42.438379 systemd-logind[1449]: Removed session 2.
Jan 29 11:09:42.513497 sshd[1594]: Accepted publickey for core from 139.178.89.65 port 33916 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:09:42.516016 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:09:42.525969 systemd-logind[1449]: New session 3 of user core.
Jan 29 11:09:42.534509 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 11:09:42.594563 sshd[1598]: Connection closed by 139.178.89.65 port 33916
Jan 29 11:09:42.597012 sshd-session[1594]: pam_unix(sshd:session): session closed for user core
Jan 29 11:09:42.607681 systemd[1]: sshd@2-143.110.233.113:22-139.178.89.65:33916.service: Deactivated successfully.
Jan 29 11:09:42.611272 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 11:09:42.614151 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit.
Jan 29 11:09:42.619251 systemd[1]: Started sshd@3-143.110.233.113:22-139.178.89.65:33918.service - OpenSSH per-connection server daemon (139.178.89.65:33918).
Jan 29 11:09:42.623323 systemd-logind[1449]: Removed session 3.
Jan 29 11:09:42.676996 sshd[1603]: Accepted publickey for core from 139.178.89.65 port 33918 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:09:42.679785 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:09:42.687264 systemd-logind[1449]: New session 4 of user core.
Jan 29 11:09:42.692280 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 11:09:42.752694 kubelet[1565]: E0129 11:09:42.752552 1565 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:09:42.756101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:09:42.756263 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:09:42.756554 systemd[1]: kubelet.service: Consumed 1.151s CPU time.
Jan 29 11:09:42.760740 sshd[1605]: Connection closed by 139.178.89.65 port 33918
Jan 29 11:09:42.761448 sshd-session[1603]: pam_unix(sshd:session): session closed for user core
Jan 29 11:09:42.771201 systemd[1]: sshd@3-143.110.233.113:22-139.178.89.65:33918.service: Deactivated successfully.
Jan 29 11:09:42.773323 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 11:09:42.774926 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit.
Jan 29 11:09:42.780483 systemd[1]: Started sshd@4-143.110.233.113:22-139.178.89.65:33926.service - OpenSSH per-connection server daemon (139.178.89.65:33926).
Jan 29 11:09:42.781992 systemd-logind[1449]: Removed session 4.
Jan 29 11:09:42.834950 sshd[1612]: Accepted publickey for core from 139.178.89.65 port 33926 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:09:42.836866 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:09:42.844457 systemd-logind[1449]: New session 5 of user core.
Jan 29 11:09:42.854440 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 11:09:42.925461 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 11:09:42.925903 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:09:43.462056 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 11:09:43.477220 (dockerd)[1633]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 11:09:43.958220 dockerd[1633]: time="2025-01-29T11:09:43.957702542Z" level=info msg="Starting up"
Jan 29 11:09:44.406701 systemd[1]: var-lib-docker-metacopy\x2dcheck397667744-merged.mount: Deactivated successfully.
Jan 29 11:09:44.430401 dockerd[1633]: time="2025-01-29T11:09:44.430334044Z" level=info msg="Loading containers: start."
Jan 29 11:09:44.679144 kernel: Initializing XFRM netlink socket
Jan 29 11:09:44.787969 systemd-networkd[1361]: docker0: Link UP
Jan 29 11:09:44.846734 dockerd[1633]: time="2025-01-29T11:09:44.846627758Z" level=info msg="Loading containers: done."
Jan 29 11:09:44.876182 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3834256927-merged.mount: Deactivated successfully.
Jan 29 11:09:44.880926 dockerd[1633]: time="2025-01-29T11:09:44.880853439Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 11:09:44.881176 dockerd[1633]: time="2025-01-29T11:09:44.881063737Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 29 11:09:44.881411 dockerd[1633]: time="2025-01-29T11:09:44.881380914Z" level=info msg="Daemon has completed initialization"
Jan 29 11:09:44.955269 dockerd[1633]: time="2025-01-29T11:09:44.955090727Z" level=info msg="API listen on /run/docker.sock"
Jan 29 11:09:44.955697 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 11:09:46.828700 containerd[1474]: time="2025-01-29T11:09:46.828317443Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 29 11:09:47.448626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2846572432.mount: Deactivated successfully.
Jan 29 11:09:49.113691 containerd[1474]: time="2025-01-29T11:09:49.113615953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:49.115502 containerd[1474]: time="2025-01-29T11:09:49.115425572Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012"
Jan 29 11:09:49.116507 containerd[1474]: time="2025-01-29T11:09:49.116408416Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:49.122772 containerd[1474]: time="2025-01-29T11:09:49.122690153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:49.125109 containerd[1474]: time="2025-01-29T11:09:49.124819230Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.296447618s"
Jan 29 11:09:49.125109 containerd[1474]: time="2025-01-29T11:09:49.124879195Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\""
Jan 29 11:09:49.171994 containerd[1474]: time="2025-01-29T11:09:49.171899244Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 29 11:09:50.911140 containerd[1474]: time="2025-01-29T11:09:50.910726844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:50.912518 containerd[1474]: time="2025-01-29T11:09:50.912456484Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745"
Jan 29 11:09:50.913992 containerd[1474]: time="2025-01-29T11:09:50.913935792Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:50.920138 containerd[1474]: time="2025-01-29T11:09:50.920056231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:50.921393 containerd[1474]: time="2025-01-29T11:09:50.921343719Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.749390128s"
Jan 29 11:09:50.921498 containerd[1474]: time="2025-01-29T11:09:50.921396110Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\""
Jan 29 11:09:50.951004 containerd[1474]: time="2025-01-29T11:09:50.950967041Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 29 11:09:52.134126 containerd[1474]: time="2025-01-29T11:09:52.133761015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:52.135538 containerd[1474]: time="2025-01-29T11:09:52.135471324Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064"
Jan 29 11:09:52.136994 containerd[1474]: time="2025-01-29T11:09:52.136912208Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:52.141203 containerd[1474]: time="2025-01-29T11:09:52.141111375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:52.142642 containerd[1474]: time="2025-01-29T11:09:52.142300026Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.191062306s"
Jan 29 11:09:52.142642 containerd[1474]: time="2025-01-29T11:09:52.142358672Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\""
Jan 29 11:09:52.186448 containerd[1474]: time="2025-01-29T11:09:52.186390313Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 29 11:09:52.766222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:09:52.772853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:09:52.990218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:09:53.001702 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:09:53.105108 kubelet[1919]: E0129 11:09:53.104753 1919 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:09:53.111448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:09:53.112791 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:09:53.435127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762115628.mount: Deactivated successfully.
Jan 29 11:09:54.023609 containerd[1474]: time="2025-01-29T11:09:54.023547462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:54.025165 containerd[1474]: time="2025-01-29T11:09:54.024752668Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337"
Jan 29 11:09:54.026977 containerd[1474]: time="2025-01-29T11:09:54.026907330Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:54.030557 containerd[1474]: time="2025-01-29T11:09:54.030428768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:54.031745 containerd[1474]: time="2025-01-29T11:09:54.031703856Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.845257204s"
Jan 29 11:09:54.031745 containerd[1474]: time="2025-01-29T11:09:54.031744668Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\""
Jan 29 11:09:54.067954 containerd[1474]: time="2025-01-29T11:09:54.067811136Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 11:09:54.070055 systemd-resolved[1320]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Jan 29 11:09:54.623427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464680356.mount: Deactivated successfully.
Jan 29 11:09:55.544899 containerd[1474]: time="2025-01-29T11:09:55.544833815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:55.546316 containerd[1474]: time="2025-01-29T11:09:55.546267501Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 29 11:09:55.547569 containerd[1474]: time="2025-01-29T11:09:55.547503268Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:55.551043 containerd[1474]: time="2025-01-29T11:09:55.550989192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:55.552323 containerd[1474]: time="2025-01-29T11:09:55.552128171Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.484265346s"
Jan 29 11:09:55.552323 containerd[1474]: time="2025-01-29T11:09:55.552167893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 29 11:09:55.586978 containerd[1474]: time="2025-01-29T11:09:55.586921748Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 29 11:09:56.127247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1868360076.mount: Deactivated successfully.
Jan 29 11:09:56.135881 containerd[1474]: time="2025-01-29T11:09:56.135824850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:56.137473 containerd[1474]: time="2025-01-29T11:09:56.137407580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 29 11:09:56.138658 containerd[1474]: time="2025-01-29T11:09:56.138596258Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:56.142656 containerd[1474]: time="2025-01-29T11:09:56.142588526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:56.144044 containerd[1474]: time="2025-01-29T11:09:56.143910682Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 556.941284ms"
Jan 29 11:09:56.144044 containerd[1474]: time="2025-01-29T11:09:56.143944991Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 29 11:09:56.182016 containerd[1474]: time="2025-01-29T11:09:56.181969858Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 11:09:56.724830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749684696.mount: Deactivated successfully.
Jan 29 11:09:57.164707 systemd-resolved[1320]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Jan 29 11:09:58.786033 containerd[1474]: time="2025-01-29T11:09:58.785947533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:58.787414 containerd[1474]: time="2025-01-29T11:09:58.787048617Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Jan 29 11:09:58.790120 containerd[1474]: time="2025-01-29T11:09:58.790026862Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:58.796865 containerd[1474]: time="2025-01-29T11:09:58.796781746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:09:58.798150 containerd[1474]: time="2025-01-29T11:09:58.797866448Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.615847229s"
Jan 29 11:09:58.798150 containerd[1474]: time="2025-01-29T11:09:58.797905180Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jan 29 11:10:02.073061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:10:02.119935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:10:02.185186 systemd[1]: Reloading requested from client PID 2100 ('systemctl') (unit session-5.scope)...
Jan 29 11:10:02.185215 systemd[1]: Reloading...
Jan 29 11:10:02.812541 zram_generator::config[2140]: No configuration found.
Jan 29 11:10:03.060845 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:10:03.242013 systemd[1]: Reloading finished in 1055 ms.
Jan 29 11:10:03.302667 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 11:10:03.302793 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 11:10:03.303474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:10:03.311717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:10:03.466971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:10:03.487824 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:10:03.853164 kubelet[2192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:10:03.853164 kubelet[2192]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:10:03.853164 kubelet[2192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:10:03.856666 kubelet[2192]: I0129 11:10:03.854921 2192 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:10:04.669169 kubelet[2192]: I0129 11:10:04.669000 2192 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:10:04.669169 kubelet[2192]: I0129 11:10:04.669040 2192 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:10:04.669414 kubelet[2192]: I0129 11:10:04.669394 2192 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:10:04.746046 kubelet[2192]: I0129 11:10:04.745981 2192 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:10:04.764570 kubelet[2192]: E0129 11:10:04.764372 2192 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://143.110.233.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:04.801945 kubelet[2192]: I0129 11:10:04.801877 2192 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:10:04.811221 kubelet[2192]: I0129 11:10:04.811016 2192 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:10:04.811990 kubelet[2192]: I0129 11:10:04.811195 2192 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-f-d3e806da58","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedM
emory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:10:04.817916 kubelet[2192]: I0129 11:10:04.817801 2192 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:10:04.817916 kubelet[2192]: I0129 11:10:04.817899 2192 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:10:04.818216 kubelet[2192]: I0129 11:10:04.818199 2192 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:10:04.820580 kubelet[2192]: I0129 11:10:04.820513 2192 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:10:04.820580 kubelet[2192]: I0129 11:10:04.820568 2192 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:10:04.823218 kubelet[2192]: I0129 11:10:04.820629 2192 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:10:04.823218 kubelet[2192]: I0129 11:10:04.820663 2192 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:10:04.833373 kubelet[2192]: W0129 11:10:04.833282 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.233.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:04.833563 kubelet[2192]: E0129 11:10:04.833413 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.110.233.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:04.834108 kubelet[2192]: W0129 11:10:04.834023 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.233.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-f-d3e806da58&limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection 
refused Jan 29 11:10:04.834217 kubelet[2192]: E0129 11:10:04.834126 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.110.233.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-f-d3e806da58&limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:04.834293 kubelet[2192]: I0129 11:10:04.834266 2192 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:10:04.836576 kubelet[2192]: I0129 11:10:04.836507 2192 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:10:04.836719 kubelet[2192]: W0129 11:10:04.836628 2192 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:10:04.838239 kubelet[2192]: I0129 11:10:04.837974 2192 server.go:1264] "Started kubelet" Jan 29 11:10:04.840861 kubelet[2192]: I0129 11:10:04.840784 2192 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:10:04.842509 kubelet[2192]: I0129 11:10:04.842465 2192 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:10:04.846593 kubelet[2192]: I0129 11:10:04.846523 2192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:10:04.860945 kubelet[2192]: I0129 11:10:04.860716 2192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:10:04.901840 kubelet[2192]: I0129 11:10:04.861122 2192 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:10:04.901840 kubelet[2192]: E0129 11:10:04.861830 2192 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.110.233.113:6443/api/v1/namespaces/default/events\": dial tcp 143.110.233.113:6443: connect: connection 
refused" event="&Event{ObjectMeta:{ci-4186.1.0-f-d3e806da58.181f254f90107460 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-f-d3e806da58,UID:ci-4186.1.0-f-d3e806da58,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-f-d3e806da58,},FirstTimestamp:2025-01-29 11:10:04.837942368 +0000 UTC m=+1.341494463,LastTimestamp:2025-01-29 11:10:04.837942368 +0000 UTC m=+1.341494463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-f-d3e806da58,}" Jan 29 11:10:04.901840 kubelet[2192]: I0129 11:10:04.862064 2192 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:10:04.901840 kubelet[2192]: I0129 11:10:04.868095 2192 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:10:04.901840 kubelet[2192]: I0129 11:10:04.868272 2192 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:10:04.901840 kubelet[2192]: E0129 11:10:04.869633 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.233.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-f-d3e806da58?timeout=10s\": dial tcp 143.110.233.113:6443: connect: connection refused" interval="200ms" Jan 29 11:10:04.901840 kubelet[2192]: W0129 11:10:04.869779 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.233.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:04.902389 kubelet[2192]: E0129 11:10:04.869873 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://143.110.233.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:04.932634 kubelet[2192]: I0129 11:10:04.932174 2192 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:10:04.932634 kubelet[2192]: I0129 11:10:04.932371 2192 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:10:04.936770 kubelet[2192]: I0129 11:10:04.934253 2192 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:10:04.950791 kubelet[2192]: E0129 11:10:04.950746 2192 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:10:04.954573 kubelet[2192]: I0129 11:10:04.954504 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:10:04.958004 kubelet[2192]: I0129 11:10:04.957943 2192 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:10:04.958328 kubelet[2192]: I0129 11:10:04.958268 2192 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:10:04.958610 kubelet[2192]: I0129 11:10:04.958481 2192 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:10:04.958765 kubelet[2192]: E0129 11:10:04.958587 2192 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:10:04.972737 kubelet[2192]: W0129 11:10:04.972685 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.233.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:04.974157 kubelet[2192]: E0129 11:10:04.973206 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.110.233.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:04.974157 kubelet[2192]: I0129 11:10:04.972885 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-f-d3e806da58" Jan 29 11:10:04.974157 kubelet[2192]: E0129 11:10:04.973674 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.233.113:6443/api/v1/nodes\": dial tcp 143.110.233.113:6443: connect: connection refused" node="ci-4186.1.0-f-d3e806da58" Jan 29 11:10:04.974157 kubelet[2192]: I0129 11:10:04.973796 2192 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:10:04.974157 kubelet[2192]: I0129 11:10:04.973807 2192 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:10:04.974157 kubelet[2192]: I0129 11:10:04.973831 2192 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:10:05.002884 
kubelet[2192]: I0129 11:10:05.002841 2192 policy_none.go:49] "None policy: Start" Jan 29 11:10:05.004803 kubelet[2192]: I0129 11:10:05.004705 2192 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:10:05.004803 kubelet[2192]: I0129 11:10:05.004758 2192 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:10:05.059148 kubelet[2192]: E0129 11:10:05.059068 2192 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:10:05.071145 kubelet[2192]: E0129 11:10:05.071058 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.233.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-f-d3e806da58?timeout=10s\": dial tcp 143.110.233.113:6443: connect: connection refused" interval="400ms" Jan 29 11:10:05.175532 kubelet[2192]: I0129 11:10:05.175465 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.176227 kubelet[2192]: E0129 11:10:05.176175 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.233.113:6443/api/v1/nodes\": dial tcp 143.110.233.113:6443: connect: connection refused" node="ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.200439 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:10:05.220126 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:10:05.233590 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 29 11:10:05.236466 kubelet[2192]: I0129 11:10:05.236427 2192 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:10:05.237154 kubelet[2192]: I0129 11:10:05.236726 2192 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:10:05.237154 kubelet[2192]: I0129 11:10:05.236911 2192 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:10:05.240561 kubelet[2192]: E0129 11:10:05.240518 2192 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-f-d3e806da58\" not found" Jan 29 11:10:05.260619 kubelet[2192]: I0129 11:10:05.259849 2192 topology_manager.go:215] "Topology Admit Handler" podUID="f3066f9d26a59e8bc234ee5946478b62" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.261502 kubelet[2192]: I0129 11:10:05.261466 2192 topology_manager.go:215] "Topology Admit Handler" podUID="3eb51796125993700dd8400f76a07a6e" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.262863 kubelet[2192]: I0129 11:10:05.262825 2192 topology_manager.go:215] "Topology Admit Handler" podUID="54951527e3bd34ba7003868f3c87c317" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.274484 kubelet[2192]: I0129 11:10:05.274432 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3eb51796125993700dd8400f76a07a6e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-f-d3e806da58\" (UID: \"3eb51796125993700dd8400f76a07a6e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.274484 kubelet[2192]: I0129 11:10:05.274482 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54951527e3bd34ba7003868f3c87c317-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-f-d3e806da58\" (UID: \"54951527e3bd34ba7003868f3c87c317\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.274700 kubelet[2192]: I0129 11:10:05.274521 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54951527e3bd34ba7003868f3c87c317-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-f-d3e806da58\" (UID: \"54951527e3bd34ba7003868f3c87c317\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.274700 kubelet[2192]: I0129 11:10:05.274548 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54951527e3bd34ba7003868f3c87c317-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-f-d3e806da58\" (UID: \"54951527e3bd34ba7003868f3c87c317\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.274700 kubelet[2192]: I0129 11:10:05.274579 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3066f9d26a59e8bc234ee5946478b62-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-f-d3e806da58\" (UID: \"f3066f9d26a59e8bc234ee5946478b62\") " pod="kube-system/kube-scheduler-ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.274700 kubelet[2192]: I0129 11:10:05.274610 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3eb51796125993700dd8400f76a07a6e-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-f-d3e806da58\" (UID: \"3eb51796125993700dd8400f76a07a6e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-f-d3e806da58" Jan 
29 11:10:05.274700 kubelet[2192]: I0129 11:10:05.274659 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3eb51796125993700dd8400f76a07a6e-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-f-d3e806da58\" (UID: \"3eb51796125993700dd8400f76a07a6e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.274932 kubelet[2192]: I0129 11:10:05.274686 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54951527e3bd34ba7003868f3c87c317-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-f-d3e806da58\" (UID: \"54951527e3bd34ba7003868f3c87c317\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.274932 kubelet[2192]: I0129 11:10:05.274714 2192 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54951527e3bd34ba7003868f3c87c317-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-f-d3e806da58\" (UID: \"54951527e3bd34ba7003868f3c87c317\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.275140 systemd[1]: Created slice kubepods-burstable-podf3066f9d26a59e8bc234ee5946478b62.slice - libcontainer container kubepods-burstable-podf3066f9d26a59e8bc234ee5946478b62.slice. Jan 29 11:10:05.288486 systemd[1]: Created slice kubepods-burstable-pod3eb51796125993700dd8400f76a07a6e.slice - libcontainer container kubepods-burstable-pod3eb51796125993700dd8400f76a07a6e.slice. Jan 29 11:10:05.297748 systemd[1]: Created slice kubepods-burstable-pod54951527e3bd34ba7003868f3c87c317.slice - libcontainer container kubepods-burstable-pod54951527e3bd34ba7003868f3c87c317.slice. 
Jan 29 11:10:05.472807 kubelet[2192]: E0129 11:10:05.472617 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.233.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-f-d3e806da58?timeout=10s\": dial tcp 143.110.233.113:6443: connect: connection refused" interval="800ms" Jan 29 11:10:05.578607 kubelet[2192]: I0129 11:10:05.578512 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.579070 kubelet[2192]: E0129 11:10:05.579029 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.233.113:6443/api/v1/nodes\": dial tcp 143.110.233.113:6443: connect: connection refused" node="ci-4186.1.0-f-d3e806da58" Jan 29 11:10:05.583735 kubelet[2192]: E0129 11:10:05.583668 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:05.584645 containerd[1474]: time="2025-01-29T11:10:05.584594825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-f-d3e806da58,Uid:f3066f9d26a59e8bc234ee5946478b62,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:05.593325 kubelet[2192]: E0129 11:10:05.593260 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:05.593979 containerd[1474]: time="2025-01-29T11:10:05.593917294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-f-d3e806da58,Uid:3eb51796125993700dd8400f76a07a6e,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:05.602122 kubelet[2192]: E0129 11:10:05.601994 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:05.602954 containerd[1474]: time="2025-01-29T11:10:05.602875635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-f-d3e806da58,Uid:54951527e3bd34ba7003868f3c87c317,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:05.610992 systemd-resolved[1320]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 29 11:10:05.760252 kubelet[2192]: W0129 11:10:05.759445 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.110.233.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:05.760252 kubelet[2192]: E0129 11:10:05.759547 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.110.233.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:05.901859 kubelet[2192]: W0129 11:10:05.901770 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.233.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:05.901859 kubelet[2192]: E0129 11:10:05.901862 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.110.233.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:05.949570 kubelet[2192]: W0129 11:10:05.949469 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.110.233.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-f-d3e806da58&limit=500&resourceVersion=0": dial tcp 
143.110.233.113:6443: connect: connection refused Jan 29 11:10:05.949570 kubelet[2192]: E0129 11:10:05.949575 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.110.233.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-f-d3e806da58&limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:06.011225 kubelet[2192]: W0129 11:10:06.010943 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.233.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:06.011225 kubelet[2192]: E0129 11:10:06.011025 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.110.233.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused Jan 29 11:10:06.275726 kubelet[2192]: E0129 11:10:06.275526 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.233.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-f-d3e806da58?timeout=10s\": dial tcp 143.110.233.113:6443: connect: connection refused" interval="1.6s" Jan 29 11:10:06.380792 kubelet[2192]: I0129 11:10:06.380748 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-f-d3e806da58" Jan 29 11:10:06.381626 kubelet[2192]: E0129 11:10:06.381588 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.233.113:6443/api/v1/nodes\": dial tcp 143.110.233.113:6443: connect: connection refused" node="ci-4186.1.0-f-d3e806da58" Jan 29 11:10:06.901458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063226133.mount: Deactivated successfully. 
Jan 29 11:10:06.935185 kubelet[2192]: E0129 11:10:06.935136 2192 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.110.233.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.110.233.113:6443: connect: connection refused
Jan 29 11:10:06.951194 containerd[1474]: time="2025-01-29T11:10:06.950958176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:10:06.981763 containerd[1474]: time="2025-01-29T11:10:06.981666781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 29 11:10:06.995285 containerd[1474]: time="2025-01-29T11:10:06.994882549Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:10:07.007986 containerd[1474]: time="2025-01-29T11:10:07.007892024Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:10:07.014051 containerd[1474]: time="2025-01-29T11:10:07.013967035Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:10:07.026136 containerd[1474]: time="2025-01-29T11:10:07.026034470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:10:07.035699 containerd[1474]: time="2025-01-29T11:10:07.035557386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:10:07.036834 containerd[1474]: time="2025-01-29T11:10:07.036454657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.444965367s"
Jan 29 11:10:07.040130 containerd[1474]: time="2025-01-29T11:10:07.039853752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:10:07.050591 containerd[1474]: time="2025-01-29T11:10:07.048306231Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.454263267s"
Jan 29 11:10:07.052618 containerd[1474]: time="2025-01-29T11:10:07.052468255Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.449448586s"
Jan 29 11:10:07.300578 containerd[1474]: time="2025-01-29T11:10:07.299738055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:10:07.300578 containerd[1474]: time="2025-01-29T11:10:07.300005501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:10:07.300578 containerd[1474]: time="2025-01-29T11:10:07.300032077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:07.303105 containerd[1474]: time="2025-01-29T11:10:07.302314168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:07.306177 containerd[1474]: time="2025-01-29T11:10:07.305102948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:10:07.306177 containerd[1474]: time="2025-01-29T11:10:07.305195725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:10:07.306177 containerd[1474]: time="2025-01-29T11:10:07.305225114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:07.306177 containerd[1474]: time="2025-01-29T11:10:07.305342117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:07.314019 containerd[1474]: time="2025-01-29T11:10:07.313839086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:10:07.314019 containerd[1474]: time="2025-01-29T11:10:07.313929531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:10:07.314019 containerd[1474]: time="2025-01-29T11:10:07.313953093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:07.314445 containerd[1474]: time="2025-01-29T11:10:07.314157879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:07.355518 systemd[1]: Started cri-containerd-cc82d5ada4f6c732765992764de33e24d81ca99c40be287354199bdb49bfe1a5.scope - libcontainer container cc82d5ada4f6c732765992764de33e24d81ca99c40be287354199bdb49bfe1a5.
Jan 29 11:10:07.371542 systemd[1]: Started cri-containerd-6e59d5a3c5ede5174bb29f1e064cbc516085ff31f9f482cbe2c2193b9d132839.scope - libcontainer container 6e59d5a3c5ede5174bb29f1e064cbc516085ff31f9f482cbe2c2193b9d132839.
Jan 29 11:10:07.379754 systemd[1]: Started cri-containerd-83df3f12abf533d1e9e3e3f6d27dec29818d3eb2786a468707a0afc07f35c068.scope - libcontainer container 83df3f12abf533d1e9e3e3f6d27dec29818d3eb2786a468707a0afc07f35c068.
Jan 29 11:10:07.482577 containerd[1474]: time="2025-01-29T11:10:07.482166777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-f-d3e806da58,Uid:54951527e3bd34ba7003868f3c87c317,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc82d5ada4f6c732765992764de33e24d81ca99c40be287354199bdb49bfe1a5\""
Jan 29 11:10:07.485584 containerd[1474]: time="2025-01-29T11:10:07.482468112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-f-d3e806da58,Uid:f3066f9d26a59e8bc234ee5946478b62,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e59d5a3c5ede5174bb29f1e064cbc516085ff31f9f482cbe2c2193b9d132839\""
Jan 29 11:10:07.490042 kubelet[2192]: E0129 11:10:07.489905 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:07.490706 kubelet[2192]: E0129 11:10:07.490410 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:07.499526 containerd[1474]: time="2025-01-29T11:10:07.499351892Z" level=info msg="CreateContainer within sandbox \"cc82d5ada4f6c732765992764de33e24d81ca99c40be287354199bdb49bfe1a5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 11:10:07.499709 containerd[1474]: time="2025-01-29T11:10:07.499603827Z" level=info msg="CreateContainer within sandbox \"6e59d5a3c5ede5174bb29f1e064cbc516085ff31f9f482cbe2c2193b9d132839\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 11:10:07.512332 containerd[1474]: time="2025-01-29T11:10:07.512211265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-f-d3e806da58,Uid:3eb51796125993700dd8400f76a07a6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"83df3f12abf533d1e9e3e3f6d27dec29818d3eb2786a468707a0afc07f35c068\""
Jan 29 11:10:07.514646 kubelet[2192]: E0129 11:10:07.514607 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:07.518764 containerd[1474]: time="2025-01-29T11:10:07.518714678Z" level=info msg="CreateContainer within sandbox \"83df3f12abf533d1e9e3e3f6d27dec29818d3eb2786a468707a0afc07f35c068\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 11:10:07.552468 containerd[1474]: time="2025-01-29T11:10:07.551798909Z" level=info msg="CreateContainer within sandbox \"cc82d5ada4f6c732765992764de33e24d81ca99c40be287354199bdb49bfe1a5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9e2a9c8e406235a525211ed5558c42fd983e3ae1cb806a8331004a22692d7338\""
Jan 29 11:10:07.553921 containerd[1474]: time="2025-01-29T11:10:07.553765764Z" level=info msg="StartContainer for \"9e2a9c8e406235a525211ed5558c42fd983e3ae1cb806a8331004a22692d7338\""
Jan 29 11:10:07.559426 containerd[1474]: time="2025-01-29T11:10:07.559364868Z" level=info msg="CreateContainer within sandbox \"6e59d5a3c5ede5174bb29f1e064cbc516085ff31f9f482cbe2c2193b9d132839\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"35b9d78fccf6dc77b6d5bd25fd32545aec1b88a8490ab66ded2ba2006a41a764\""
Jan 29 11:10:07.561752 containerd[1474]: time="2025-01-29T11:10:07.561688733Z" level=info msg="StartContainer for \"35b9d78fccf6dc77b6d5bd25fd32545aec1b88a8490ab66ded2ba2006a41a764\""
Jan 29 11:10:07.596274 containerd[1474]: time="2025-01-29T11:10:07.596121846Z" level=info msg="CreateContainer within sandbox \"83df3f12abf533d1e9e3e3f6d27dec29818d3eb2786a468707a0afc07f35c068\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"56957419a77072798bb13e125cd7ccd8686b2e11a49472e80705fce3529588b3\""
Jan 29 11:10:07.597858 containerd[1474]: time="2025-01-29T11:10:07.597651078Z" level=info msg="StartContainer for \"56957419a77072798bb13e125cd7ccd8686b2e11a49472e80705fce3529588b3\""
Jan 29 11:10:07.630440 systemd[1]: Started cri-containerd-35b9d78fccf6dc77b6d5bd25fd32545aec1b88a8490ab66ded2ba2006a41a764.scope - libcontainer container 35b9d78fccf6dc77b6d5bd25fd32545aec1b88a8490ab66ded2ba2006a41a764.
Jan 29 11:10:07.644984 systemd[1]: Started cri-containerd-9e2a9c8e406235a525211ed5558c42fd983e3ae1cb806a8331004a22692d7338.scope - libcontainer container 9e2a9c8e406235a525211ed5558c42fd983e3ae1cb806a8331004a22692d7338.
Jan 29 11:10:07.684515 systemd[1]: Started cri-containerd-56957419a77072798bb13e125cd7ccd8686b2e11a49472e80705fce3529588b3.scope - libcontainer container 56957419a77072798bb13e125cd7ccd8686b2e11a49472e80705fce3529588b3.
Jan 29 11:10:07.789327 containerd[1474]: time="2025-01-29T11:10:07.788838422Z" level=info msg="StartContainer for \"56957419a77072798bb13e125cd7ccd8686b2e11a49472e80705fce3529588b3\" returns successfully"
Jan 29 11:10:07.811405 containerd[1474]: time="2025-01-29T11:10:07.810958950Z" level=info msg="StartContainer for \"9e2a9c8e406235a525211ed5558c42fd983e3ae1cb806a8331004a22692d7338\" returns successfully"
Jan 29 11:10:07.811405 containerd[1474]: time="2025-01-29T11:10:07.811150465Z" level=info msg="StartContainer for \"35b9d78fccf6dc77b6d5bd25fd32545aec1b88a8490ab66ded2ba2006a41a764\" returns successfully"
Jan 29 11:10:07.877817 kubelet[2192]: E0129 11:10:07.876651 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.233.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-f-d3e806da58?timeout=10s\": dial tcp 143.110.233.113:6443: connect: connection refused" interval="3.2s"
Jan 29 11:10:07.968301 kubelet[2192]: W0129 11:10:07.967575 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.110.233.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused
Jan 29 11:10:07.968301 kubelet[2192]: E0129 11:10:07.967635 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.110.233.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused
Jan 29 11:10:07.968301 kubelet[2192]: W0129 11:10:07.967577 2192 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.110.233.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused
Jan 29 11:10:07.968301 kubelet[2192]: E0129 11:10:07.967665 2192 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.110.233.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.110.233.113:6443: connect: connection refused
Jan 29 11:10:07.979060 kubelet[2192]: E0129 11:10:07.979006 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:07.983391 kubelet[2192]: E0129 11:10:07.983063 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:07.987458 kubelet[2192]: E0129 11:10:07.987416 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:07.987969 kubelet[2192]: I0129 11:10:07.987942 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:07.991570 kubelet[2192]: E0129 11:10:07.991506 2192 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.110.233.113:6443/api/v1/nodes\": dial tcp 143.110.233.113:6443: connect: connection refused" node="ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:08.991327 kubelet[2192]: E0129 11:10:08.991260 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:08.992206 kubelet[2192]: E0129 11:10:08.991418 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:10.595430 kubelet[2192]: E0129 11:10:10.595232 2192 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.1.0-f-d3e806da58.181f254f90107460 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-f-d3e806da58,UID:ci-4186.1.0-f-d3e806da58,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-f-d3e806da58,},FirstTimestamp:2025-01-29 11:10:04.837942368 +0000 UTC m=+1.341494463,LastTimestamp:2025-01-29 11:10:04.837942368 +0000 UTC m=+1.341494463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-f-d3e806da58,}"
Jan 29 11:10:10.652629 kubelet[2192]: E0129 11:10:10.652479 2192 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186.1.0-f-d3e806da58.181f254f96c94412 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-f-d3e806da58,UID:ci-4186.1.0-f-d3e806da58,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-f-d3e806da58,},FirstTimestamp:2025-01-29 11:10:04.950717458 +0000 UTC m=+1.454269556,LastTimestamp:2025-01-29 11:10:04.950717458 +0000 UTC m=+1.454269556,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-f-d3e806da58,}"
Jan 29 11:10:10.829456 kubelet[2192]: I0129 11:10:10.829013 2192 apiserver.go:52] "Watching apiserver"
Jan 29 11:10:10.869541 kubelet[2192]: I0129 11:10:10.869267 2192 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 11:10:10.884469 kubelet[2192]: E0129 11:10:10.884385 2192 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4186.1.0-f-d3e806da58" not found
Jan 29 11:10:11.083367 kubelet[2192]: E0129 11:10:11.083313 2192 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.1.0-f-d3e806da58\" not found" node="ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:11.194551 kubelet[2192]: I0129 11:10:11.193573 2192 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:11.205028 kubelet[2192]: I0129 11:10:11.204976 2192 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:11.286827 kubelet[2192]: E0129 11:10:11.286774 2192 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186.1.0-f-d3e806da58\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:11.287521 kubelet[2192]: E0129 11:10:11.287486 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:12.758466 systemd[1]: Reloading requested from client PID 2467 ('systemctl') (unit session-5.scope)...
Jan 29 11:10:12.758499 systemd[1]: Reloading...
Jan 29 11:10:12.918150 zram_generator::config[2506]: No configuration found.
Jan 29 11:10:13.149664 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:10:13.272212 kubelet[2192]: W0129 11:10:13.270166 2192 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 11:10:13.272212 kubelet[2192]: E0129 11:10:13.271106 2192 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:13.339207 systemd[1]: Reloading finished in 579 ms.
Jan 29 11:10:13.387532 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:10:13.403048 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 11:10:13.403513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:10:13.403635 systemd[1]: kubelet.service: Consumed 1.563s CPU time, 111.8M memory peak, 0B memory swap peak.
Jan 29 11:10:13.423683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:10:13.632438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:10:13.633443 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 11:10:13.719535 kubelet[2557]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:10:13.720016 kubelet[2557]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 11:10:13.720126 kubelet[2557]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:10:13.720323 kubelet[2557]: I0129 11:10:13.720278 2557 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 11:10:13.727928 kubelet[2557]: I0129 11:10:13.727883 2557 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 11:10:13.728204 kubelet[2557]: I0129 11:10:13.728190 2557 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 11:10:13.728599 kubelet[2557]: I0129 11:10:13.728578 2557 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 11:10:13.732527 kubelet[2557]: I0129 11:10:13.732490 2557 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 11:10:13.734709 kubelet[2557]: I0129 11:10:13.734522 2557 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 11:10:13.753653 kubelet[2557]: I0129 11:10:13.753533 2557 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 11:10:13.754552 kubelet[2557]: I0129 11:10:13.754500 2557 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 11:10:13.755655 kubelet[2557]: I0129 11:10:13.754547 2557 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-f-d3e806da58","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 11:10:13.755655 kubelet[2557]: I0129 11:10:13.754931 2557 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 11:10:13.755655 kubelet[2557]: I0129 11:10:13.754951 2557 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 11:10:13.755655 kubelet[2557]: I0129 11:10:13.755011 2557 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:10:13.755655 kubelet[2557]: I0129 11:10:13.755196 2557 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 11:10:13.756017 kubelet[2557]: I0129 11:10:13.755212 2557 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 11:10:13.756017 kubelet[2557]: I0129 11:10:13.755247 2557 kubelet.go:312] "Adding apiserver pod source"
Jan 29 11:10:13.756017 kubelet[2557]: I0129 11:10:13.755268 2557 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 11:10:13.761106 kubelet[2557]: I0129 11:10:13.759543 2557 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:10:13.761106 kubelet[2557]: I0129 11:10:13.759758 2557 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:10:13.761106 kubelet[2557]: I0129 11:10:13.760321 2557 server.go:1264] "Started kubelet"
Jan 29 11:10:13.762936 kubelet[2557]: I0129 11:10:13.762906 2557 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:10:13.772493 kubelet[2557]: I0129 11:10:13.772419 2557 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:10:13.773430 kubelet[2557]: I0129 11:10:13.773362 2557 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:10:13.780616 kubelet[2557]: I0129 11:10:13.780582 2557 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 11:10:13.790659 kubelet[2557]: I0129 11:10:13.782838 2557 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 11:10:13.791169 kubelet[2557]: I0129 11:10:13.788487 2557 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 11:10:13.797406 kubelet[2557]: I0129 11:10:13.797373 2557 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:10:13.797762 kubelet[2557]: I0129 11:10:13.797747 2557 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:10:13.813367 kubelet[2557]: I0129 11:10:13.813298 2557 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:10:13.815149 kubelet[2557]: I0129 11:10:13.815047 2557 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:10:13.818944 kubelet[2557]: I0129 11:10:13.818456 2557 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:10:13.818944 kubelet[2557]: I0129 11:10:13.818535 2557 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 11:10:13.818944 kubelet[2557]: E0129 11:10:13.818627 2557 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:10:13.832572 kubelet[2557]: I0129 11:10:13.832521 2557 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:10:13.833490 kubelet[2557]: I0129 11:10:13.833450 2557 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:10:13.836614 kubelet[2557]: I0129 11:10:13.834581 2557 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:10:13.848193 kubelet[2557]: E0129 11:10:13.848161 2557 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:10:13.887777 kubelet[2557]: I0129 11:10:13.887733 2557 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:13.907418 kubelet[2557]: I0129 11:10:13.907386 2557 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:13.907921 kubelet[2557]: I0129 11:10:13.907900 2557 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:13.919172 kubelet[2557]: E0129 11:10:13.919139 2557 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 29 11:10:13.946053 kubelet[2557]: I0129 11:10:13.946027 2557 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 11:10:13.946366 kubelet[2557]: I0129 11:10:13.946348 2557 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 11:10:13.946510 kubelet[2557]: I0129 11:10:13.946499 2557 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:10:13.946771 kubelet[2557]: I0129 11:10:13.946755 2557 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 11:10:13.946881 kubelet[2557]: I0129 11:10:13.946852 2557 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 11:10:13.946969 kubelet[2557]: I0129 11:10:13.946958 2557 policy_none.go:49] "None policy: Start"
Jan 29 11:10:13.947933 kubelet[2557]: I0129 11:10:13.947914 2557 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 11:10:13.948045 kubelet[2557]: I0129 11:10:13.948038 2557 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:10:13.948442 kubelet[2557]: I0129 11:10:13.948426 2557 state_mem.go:75] "Updated machine memory state"
Jan 29 11:10:13.956926 kubelet[2557]: I0129 11:10:13.956896 2557 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:10:13.957608 kubelet[2557]: I0129 11:10:13.957565 2557 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:10:13.959160 kubelet[2557]: I0129 11:10:13.958972 2557 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:10:14.120810 kubelet[2557]: I0129 11:10:14.120595 2557 topology_manager.go:215] "Topology Admit Handler" podUID="54951527e3bd34ba7003868f3c87c317" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.120810 kubelet[2557]: I0129 11:10:14.120769 2557 topology_manager.go:215] "Topology Admit Handler" podUID="f3066f9d26a59e8bc234ee5946478b62" podNamespace="kube-system" podName="kube-scheduler-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.121361 kubelet[2557]: I0129 11:10:14.120851 2557 topology_manager.go:215] "Topology Admit Handler" podUID="3eb51796125993700dd8400f76a07a6e" podNamespace="kube-system" podName="kube-apiserver-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.133904 kubelet[2557]: W0129 11:10:14.133459 2557 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 11:10:14.136894 kubelet[2557]: W0129 11:10:14.136025 2557 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 11:10:14.138807 kubelet[2557]: W0129 11:10:14.138547 2557 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 11:10:14.138807 kubelet[2557]: E0129 11:10:14.138638 2557 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-f-d3e806da58\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.202544 kubelet[2557]: I0129 11:10:14.202505 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54951527e3bd34ba7003868f3c87c317-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-f-d3e806da58\" (UID: \"54951527e3bd34ba7003868f3c87c317\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.202995 kubelet[2557]: I0129 11:10:14.202779 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54951527e3bd34ba7003868f3c87c317-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-f-d3e806da58\" (UID: \"54951527e3bd34ba7003868f3c87c317\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.202995 kubelet[2557]: I0129 11:10:14.202808 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54951527e3bd34ba7003868f3c87c317-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-f-d3e806da58\" (UID: \"54951527e3bd34ba7003868f3c87c317\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.202995 kubelet[2557]: I0129 11:10:14.202826 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54951527e3bd34ba7003868f3c87c317-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-f-d3e806da58\" (UID: \"54951527e3bd34ba7003868f3c87c317\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.202995 kubelet[2557]: I0129 11:10:14.202845 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3eb51796125993700dd8400f76a07a6e-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-f-d3e806da58\" (UID: \"3eb51796125993700dd8400f76a07a6e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.202995 kubelet[2557]: I0129 11:10:14.202868 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3eb51796125993700dd8400f76a07a6e-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-f-d3e806da58\" (UID: \"3eb51796125993700dd8400f76a07a6e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.203448 kubelet[2557]: I0129 11:10:14.202900 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3eb51796125993700dd8400f76a07a6e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-f-d3e806da58\" (UID: \"3eb51796125993700dd8400f76a07a6e\") " pod="kube-system/kube-apiserver-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.203448 kubelet[2557]: I0129 11:10:14.202926 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54951527e3bd34ba7003868f3c87c317-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-f-d3e806da58\" (UID: \"54951527e3bd34ba7003868f3c87c317\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.203448 kubelet[2557]: I0129 11:10:14.202954 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3066f9d26a59e8bc234ee5946478b62-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-f-d3e806da58\" (UID: \"f3066f9d26a59e8bc234ee5946478b62\") " pod="kube-system/kube-scheduler-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.436801 kubelet[2557]: E0129 11:10:14.435859 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:14.436801 kubelet[2557]: E0129 11:10:14.436580 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:14.439621 kubelet[2557]: E0129 11:10:14.439513 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:14.759545 kubelet[2557]: I0129 11:10:14.759491 2557 apiserver.go:52] "Watching apiserver"
Jan 29 11:10:14.791621 kubelet[2557]: I0129 11:10:14.791558 2557 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 11:10:14.870529 kubelet[2557]: E0129 11:10:14.870491 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:14.874053 kubelet[2557]: E0129 11:10:14.872813 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:14.883491 kubelet[2557]: W0129 11:10:14.883450 2557 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 29 11:10:14.883651 kubelet[2557]: E0129 11:10:14.883632 2557 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186.1.0-f-d3e806da58\" already exists" pod="kube-system/kube-apiserver-ci-4186.1.0-f-d3e806da58"
Jan 29 11:10:14.888390 kubelet[2557]: E0129 11:10:14.888333 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:14.943904 kubelet[2557]: I0129 11:10:14.943807 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-f-d3e806da58" podStartSLOduration=1.9437808840000002 podStartE2EDuration="1.943780884s" podCreationTimestamp="2025-01-29 11:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:14.927958772 +0000 UTC m=+1.284139774" watchObservedRunningTime="2025-01-29 11:10:14.943780884 +0000 UTC m=+1.299961887" Jan 29 11:10:14.959475 kubelet[2557]: I0129 11:10:14.959386 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-f-d3e806da58" podStartSLOduration=0.959361244 podStartE2EDuration="959.361244ms" podCreationTimestamp="2025-01-29 11:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:14.944559266 +0000 UTC m=+1.300740273" watchObservedRunningTime="2025-01-29 11:10:14.959361244 +0000 UTC m=+1.315542249" Jan 29 11:10:14.993933 sudo[1615]: pam_unix(sudo:session): session closed for user root Jan 29 11:10:14.999863 sshd[1614]: Connection closed by 139.178.89.65 port 33926 Jan 29 11:10:15.000351 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:15.004971 systemd[1]: sshd@4-143.110.233.113:22-139.178.89.65:33926.service: Deactivated successfully. Jan 29 11:10:15.007652 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:10:15.008031 systemd[1]: session-5.scope: Consumed 5.087s CPU time, 187.7M memory peak, 0B memory swap peak. Jan 29 11:10:15.010861 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:10:15.013014 systemd-logind[1449]: Removed session 5. 
Jan 29 11:10:15.872912 kubelet[2557]: E0129 11:10:15.872879 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:16.875297 kubelet[2557]: E0129 11:10:16.875214 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:18.050670 kubelet[2557]: E0129 11:10:18.050599 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:18.068021 kubelet[2557]: I0129 11:10:18.067665 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-f-d3e806da58" podStartSLOduration=4.067640937 podStartE2EDuration="4.067640937s" podCreationTimestamp="2025-01-29 11:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:14.961065389 +0000 UTC m=+1.317246393" watchObservedRunningTime="2025-01-29 11:10:18.067640937 +0000 UTC m=+4.423821941" Jan 29 11:10:18.879460 kubelet[2557]: E0129 11:10:18.878952 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:22.789716 kubelet[2557]: E0129 11:10:22.789616 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:22.884930 kubelet[2557]: E0129 11:10:22.884804 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:25.330529 kubelet[2557]: E0129 11:10:25.329562 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:25.766987 update_engine[1450]: I20250129 11:10:25.766120 1450 update_attempter.cc:509] Updating boot flags... Jan 29 11:10:25.803132 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2626) Jan 29 11:10:27.423753 kubelet[2557]: I0129 11:10:27.423726 2557 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:10:27.425388 kubelet[2557]: I0129 11:10:27.424923 2557 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:10:27.425439 containerd[1474]: time="2025-01-29T11:10:27.424705331Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:10:28.085839 kubelet[2557]: I0129 11:10:28.083963 2557 topology_manager.go:215] "Topology Admit Handler" podUID="ba6339dd-6eb6-44e9-b472-685f9cc8915c" podNamespace="kube-system" podName="kube-proxy-wj4fv" Jan 29 11:10:28.091589 kubelet[2557]: I0129 11:10:28.091539 2557 topology_manager.go:215] "Topology Admit Handler" podUID="8db19af7-67df-41bd-83e0-bb6799c5a4a4" podNamespace="kube-flannel" podName="kube-flannel-ds-9wjw6" Jan 29 11:10:28.097887 systemd[1]: Created slice kubepods-besteffort-podba6339dd_6eb6_44e9_b472_685f9cc8915c.slice - libcontainer container kubepods-besteffort-podba6339dd_6eb6_44e9_b472_685f9cc8915c.slice. Jan 29 11:10:28.115487 systemd[1]: Created slice kubepods-burstable-pod8db19af7_67df_41bd_83e0_bb6799c5a4a4.slice - libcontainer container kubepods-burstable-pod8db19af7_67df_41bd_83e0_bb6799c5a4a4.slice. 
Jan 29 11:10:28.191308 kubelet[2557]: I0129 11:10:28.191207 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba6339dd-6eb6-44e9-b472-685f9cc8915c-lib-modules\") pod \"kube-proxy-wj4fv\" (UID: \"ba6339dd-6eb6-44e9-b472-685f9cc8915c\") " pod="kube-system/kube-proxy-wj4fv" Jan 29 11:10:28.191308 kubelet[2557]: I0129 11:10:28.191284 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/8db19af7-67df-41bd-83e0-bb6799c5a4a4-cni\") pod \"kube-flannel-ds-9wjw6\" (UID: \"8db19af7-67df-41bd-83e0-bb6799c5a4a4\") " pod="kube-flannel/kube-flannel-ds-9wjw6" Jan 29 11:10:28.191564 kubelet[2557]: I0129 11:10:28.191323 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8db19af7-67df-41bd-83e0-bb6799c5a4a4-xtables-lock\") pod \"kube-flannel-ds-9wjw6\" (UID: \"8db19af7-67df-41bd-83e0-bb6799c5a4a4\") " pod="kube-flannel/kube-flannel-ds-9wjw6" Jan 29 11:10:28.191564 kubelet[2557]: I0129 11:10:28.191359 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjr9\" (UniqueName: \"kubernetes.io/projected/ba6339dd-6eb6-44e9-b472-685f9cc8915c-kube-api-access-jvjr9\") pod \"kube-proxy-wj4fv\" (UID: \"ba6339dd-6eb6-44e9-b472-685f9cc8915c\") " pod="kube-system/kube-proxy-wj4fv" Jan 29 11:10:28.191564 kubelet[2557]: I0129 11:10:28.191391 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/8db19af7-67df-41bd-83e0-bb6799c5a4a4-flannel-cfg\") pod \"kube-flannel-ds-9wjw6\" (UID: \"8db19af7-67df-41bd-83e0-bb6799c5a4a4\") " pod="kube-flannel/kube-flannel-ds-9wjw6" Jan 29 11:10:28.191564 kubelet[2557]: I0129 11:10:28.191440 2557 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/8db19af7-67df-41bd-83e0-bb6799c5a4a4-cni-plugin\") pod \"kube-flannel-ds-9wjw6\" (UID: \"8db19af7-67df-41bd-83e0-bb6799c5a4a4\") " pod="kube-flannel/kube-flannel-ds-9wjw6" Jan 29 11:10:28.191564 kubelet[2557]: I0129 11:10:28.191476 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ftg\" (UniqueName: \"kubernetes.io/projected/8db19af7-67df-41bd-83e0-bb6799c5a4a4-kube-api-access-d5ftg\") pod \"kube-flannel-ds-9wjw6\" (UID: \"8db19af7-67df-41bd-83e0-bb6799c5a4a4\") " pod="kube-flannel/kube-flannel-ds-9wjw6" Jan 29 11:10:28.191703 kubelet[2557]: I0129 11:10:28.191512 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba6339dd-6eb6-44e9-b472-685f9cc8915c-kube-proxy\") pod \"kube-proxy-wj4fv\" (UID: \"ba6339dd-6eb6-44e9-b472-685f9cc8915c\") " pod="kube-system/kube-proxy-wj4fv" Jan 29 11:10:28.191703 kubelet[2557]: I0129 11:10:28.191540 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba6339dd-6eb6-44e9-b472-685f9cc8915c-xtables-lock\") pod \"kube-proxy-wj4fv\" (UID: \"ba6339dd-6eb6-44e9-b472-685f9cc8915c\") " pod="kube-system/kube-proxy-wj4fv" Jan 29 11:10:28.191703 kubelet[2557]: I0129 11:10:28.191568 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8db19af7-67df-41bd-83e0-bb6799c5a4a4-run\") pod \"kube-flannel-ds-9wjw6\" (UID: \"8db19af7-67df-41bd-83e0-bb6799c5a4a4\") " pod="kube-flannel/kube-flannel-ds-9wjw6" Jan 29 11:10:28.408358 kubelet[2557]: E0129 11:10:28.407881 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:28.409678 containerd[1474]: time="2025-01-29T11:10:28.409606995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj4fv,Uid:ba6339dd-6eb6-44e9-b472-685f9cc8915c,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:28.418699 kubelet[2557]: E0129 11:10:28.418647 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:28.420366 containerd[1474]: time="2025-01-29T11:10:28.419861079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9wjw6,Uid:8db19af7-67df-41bd-83e0-bb6799c5a4a4,Namespace:kube-flannel,Attempt:0,}" Jan 29 11:10:28.448421 containerd[1474]: time="2025-01-29T11:10:28.448307217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:28.449266 containerd[1474]: time="2025-01-29T11:10:28.449209827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:28.449335 containerd[1474]: time="2025-01-29T11:10:28.449271848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:28.449488 containerd[1474]: time="2025-01-29T11:10:28.449456653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:28.488163 systemd[1]: Started cri-containerd-ddfda00ce473840b132c3d4780d4b0ee56d2828fc5945e5d06cb5725498efef9.scope - libcontainer container ddfda00ce473840b132c3d4780d4b0ee56d2828fc5945e5d06cb5725498efef9. 
Jan 29 11:10:28.509459 containerd[1474]: time="2025-01-29T11:10:28.507090973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:28.509459 containerd[1474]: time="2025-01-29T11:10:28.507247318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:28.509459 containerd[1474]: time="2025-01-29T11:10:28.507276224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:28.509459 containerd[1474]: time="2025-01-29T11:10:28.507596695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:28.548044 systemd[1]: Started cri-containerd-bebecfc45f6dce66438c891c45800fe648c68acce261a7ad27aaec293c94a1ec.scope - libcontainer container bebecfc45f6dce66438c891c45800fe648c68acce261a7ad27aaec293c94a1ec. 
Jan 29 11:10:28.562173 containerd[1474]: time="2025-01-29T11:10:28.562122109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj4fv,Uid:ba6339dd-6eb6-44e9-b472-685f9cc8915c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddfda00ce473840b132c3d4780d4b0ee56d2828fc5945e5d06cb5725498efef9\"" Jan 29 11:10:28.564362 kubelet[2557]: E0129 11:10:28.564313 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:28.573254 containerd[1474]: time="2025-01-29T11:10:28.572320136Z" level=info msg="CreateContainer within sandbox \"ddfda00ce473840b132c3d4780d4b0ee56d2828fc5945e5d06cb5725498efef9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:10:28.609787 containerd[1474]: time="2025-01-29T11:10:28.609664755Z" level=info msg="CreateContainer within sandbox \"ddfda00ce473840b132c3d4780d4b0ee56d2828fc5945e5d06cb5725498efef9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"48d6f5864b29a5d4b0f474c85d975ca3de9f6014c62ae2d9b1747d0e3e7535ec\"" Jan 29 11:10:28.613207 containerd[1474]: time="2025-01-29T11:10:28.611668093Z" level=info msg="StartContainer for \"48d6f5864b29a5d4b0f474c85d975ca3de9f6014c62ae2d9b1747d0e3e7535ec\"" Jan 29 11:10:28.656293 systemd[1]: Started cri-containerd-48d6f5864b29a5d4b0f474c85d975ca3de9f6014c62ae2d9b1747d0e3e7535ec.scope - libcontainer container 48d6f5864b29a5d4b0f474c85d975ca3de9f6014c62ae2d9b1747d0e3e7535ec. 
Jan 29 11:10:28.667641 containerd[1474]: time="2025-01-29T11:10:28.667197327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-9wjw6,Uid:8db19af7-67df-41bd-83e0-bb6799c5a4a4,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"bebecfc45f6dce66438c891c45800fe648c68acce261a7ad27aaec293c94a1ec\"" Jan 29 11:10:28.670763 kubelet[2557]: E0129 11:10:28.668304 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:28.672138 containerd[1474]: time="2025-01-29T11:10:28.671979758Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 29 11:10:28.706237 containerd[1474]: time="2025-01-29T11:10:28.706172211Z" level=info msg="StartContainer for \"48d6f5864b29a5d4b0f474c85d975ca3de9f6014c62ae2d9b1747d0e3e7535ec\" returns successfully" Jan 29 11:10:28.897504 kubelet[2557]: E0129 11:10:28.897450 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:28.928119 kubelet[2557]: I0129 11:10:28.927497 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wj4fv" podStartSLOduration=0.927469936 podStartE2EDuration="927.469936ms" podCreationTimestamp="2025-01-29 11:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:28.927289039 +0000 UTC m=+15.283470043" watchObservedRunningTime="2025-01-29 11:10:28.927469936 +0000 UTC m=+15.283650941" Jan 29 11:10:30.607881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1296662594.mount: Deactivated successfully. 
Jan 29 11:10:30.657133 containerd[1474]: time="2025-01-29T11:10:30.656811456Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:30.658425 containerd[1474]: time="2025-01-29T11:10:30.658352426Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 29 11:10:30.659883 containerd[1474]: time="2025-01-29T11:10:30.659820646Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:30.664228 containerd[1474]: time="2025-01-29T11:10:30.664163801Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:30.665669 containerd[1474]: time="2025-01-29T11:10:30.665463429Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.993438209s" Jan 29 11:10:30.665669 containerd[1474]: time="2025-01-29T11:10:30.665515037Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 29 11:10:30.670561 containerd[1474]: time="2025-01-29T11:10:30.670149657Z" level=info msg="CreateContainer within sandbox \"bebecfc45f6dce66438c891c45800fe648c68acce261a7ad27aaec293c94a1ec\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 29 11:10:30.687486 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount617951792.mount: Deactivated successfully. Jan 29 11:10:30.693798 containerd[1474]: time="2025-01-29T11:10:30.693725779Z" level=info msg="CreateContainer within sandbox \"bebecfc45f6dce66438c891c45800fe648c68acce261a7ad27aaec293c94a1ec\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"35a1971ccd1e76fd10c31777ae883f1362b88a7b7e8463e1a0c513da548fbe88\"" Jan 29 11:10:30.695110 containerd[1474]: time="2025-01-29T11:10:30.694844765Z" level=info msg="StartContainer for \"35a1971ccd1e76fd10c31777ae883f1362b88a7b7e8463e1a0c513da548fbe88\"" Jan 29 11:10:30.740409 systemd[1]: Started cri-containerd-35a1971ccd1e76fd10c31777ae883f1362b88a7b7e8463e1a0c513da548fbe88.scope - libcontainer container 35a1971ccd1e76fd10c31777ae883f1362b88a7b7e8463e1a0c513da548fbe88. Jan 29 11:10:30.779041 containerd[1474]: time="2025-01-29T11:10:30.778998280Z" level=info msg="StartContainer for \"35a1971ccd1e76fd10c31777ae883f1362b88a7b7e8463e1a0c513da548fbe88\" returns successfully" Jan 29 11:10:30.781454 systemd[1]: cri-containerd-35a1971ccd1e76fd10c31777ae883f1362b88a7b7e8463e1a0c513da548fbe88.scope: Deactivated successfully. 
Jan 29 11:10:30.832631 containerd[1474]: time="2025-01-29T11:10:30.832532376Z" level=info msg="shim disconnected" id=35a1971ccd1e76fd10c31777ae883f1362b88a7b7e8463e1a0c513da548fbe88 namespace=k8s.io Jan 29 11:10:30.832631 containerd[1474]: time="2025-01-29T11:10:30.832596366Z" level=warning msg="cleaning up after shim disconnected" id=35a1971ccd1e76fd10c31777ae883f1362b88a7b7e8463e1a0c513da548fbe88 namespace=k8s.io Jan 29 11:10:30.832631 containerd[1474]: time="2025-01-29T11:10:30.832607595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:30.908355 kubelet[2557]: E0129 11:10:30.907326 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:30.911107 containerd[1474]: time="2025-01-29T11:10:30.910865424Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 29 11:10:31.514401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35a1971ccd1e76fd10c31777ae883f1362b88a7b7e8463e1a0c513da548fbe88-rootfs.mount: Deactivated successfully. Jan 29 11:10:32.919757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1357870631.mount: Deactivated successfully. 
Jan 29 11:10:33.677723 containerd[1474]: time="2025-01-29T11:10:33.677638821Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:33.679927 containerd[1474]: time="2025-01-29T11:10:33.679869876Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 29 11:10:33.681650 containerd[1474]: time="2025-01-29T11:10:33.681571589Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:33.688725 containerd[1474]: time="2025-01-29T11:10:33.688638488Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:33.690454 containerd[1474]: time="2025-01-29T11:10:33.690328064Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.779277454s" Jan 29 11:10:33.690454 containerd[1474]: time="2025-01-29T11:10:33.690378926Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 29 11:10:33.693453 containerd[1474]: time="2025-01-29T11:10:33.693276742Z" level=info msg="CreateContainer within sandbox \"bebecfc45f6dce66438c891c45800fe648c68acce261a7ad27aaec293c94a1ec\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:10:33.711624 containerd[1474]: time="2025-01-29T11:10:33.711481106Z" level=info msg="CreateContainer within 
sandbox \"bebecfc45f6dce66438c891c45800fe648c68acce261a7ad27aaec293c94a1ec\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ba74d409b2fda40eabfc0a7fadfbbd31548c5baec7bbac2b6a023d412c26e973\"" Jan 29 11:10:33.713186 containerd[1474]: time="2025-01-29T11:10:33.712416140Z" level=info msg="StartContainer for \"ba74d409b2fda40eabfc0a7fadfbbd31548c5baec7bbac2b6a023d412c26e973\"" Jan 29 11:10:33.750725 systemd[1]: Started cri-containerd-ba74d409b2fda40eabfc0a7fadfbbd31548c5baec7bbac2b6a023d412c26e973.scope - libcontainer container ba74d409b2fda40eabfc0a7fadfbbd31548c5baec7bbac2b6a023d412c26e973. Jan 29 11:10:33.800956 systemd[1]: cri-containerd-ba74d409b2fda40eabfc0a7fadfbbd31548c5baec7bbac2b6a023d412c26e973.scope: Deactivated successfully. Jan 29 11:10:33.808697 containerd[1474]: time="2025-01-29T11:10:33.808456427Z" level=info msg="StartContainer for \"ba74d409b2fda40eabfc0a7fadfbbd31548c5baec7bbac2b6a023d412c26e973\" returns successfully" Jan 29 11:10:33.835545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba74d409b2fda40eabfc0a7fadfbbd31548c5baec7bbac2b6a023d412c26e973-rootfs.mount: Deactivated successfully. 
Jan 29 11:10:33.835976 kubelet[2557]: I0129 11:10:33.835685 2557 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:10:33.923762 kubelet[2557]: I0129 11:10:33.923217 2557 topology_manager.go:215] "Topology Admit Handler" podUID="31f4dd8d-999c-435f-bad3-373a38846b41" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5lb5n" Jan 29 11:10:33.924027 kubelet[2557]: I0129 11:10:33.923920 2557 topology_manager.go:215] "Topology Admit Handler" podUID="b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938" podNamespace="kube-system" podName="coredns-7db6d8ff4d-z4pp2" Jan 29 11:10:33.929160 containerd[1474]: time="2025-01-29T11:10:33.928371092Z" level=info msg="shim disconnected" id=ba74d409b2fda40eabfc0a7fadfbbd31548c5baec7bbac2b6a023d412c26e973 namespace=k8s.io Jan 29 11:10:33.929160 containerd[1474]: time="2025-01-29T11:10:33.929094845Z" level=warning msg="cleaning up after shim disconnected" id=ba74d409b2fda40eabfc0a7fadfbbd31548c5baec7bbac2b6a023d412c26e973 namespace=k8s.io Jan 29 11:10:33.929160 containerd[1474]: time="2025-01-29T11:10:33.929109537Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:10:33.942661 systemd[1]: Created slice kubepods-burstable-pod31f4dd8d_999c_435f_bad3_373a38846b41.slice - libcontainer container kubepods-burstable-pod31f4dd8d_999c_435f_bad3_373a38846b41.slice. Jan 29 11:10:33.948890 kubelet[2557]: E0129 11:10:33.948058 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 29 11:10:33.963737 systemd[1]: Created slice kubepods-burstable-podb4a7e5fa_2f7d_470f_a90f_e5c8aed0f938.slice - libcontainer container kubepods-burstable-podb4a7e5fa_2f7d_470f_a90f_e5c8aed0f938.slice. 
Jan 29 11:10:34.044963 kubelet[2557]: I0129 11:10:34.044892 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31f4dd8d-999c-435f-bad3-373a38846b41-config-volume\") pod \"coredns-7db6d8ff4d-5lb5n\" (UID: \"31f4dd8d-999c-435f-bad3-373a38846b41\") " pod="kube-system/coredns-7db6d8ff4d-5lb5n"
Jan 29 11:10:34.044963 kubelet[2557]: I0129 11:10:34.044971 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-866ct\" (UniqueName: \"kubernetes.io/projected/b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938-kube-api-access-866ct\") pod \"coredns-7db6d8ff4d-z4pp2\" (UID: \"b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938\") " pod="kube-system/coredns-7db6d8ff4d-z4pp2"
Jan 29 11:10:34.045274 kubelet[2557]: I0129 11:10:34.045030 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f95k6\" (UniqueName: \"kubernetes.io/projected/31f4dd8d-999c-435f-bad3-373a38846b41-kube-api-access-f95k6\") pod \"coredns-7db6d8ff4d-5lb5n\" (UID: \"31f4dd8d-999c-435f-bad3-373a38846b41\") " pod="kube-system/coredns-7db6d8ff4d-5lb5n"
Jan 29 11:10:34.045849 kubelet[2557]: I0129 11:10:34.045789 2557 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938-config-volume\") pod \"coredns-7db6d8ff4d-z4pp2\" (UID: \"b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938\") " pod="kube-system/coredns-7db6d8ff4d-z4pp2"
Jan 29 11:10:34.254287 kubelet[2557]: E0129 11:10:34.254052 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:34.255289 containerd[1474]: time="2025-01-29T11:10:34.255223730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5lb5n,Uid:31f4dd8d-999c-435f-bad3-373a38846b41,Namespace:kube-system,Attempt:0,}"
Jan 29 11:10:34.270107 kubelet[2557]: E0129 11:10:34.269726 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:34.275965 containerd[1474]: time="2025-01-29T11:10:34.275052461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z4pp2,Uid:b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938,Namespace:kube-system,Attempt:0,}"
Jan 29 11:10:34.309122 containerd[1474]: time="2025-01-29T11:10:34.308815692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5lb5n,Uid:31f4dd8d-999c-435f-bad3-373a38846b41,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f46008d5f813859fdf3120a548b690c8d96f34cb36f55151134f50020d8c161\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:10:34.309331 kubelet[2557]: E0129 11:10:34.309133 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f46008d5f813859fdf3120a548b690c8d96f34cb36f55151134f50020d8c161\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:10:34.309331 kubelet[2557]: E0129 11:10:34.309213 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f46008d5f813859fdf3120a548b690c8d96f34cb36f55151134f50020d8c161\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-5lb5n"
Jan 29 11:10:34.309331 kubelet[2557]: E0129 11:10:34.309239 2557 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f46008d5f813859fdf3120a548b690c8d96f34cb36f55151134f50020d8c161\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-5lb5n"
Jan 29 11:10:34.309331 kubelet[2557]: E0129 11:10:34.309286 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5lb5n_kube-system(31f4dd8d-999c-435f-bad3-373a38846b41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5lb5n_kube-system(31f4dd8d-999c-435f-bad3-373a38846b41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f46008d5f813859fdf3120a548b690c8d96f34cb36f55151134f50020d8c161\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-5lb5n" podUID="31f4dd8d-999c-435f-bad3-373a38846b41"
Jan 29 11:10:34.314312 containerd[1474]: time="2025-01-29T11:10:34.314207979Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z4pp2,Uid:b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7b326c3626d5ba51ef12dad499e5d2310f23f5f2e6d40098edca885dfe787fd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:10:34.314531 kubelet[2557]: E0129 11:10:34.314481 2557 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7b326c3626d5ba51ef12dad499e5d2310f23f5f2e6d40098edca885dfe787fd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 29 11:10:34.314880 kubelet[2557]: E0129 11:10:34.314540 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7b326c3626d5ba51ef12dad499e5d2310f23f5f2e6d40098edca885dfe787fd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-z4pp2"
Jan 29 11:10:34.314880 kubelet[2557]: E0129 11:10:34.314560 2557 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7b326c3626d5ba51ef12dad499e5d2310f23f5f2e6d40098edca885dfe787fd\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-z4pp2"
Jan 29 11:10:34.314880 kubelet[2557]: E0129 11:10:34.314603 2557 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-z4pp2_kube-system(b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-z4pp2_kube-system(b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7b326c3626d5ba51ef12dad499e5d2310f23f5f2e6d40098edca885dfe787fd\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-z4pp2" podUID="b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938"
Jan 29 11:10:34.774541 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f46008d5f813859fdf3120a548b690c8d96f34cb36f55151134f50020d8c161-shm.mount: Deactivated successfully.
Jan 29 11:10:34.951634 kubelet[2557]: E0129 11:10:34.951582 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:34.955772 containerd[1474]: time="2025-01-29T11:10:34.955726613Z" level=info msg="CreateContainer within sandbox \"bebecfc45f6dce66438c891c45800fe648c68acce261a7ad27aaec293c94a1ec\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 29 11:10:34.980005 containerd[1474]: time="2025-01-29T11:10:34.979945149Z" level=info msg="CreateContainer within sandbox \"bebecfc45f6dce66438c891c45800fe648c68acce261a7ad27aaec293c94a1ec\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"992fe113e1ba6d1c392c9023a2b1604a988b608f947a4e9cd0d3dcc32f114461\""
Jan 29 11:10:34.981389 containerd[1474]: time="2025-01-29T11:10:34.980541560Z" level=info msg="StartContainer for \"992fe113e1ba6d1c392c9023a2b1604a988b608f947a4e9cd0d3dcc32f114461\""
Jan 29 11:10:35.034376 systemd[1]: Started cri-containerd-992fe113e1ba6d1c392c9023a2b1604a988b608f947a4e9cd0d3dcc32f114461.scope - libcontainer container 992fe113e1ba6d1c392c9023a2b1604a988b608f947a4e9cd0d3dcc32f114461.
Jan 29 11:10:35.075188 containerd[1474]: time="2025-01-29T11:10:35.075134753Z" level=info msg="StartContainer for \"992fe113e1ba6d1c392c9023a2b1604a988b608f947a4e9cd0d3dcc32f114461\" returns successfully"
Jan 29 11:10:35.956476 kubelet[2557]: E0129 11:10:35.956116 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:35.971764 kubelet[2557]: I0129 11:10:35.971456 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-9wjw6" podStartSLOduration=2.951104624 podStartE2EDuration="7.971437238s" podCreationTimestamp="2025-01-29 11:10:28 +0000 UTC" firstStartedPulling="2025-01-29 11:10:28.671425241 +0000 UTC m=+15.027606223" lastFinishedPulling="2025-01-29 11:10:33.691757854 +0000 UTC m=+20.047938837" observedRunningTime="2025-01-29 11:10:35.970939284 +0000 UTC m=+22.327120286" watchObservedRunningTime="2025-01-29 11:10:35.971437238 +0000 UTC m=+22.327618240"
Jan 29 11:10:36.144223 systemd-networkd[1361]: flannel.1: Link UP
Jan 29 11:10:36.144232 systemd-networkd[1361]: flannel.1: Gained carrier
Jan 29 11:10:36.958446 kubelet[2557]: E0129 11:10:36.958358 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:37.419243 systemd-networkd[1361]: flannel.1: Gained IPv6LL
Jan 29 11:10:45.819971 kubelet[2557]: E0129 11:10:45.819853 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:45.821433 containerd[1474]: time="2025-01-29T11:10:45.820799322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5lb5n,Uid:31f4dd8d-999c-435f-bad3-373a38846b41,Namespace:kube-system,Attempt:0,}"
Jan 29 11:10:45.871822 systemd-networkd[1361]: cni0: Link UP
Jan 29 11:10:45.871832 systemd-networkd[1361]: cni0: Gained carrier
Jan 29 11:10:45.877260 systemd-networkd[1361]: cni0: Lost carrier
Jan 29 11:10:45.884931 systemd-networkd[1361]: veth86033f38: Link UP
Jan 29 11:10:45.891144 kernel: cni0: port 1(veth86033f38) entered blocking state
Jan 29 11:10:45.891292 kernel: cni0: port 1(veth86033f38) entered disabled state
Jan 29 11:10:45.899500 kernel: veth86033f38: entered allmulticast mode
Jan 29 11:10:45.902230 kernel: veth86033f38: entered promiscuous mode
Jan 29 11:10:45.902579 kernel: cni0: port 1(veth86033f38) entered blocking state
Jan 29 11:10:45.904233 kernel: cni0: port 1(veth86033f38) entered forwarding state
Jan 29 11:10:45.904315 kernel: cni0: port 1(veth86033f38) entered disabled state
Jan 29 11:10:45.915801 kernel: cni0: port 1(veth86033f38) entered blocking state
Jan 29 11:10:45.915909 kernel: cni0: port 1(veth86033f38) entered forwarding state
Jan 29 11:10:45.916210 systemd-networkd[1361]: veth86033f38: Gained carrier
Jan 29 11:10:45.916927 systemd-networkd[1361]: cni0: Gained carrier
Jan 29 11:10:45.923614 containerd[1474]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c938), "name":"cbr0", "type":"bridge"}
Jan 29 11:10:45.923614 containerd[1474]: delegateAdd: netconf sent to delegate plugin:
Jan 29 11:10:45.949112 containerd[1474]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T11:10:45.948812710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:10:45.949112 containerd[1474]: time="2025-01-29T11:10:45.948906808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:10:45.949112 containerd[1474]: time="2025-01-29T11:10:45.948932816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:45.949909 containerd[1474]: time="2025-01-29T11:10:45.949524644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:45.981368 systemd[1]: Started cri-containerd-c88708d6d6b4f70057a9325370320ef5789fc15267c3174ca0a6943a02794e39.scope - libcontainer container c88708d6d6b4f70057a9325370320ef5789fc15267c3174ca0a6943a02794e39.
Jan 29 11:10:46.034111 containerd[1474]: time="2025-01-29T11:10:46.034016847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5lb5n,Uid:31f4dd8d-999c-435f-bad3-373a38846b41,Namespace:kube-system,Attempt:0,} returns sandbox id \"c88708d6d6b4f70057a9325370320ef5789fc15267c3174ca0a6943a02794e39\""
Jan 29 11:10:46.035971 kubelet[2557]: E0129 11:10:46.035693 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:46.040430 containerd[1474]: time="2025-01-29T11:10:46.040223317Z" level=info msg="CreateContainer within sandbox \"c88708d6d6b4f70057a9325370320ef5789fc15267c3174ca0a6943a02794e39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 11:10:46.059993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775203181.mount: Deactivated successfully.
Jan 29 11:10:46.061466 containerd[1474]: time="2025-01-29T11:10:46.061414245Z" level=info msg="CreateContainer within sandbox \"c88708d6d6b4f70057a9325370320ef5789fc15267c3174ca0a6943a02794e39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"28db74e4511130da9dd261cdaf0c281b4a8eeb842f5260173aa02c1dcea37dc2\""
Jan 29 11:10:46.062385 containerd[1474]: time="2025-01-29T11:10:46.062341100Z" level=info msg="StartContainer for \"28db74e4511130da9dd261cdaf0c281b4a8eeb842f5260173aa02c1dcea37dc2\""
Jan 29 11:10:46.098383 systemd[1]: Started cri-containerd-28db74e4511130da9dd261cdaf0c281b4a8eeb842f5260173aa02c1dcea37dc2.scope - libcontainer container 28db74e4511130da9dd261cdaf0c281b4a8eeb842f5260173aa02c1dcea37dc2.
Jan 29 11:10:46.136214 containerd[1474]: time="2025-01-29T11:10:46.134514370Z" level=info msg="StartContainer for \"28db74e4511130da9dd261cdaf0c281b4a8eeb842f5260173aa02c1dcea37dc2\" returns successfully"
Jan 29 11:10:46.955264 systemd-networkd[1361]: cni0: Gained IPv6LL
Jan 29 11:10:46.981348 kubelet[2557]: E0129 11:10:46.981307 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:46.997959 kubelet[2557]: I0129 11:10:46.997302 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5lb5n" podStartSLOduration=18.997282998 podStartE2EDuration="18.997282998s" podCreationTimestamp="2025-01-29 11:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:46.996387522 +0000 UTC m=+33.352568524" watchObservedRunningTime="2025-01-29 11:10:46.997282998 +0000 UTC m=+33.353464000"
Jan 29 11:10:47.275262 systemd-networkd[1361]: veth86033f38: Gained IPv6LL
Jan 29 11:10:47.820241 kubelet[2557]: E0129 11:10:47.819697 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:47.820661 containerd[1474]: time="2025-01-29T11:10:47.820627859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z4pp2,Uid:b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938,Namespace:kube-system,Attempt:0,}"
Jan 29 11:10:47.848739 systemd-networkd[1361]: veth980824b5: Link UP
Jan 29 11:10:47.851316 kernel: cni0: port 2(veth980824b5) entered blocking state
Jan 29 11:10:47.851456 kernel: cni0: port 2(veth980824b5) entered disabled state
Jan 29 11:10:47.852229 kernel: veth980824b5: entered allmulticast mode
Jan 29 11:10:47.854208 kernel: veth980824b5: entered promiscuous mode
Jan 29 11:10:47.861791 kernel: cni0: port 2(veth980824b5) entered blocking state
Jan 29 11:10:47.861878 kernel: cni0: port 2(veth980824b5) entered forwarding state
Jan 29 11:10:47.863956 systemd-networkd[1361]: veth980824b5: Gained carrier
Jan 29 11:10:47.865788 containerd[1474]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"}
Jan 29 11:10:47.865788 containerd[1474]: delegateAdd: netconf sent to delegate plugin:
Jan 29 11:10:47.888324 containerd[1474]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T11:10:47.888201326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:10:47.888324 containerd[1474]: time="2025-01-29T11:10:47.888268262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:10:47.888324 containerd[1474]: time="2025-01-29T11:10:47.888283888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:47.889593 containerd[1474]: time="2025-01-29T11:10:47.889396429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:47.913527 systemd[1]: run-containerd-runc-k8s.io-994a00ac0727738adad9f86aeca51450740383b017888e42a4f0e451f56aade2-runc.3gQ8rn.mount: Deactivated successfully.
Jan 29 11:10:47.920263 systemd[1]: Started cri-containerd-994a00ac0727738adad9f86aeca51450740383b017888e42a4f0e451f56aade2.scope - libcontainer container 994a00ac0727738adad9f86aeca51450740383b017888e42a4f0e451f56aade2.
Jan 29 11:10:47.977203 containerd[1474]: time="2025-01-29T11:10:47.977000743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z4pp2,Uid:b4a7e5fa-2f7d-470f-a90f-e5c8aed0f938,Namespace:kube-system,Attempt:0,} returns sandbox id \"994a00ac0727738adad9f86aeca51450740383b017888e42a4f0e451f56aade2\""
Jan 29 11:10:47.980280 kubelet[2557]: E0129 11:10:47.980199 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:47.983772 containerd[1474]: time="2025-01-29T11:10:47.983737412Z" level=info msg="CreateContainer within sandbox \"994a00ac0727738adad9f86aeca51450740383b017888e42a4f0e451f56aade2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 11:10:47.988511 kubelet[2557]: E0129 11:10:47.988392 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:48.006558 containerd[1474]: time="2025-01-29T11:10:48.005904380Z" level=info msg="CreateContainer within sandbox \"994a00ac0727738adad9f86aeca51450740383b017888e42a4f0e451f56aade2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad482c56aa58c6179e6dc0f3c1c1db495c06dce588645f1fdb9f95f158a9cca3\""
Jan 29 11:10:48.006199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount698392696.mount: Deactivated successfully.
Jan 29 11:10:48.008476 containerd[1474]: time="2025-01-29T11:10:48.008128011Z" level=info msg="StartContainer for \"ad482c56aa58c6179e6dc0f3c1c1db495c06dce588645f1fdb9f95f158a9cca3\""
Jan 29 11:10:48.039303 systemd[1]: Started cri-containerd-ad482c56aa58c6179e6dc0f3c1c1db495c06dce588645f1fdb9f95f158a9cca3.scope - libcontainer container ad482c56aa58c6179e6dc0f3c1c1db495c06dce588645f1fdb9f95f158a9cca3.
Jan 29 11:10:48.077470 containerd[1474]: time="2025-01-29T11:10:48.077358668Z" level=info msg="StartContainer for \"ad482c56aa58c6179e6dc0f3c1c1db495c06dce588645f1fdb9f95f158a9cca3\" returns successfully"
Jan 29 11:10:48.992170 kubelet[2557]: E0129 11:10:48.991860 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:48.992170 kubelet[2557]: E0129 11:10:48.991983 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:49.707263 systemd-networkd[1361]: veth980824b5: Gained IPv6LL
Jan 29 11:10:49.993574 kubelet[2557]: E0129 11:10:49.993353 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:50.004518 kubelet[2557]: I0129 11:10:50.004224 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-z4pp2" podStartSLOduration=22.004200383 podStartE2EDuration="22.004200383s" podCreationTimestamp="2025-01-29 11:10:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:49.006467778 +0000 UTC m=+35.362648783" watchObservedRunningTime="2025-01-29 11:10:50.004200383 +0000 UTC m=+36.360381380"
Jan 29 11:10:50.995937 kubelet[2557]: E0129 11:10:50.995899 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:10:51.998161 kubelet[2557]: E0129 11:10:51.997741 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:11:04.924706 systemd[1]: Started sshd@5-143.110.233.113:22-139.178.89.65:55250.service - OpenSSH per-connection server daemon (139.178.89.65:55250).
Jan 29 11:11:05.024308 sshd[3515]: Accepted publickey for core from 139.178.89.65 port 55250 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:05.026515 sshd-session[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:05.038226 systemd-logind[1449]: New session 6 of user core.
Jan 29 11:11:05.046398 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 11:11:05.247179 sshd[3517]: Connection closed by 139.178.89.65 port 55250
Jan 29 11:11:05.248161 sshd-session[3515]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:05.254004 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit.
Jan 29 11:11:05.255127 systemd[1]: sshd@5-143.110.233.113:22-139.178.89.65:55250.service: Deactivated successfully.
Jan 29 11:11:05.258417 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 11:11:05.260945 systemd-logind[1449]: Removed session 6.
Jan 29 11:11:10.272614 systemd[1]: Started sshd@6-143.110.233.113:22-139.178.89.65:55266.service - OpenSSH per-connection server daemon (139.178.89.65:55266).
Jan 29 11:11:10.328897 sshd[3550]: Accepted publickey for core from 139.178.89.65 port 55266 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:10.330818 sshd-session[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:10.338338 systemd-logind[1449]: New session 7 of user core.
Jan 29 11:11:10.349392 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 11:11:10.493930 sshd[3552]: Connection closed by 139.178.89.65 port 55266
Jan 29 11:11:10.495063 sshd-session[3550]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:10.500604 systemd[1]: sshd@6-143.110.233.113:22-139.178.89.65:55266.service: Deactivated successfully.
Jan 29 11:11:10.503011 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 11:11:10.503740 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit.
Jan 29 11:11:10.504635 systemd-logind[1449]: Removed session 7.
Jan 29 11:11:15.523554 systemd[1]: Started sshd@7-143.110.233.113:22-139.178.89.65:41142.service - OpenSSH per-connection server daemon (139.178.89.65:41142).
Jan 29 11:11:15.573956 sshd[3586]: Accepted publickey for core from 139.178.89.65 port 41142 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:15.575947 sshd-session[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:15.583151 systemd-logind[1449]: New session 8 of user core.
Jan 29 11:11:15.589314 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 11:11:15.740650 sshd[3588]: Connection closed by 139.178.89.65 port 41142
Jan 29 11:11:15.741867 sshd-session[3586]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:15.746769 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit.
Jan 29 11:11:15.748808 systemd[1]: sshd@7-143.110.233.113:22-139.178.89.65:41142.service: Deactivated successfully.
Jan 29 11:11:15.751841 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 11:11:15.753625 systemd-logind[1449]: Removed session 8.
Jan 29 11:11:20.754041 systemd[1]: Started sshd@8-143.110.233.113:22-139.178.89.65:41146.service - OpenSSH per-connection server daemon (139.178.89.65:41146).
Jan 29 11:11:20.816877 sshd[3622]: Accepted publickey for core from 139.178.89.65 port 41146 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:20.818626 sshd-session[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:20.824351 systemd-logind[1449]: New session 9 of user core.
Jan 29 11:11:20.830283 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 11:11:20.966068 sshd[3624]: Connection closed by 139.178.89.65 port 41146
Jan 29 11:11:20.965958 sshd-session[3622]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:20.979206 systemd[1]: sshd@8-143.110.233.113:22-139.178.89.65:41146.service: Deactivated successfully.
Jan 29 11:11:20.982825 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 11:11:20.984680 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit.
Jan 29 11:11:20.989525 systemd[1]: Started sshd@9-143.110.233.113:22-139.178.89.65:41150.service - OpenSSH per-connection server daemon (139.178.89.65:41150).
Jan 29 11:11:20.992007 systemd-logind[1449]: Removed session 9.
Jan 29 11:11:21.060016 sshd[3635]: Accepted publickey for core from 139.178.89.65 port 41150 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:21.062856 sshd-session[3635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:21.069653 systemd-logind[1449]: New session 10 of user core.
Jan 29 11:11:21.076446 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 11:11:21.267183 sshd[3637]: Connection closed by 139.178.89.65 port 41150
Jan 29 11:11:21.267566 sshd-session[3635]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:21.273422 systemd[1]: sshd@9-143.110.233.113:22-139.178.89.65:41150.service: Deactivated successfully.
Jan 29 11:11:21.273771 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit.
Jan 29 11:11:21.276762 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 11:11:21.286433 systemd-logind[1449]: Removed session 10.
Jan 29 11:11:21.288205 systemd[1]: Started sshd@10-143.110.233.113:22-139.178.89.65:36042.service - OpenSSH per-connection server daemon (139.178.89.65:36042).
Jan 29 11:11:21.388629 sshd[3645]: Accepted publickey for core from 139.178.89.65 port 36042 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:21.390572 sshd-session[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:21.397441 systemd-logind[1449]: New session 11 of user core.
Jan 29 11:11:21.413572 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 11:11:21.561697 sshd[3653]: Connection closed by 139.178.89.65 port 36042
Jan 29 11:11:21.562509 sshd-session[3645]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:21.566149 systemd[1]: sshd@10-143.110.233.113:22-139.178.89.65:36042.service: Deactivated successfully.
Jan 29 11:11:21.568707 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 11:11:21.570805 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit.
Jan 29 11:11:21.571818 systemd-logind[1449]: Removed session 11.
Jan 29 11:11:26.579585 systemd[1]: Started sshd@11-143.110.233.113:22-139.178.89.65:36050.service - OpenSSH per-connection server daemon (139.178.89.65:36050).
Jan 29 11:11:26.654461 sshd[3700]: Accepted publickey for core from 139.178.89.65 port 36050 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:26.657723 sshd-session[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:26.665796 systemd-logind[1449]: New session 12 of user core.
Jan 29 11:11:26.673415 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 11:11:26.833943 sshd[3702]: Connection closed by 139.178.89.65 port 36050
Jan 29 11:11:26.835203 sshd-session[3700]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:26.840271 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit.
Jan 29 11:11:26.840974 systemd[1]: sshd@11-143.110.233.113:22-139.178.89.65:36050.service: Deactivated successfully.
Jan 29 11:11:26.843669 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 11:11:26.845200 systemd-logind[1449]: Removed session 12.
Jan 29 11:11:29.821472 kubelet[2557]: E0129 11:11:29.819922 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:11:29.822105 kubelet[2557]: E0129 11:11:29.822042 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:11:31.858568 systemd[1]: Started sshd@12-143.110.233.113:22-139.178.89.65:50576.service - OpenSSH per-connection server daemon (139.178.89.65:50576).
Jan 29 11:11:31.914191 sshd[3736]: Accepted publickey for core from 139.178.89.65 port 50576 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:31.916341 sshd-session[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:31.924516 systemd-logind[1449]: New session 13 of user core.
Jan 29 11:11:31.932371 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 11:11:32.070709 sshd[3738]: Connection closed by 139.178.89.65 port 50576
Jan 29 11:11:32.071625 sshd-session[3736]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:32.076167 systemd[1]: sshd@12-143.110.233.113:22-139.178.89.65:50576.service: Deactivated successfully.
Jan 29 11:11:32.079339 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 11:11:32.080509 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit.
Jan 29 11:11:32.081491 systemd-logind[1449]: Removed session 13.
Jan 29 11:11:37.094451 systemd[1]: Started sshd@13-143.110.233.113:22-139.178.89.65:50588.service - OpenSSH per-connection server daemon (139.178.89.65:50588).
Jan 29 11:11:37.178564 sshd[3769]: Accepted publickey for core from 139.178.89.65 port 50588 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:37.181475 sshd-session[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:37.191245 systemd-logind[1449]: New session 14 of user core.
Jan 29 11:11:37.197374 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 11:11:37.343608 sshd[3771]: Connection closed by 139.178.89.65 port 50588
Jan 29 11:11:37.344666 sshd-session[3769]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:37.355523 systemd[1]: sshd@13-143.110.233.113:22-139.178.89.65:50588.service: Deactivated successfully.
Jan 29 11:11:37.359262 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 11:11:37.361261 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit.
Jan 29 11:11:37.365556 systemd[1]: Started sshd@14-143.110.233.113:22-139.178.89.65:50600.service - OpenSSH per-connection server daemon (139.178.89.65:50600).
Jan 29 11:11:37.367275 systemd-logind[1449]: Removed session 14.
Jan 29 11:11:37.430880 sshd[3782]: Accepted publickey for core from 139.178.89.65 port 50600 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:37.432804 sshd-session[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:37.440051 systemd-logind[1449]: New session 15 of user core.
Jan 29 11:11:37.450518 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 11:11:37.767748 sshd[3784]: Connection closed by 139.178.89.65 port 50600
Jan 29 11:11:37.768632 sshd-session[3782]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:37.781341 systemd[1]: sshd@14-143.110.233.113:22-139.178.89.65:50600.service: Deactivated successfully.
Jan 29 11:11:37.784340 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 11:11:37.785519 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit.
Jan 29 11:11:37.794661 systemd[1]: Started sshd@15-143.110.233.113:22-139.178.89.65:50604.service - OpenSSH per-connection server daemon (139.178.89.65:50604).
Jan 29 11:11:37.797830 systemd-logind[1449]: Removed session 15.
Jan 29 11:11:37.863896 sshd[3793]: Accepted publickey for core from 139.178.89.65 port 50604 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:37.866117 sshd-session[3793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:37.873986 systemd-logind[1449]: New session 16 of user core.
Jan 29 11:11:37.881436 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 11:11:39.671129 sshd[3795]: Connection closed by 139.178.89.65 port 50604
Jan 29 11:11:39.673496 sshd-session[3793]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:39.688498 systemd[1]: sshd@15-143.110.233.113:22-139.178.89.65:50604.service: Deactivated successfully.
Jan 29 11:11:39.692461 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 11:11:39.695754 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Jan 29 11:11:39.704465 systemd[1]: Started sshd@16-143.110.233.113:22-139.178.89.65:50608.service - OpenSSH per-connection server daemon (139.178.89.65:50608).
Jan 29 11:11:39.710262 systemd-logind[1449]: Removed session 16.
Jan 29 11:11:39.765479 sshd[3809]: Accepted publickey for core from 139.178.89.65 port 50608 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:39.767629 sshd-session[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:39.773767 systemd-logind[1449]: New session 17 of user core.
Jan 29 11:11:39.781308 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 11:11:40.032121 sshd[3813]: Connection closed by 139.178.89.65 port 50608
Jan 29 11:11:40.033347 sshd-session[3809]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:40.050224 systemd[1]: sshd@16-143.110.233.113:22-139.178.89.65:50608.service: Deactivated successfully.
Jan 29 11:11:40.053185 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:11:40.055548 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:11:40.063607 systemd[1]: Started sshd@17-143.110.233.113:22-139.178.89.65:50624.service - OpenSSH per-connection server daemon (139.178.89.65:50624).
Jan 29 11:11:40.066388 systemd-logind[1449]: Removed session 17.
Jan 29 11:11:40.117768 sshd[3822]: Accepted publickey for core from 139.178.89.65 port 50624 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:40.119659 sshd-session[3822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:40.126284 systemd-logind[1449]: New session 18 of user core.
Jan 29 11:11:40.133361 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:11:40.272974 sshd[3824]: Connection closed by 139.178.89.65 port 50624
Jan 29 11:11:40.273521 sshd-session[3822]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:40.280781 systemd[1]: sshd@17-143.110.233.113:22-139.178.89.65:50624.service: Deactivated successfully.
Jan 29 11:11:40.283583 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:11:40.285662 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:11:40.287017 systemd-logind[1449]: Removed session 18.
Jan 29 11:11:44.824163 kubelet[2557]: E0129 11:11:44.824092 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:11:45.300648 systemd[1]: Started sshd@18-143.110.233.113:22-139.178.89.65:59698.service - OpenSSH per-connection server daemon (139.178.89.65:59698).
Jan 29 11:11:45.361523 sshd[3856]: Accepted publickey for core from 139.178.89.65 port 59698 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:45.363350 sshd-session[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:45.370474 systemd-logind[1449]: New session 19 of user core.
Jan 29 11:11:45.376335 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:11:45.523116 sshd[3858]: Connection closed by 139.178.89.65 port 59698
Jan 29 11:11:45.523813 sshd-session[3856]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:45.527987 systemd[1]: sshd@18-143.110.233.113:22-139.178.89.65:59698.service: Deactivated successfully.
Jan 29 11:11:45.531752 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:11:45.533347 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:11:45.535159 systemd-logind[1449]: Removed session 19.
Jan 29 11:11:45.820991 kubelet[2557]: E0129 11:11:45.820939 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:11:46.820273 kubelet[2557]: E0129 11:11:46.820178 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:11:50.546455 systemd[1]: Started sshd@19-143.110.233.113:22-139.178.89.65:59704.service - OpenSSH per-connection server daemon (139.178.89.65:59704).
Jan 29 11:11:50.619457 sshd[3891]: Accepted publickey for core from 139.178.89.65 port 59704 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:50.621735 sshd-session[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:50.630212 systemd-logind[1449]: New session 20 of user core.
Jan 29 11:11:50.635366 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:11:50.779536 sshd[3893]: Connection closed by 139.178.89.65 port 59704
Jan 29 11:11:50.780391 sshd-session[3891]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:50.788316 systemd[1]: sshd@19-143.110.233.113:22-139.178.89.65:59704.service: Deactivated successfully.
Jan 29 11:11:50.791969 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 11:11:50.795711 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Jan 29 11:11:50.797790 systemd-logind[1449]: Removed session 20.
Jan 29 11:11:55.800616 systemd[1]: Started sshd@20-143.110.233.113:22-139.178.89.65:37610.service - OpenSSH per-connection server daemon (139.178.89.65:37610).
Jan 29 11:11:55.886999 sshd[3928]: Accepted publickey for core from 139.178.89.65 port 37610 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:11:55.889620 sshd-session[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:55.899059 systemd-logind[1449]: New session 21 of user core.
Jan 29 11:11:55.906441 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 11:11:56.102251 sshd[3930]: Connection closed by 139.178.89.65 port 37610
Jan 29 11:11:56.103180 sshd-session[3928]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:56.107675 systemd[1]: sshd@20-143.110.233.113:22-139.178.89.65:37610.service: Deactivated successfully.
Jan 29 11:11:56.111260 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 11:11:56.113930 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Jan 29 11:11:56.115885 systemd-logind[1449]: Removed session 21.
Jan 29 11:12:01.122693 systemd[1]: Started sshd@21-143.110.233.113:22-139.178.89.65:49538.service - OpenSSH per-connection server daemon (139.178.89.65:49538).
Jan 29 11:12:01.194835 sshd[3964]: Accepted publickey for core from 139.178.89.65 port 49538 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:12:01.196981 sshd-session[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:12:01.204489 systemd-logind[1449]: New session 22 of user core.
Jan 29 11:12:01.212453 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 11:12:01.373438 sshd[3966]: Connection closed by 139.178.89.65 port 49538
Jan 29 11:12:01.374434 sshd-session[3964]: pam_unix(sshd:session): session closed for user core
Jan 29 11:12:01.381246 systemd[1]: sshd@21-143.110.233.113:22-139.178.89.65:49538.service: Deactivated successfully.
Jan 29 11:12:01.385190 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 11:12:01.386969 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Jan 29 11:12:01.388599 systemd-logind[1449]: Removed session 22.
Jan 29 11:12:02.820534 kubelet[2557]: E0129 11:12:02.820286 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 29 11:12:06.399645 systemd[1]: Started sshd@22-143.110.233.113:22-139.178.89.65:49540.service - OpenSSH per-connection server daemon (139.178.89.65:49540).
Jan 29 11:12:06.455469 sshd[3998]: Accepted publickey for core from 139.178.89.65 port 49540 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI
Jan 29 11:12:06.457721 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:12:06.466321 systemd-logind[1449]: New session 23 of user core.
Jan 29 11:12:06.472287 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 11:12:06.638416 sshd[4006]: Connection closed by 139.178.89.65 port 49540
Jan 29 11:12:06.639441 sshd-session[3998]: pam_unix(sshd:session): session closed for user core
Jan 29 11:12:06.644874 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:12:06.646202 systemd[1]: sshd@22-143.110.233.113:22-139.178.89.65:49540.service: Deactivated successfully.
Jan 29 11:12:06.650675 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:12:06.653396 systemd-logind[1449]: Removed session 23.