Jan 30 14:00:13.130435 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 14:00:13.130474 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:00:13.130493 kernel: BIOS-provided physical RAM map:
Jan 30 14:00:13.130503 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 14:00:13.130513 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 14:00:13.130524 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 14:00:13.130537 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
Jan 30 14:00:13.130549 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved
Jan 30 14:00:13.130560 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 14:00:13.130575 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 14:00:13.130586 kernel: NX (Execute Disable) protection: active
Jan 30 14:00:13.130621 kernel: APIC: Static calls initialized
Jan 30 14:00:13.130632 kernel: SMBIOS 2.8 present.
Jan 30 14:00:13.130643 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Jan 30 14:00:13.130657 kernel: Hypervisor detected: KVM
Jan 30 14:00:13.130674 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 14:00:13.130687 kernel: kvm-clock: using sched offset of 5268260086 cycles
Jan 30 14:00:13.130700 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 14:00:13.130712 kernel: tsc: Detected 2494.138 MHz processor
Jan 30 14:00:13.130725 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 14:00:13.130738 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 14:00:13.130749 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000
Jan 30 14:00:13.130762 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 14:00:13.130788 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 14:00:13.130805 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:00:13.130818 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS )
Jan 30 14:00:13.130827 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:00:13.130839 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:00:13.130851 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:00:13.130865 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jan 30 14:00:13.130878 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:00:13.130893 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:00:13.130906 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:00:13.130924 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:00:13.130938 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Jan 30 14:00:13.130952 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Jan 30 14:00:13.130960 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jan 30 14:00:13.130969 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Jan 30 14:00:13.130977 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Jan 30 14:00:13.130985 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Jan 30 14:00:13.131001 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Jan 30 14:00:13.131010 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 14:00:13.131019 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 14:00:13.131028 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 14:00:13.131040 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 14:00:13.131055 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff]
Jan 30 14:00:13.131067 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff]
Jan 30 14:00:13.131084 kernel: Zone ranges:
Jan 30 14:00:13.131098 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 14:00:13.131110 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff]
Jan 30 14:00:13.131124 kernel: Normal empty
Jan 30 14:00:13.131138 kernel: Movable zone start for each node
Jan 30 14:00:13.131152 kernel: Early memory node ranges
Jan 30 14:00:13.131168 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 14:00:13.131182 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff]
Jan 30 14:00:13.131197 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff]
Jan 30 14:00:13.131216 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 14:00:13.131230 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 14:00:13.131244 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges
Jan 30 14:00:13.131258 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 14:00:13.131274 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 14:00:13.131289 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 14:00:13.131302 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 14:00:13.131315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 14:00:13.131325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 14:00:13.131337 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 14:00:13.131347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 14:00:13.131356 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 14:00:13.131365 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 14:00:13.131374 kernel: TSC deadline timer available
Jan 30 14:00:13.131383 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 14:00:13.131396 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 14:00:13.131406 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jan 30 14:00:13.131415 kernel: Booting paravirtualized kernel on KVM
Jan 30 14:00:13.131428 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 14:00:13.131437 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 14:00:13.131446 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 14:00:13.131455 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 14:00:13.131464 kernel: pcpu-alloc: [0] 0 1
Jan 30 14:00:13.131479 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 30 14:00:13.131495 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:00:13.131511 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:00:13.131526 kernel: random: crng init done
Jan 30 14:00:13.131536 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:00:13.131547 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 14:00:13.131556 kernel: Fallback order for Node 0: 0
Jan 30 14:00:13.131567 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800
Jan 30 14:00:13.131577 kernel: Policy zone: DMA32
Jan 30 14:00:13.131585 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:00:13.131595 kernel: Memory: 1971188K/2096600K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved)
Jan 30 14:00:13.134263 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:00:13.134300 kernel: Kernel/User page tables isolation: enabled
Jan 30 14:00:13.134310 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 14:00:13.134320 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 14:00:13.134329 kernel: Dynamic Preempt: voluntary
Jan 30 14:00:13.134341 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:00:13.134359 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:00:13.134374 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:00:13.134389 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:00:13.134417 kernel: Rude variant of Tasks RCU enabled.
Jan 30 14:00:13.134437 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:00:13.134451 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:00:13.134465 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:00:13.134478 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 14:00:13.134492 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:00:13.134505 kernel: Console: colour VGA+ 80x25
Jan 30 14:00:13.134518 kernel: printk: console [tty0] enabled
Jan 30 14:00:13.134532 kernel: printk: console [ttyS0] enabled
Jan 30 14:00:13.134545 kernel: ACPI: Core revision 20230628
Jan 30 14:00:13.134559 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 14:00:13.134577 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 14:00:13.134608 kernel: x2apic enabled
Jan 30 14:00:13.134623 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 14:00:13.134637 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 14:00:13.134652 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jan 30 14:00:13.134662 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Jan 30 14:00:13.134671 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 14:00:13.134681 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 14:00:13.134704 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 14:00:13.134713 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 14:00:13.134723 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 14:00:13.134736 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 14:00:13.134745 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 14:00:13.134755 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 14:00:13.134765 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 14:00:13.134775 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 14:00:13.134784 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 14:00:13.134797 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 14:00:13.134807 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 14:00:13.134817 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 14:00:13.134826 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 14:00:13.134836 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 14:00:13.134846 kernel: Freeing SMP alternatives memory: 32K
Jan 30 14:00:13.134861 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:00:13.134888 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:00:13.134908 kernel: landlock: Up and running.
Jan 30 14:00:13.134921 kernel: SELinux: Initializing.
Jan 30 14:00:13.134937 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 14:00:13.134951 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 14:00:13.134968 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Jan 30 14:00:13.134977 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:00:13.134987 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:00:13.134997 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:00:13.135009 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Jan 30 14:00:13.135019 kernel: signal: max sigframe size: 1776
Jan 30 14:00:13.135029 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:00:13.135039 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:00:13.135048 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 14:00:13.135057 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:00:13.135067 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 14:00:13.135079 kernel: .... node #0, CPUs: #1
Jan 30 14:00:13.135095 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:00:13.135114 kernel: smpboot: Max logical packages: 1
Jan 30 14:00:13.135130 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Jan 30 14:00:13.135145 kernel: devtmpfs: initialized
Jan 30 14:00:13.135159 kernel: x86/mm: Memory block size: 128MB
Jan 30 14:00:13.135174 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:00:13.135188 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:00:13.135203 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:00:13.135217 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:00:13.135230 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:00:13.135244 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:00:13.135262 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 14:00:13.135277 kernel: audit: type=2000 audit(1738245611.762:1): state=initialized audit_enabled=0 res=1
Jan 30 14:00:13.135292 kernel: cpuidle: using governor menu
Jan 30 14:00:13.135307 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:00:13.135321 kernel: dca service started, version 1.12.1
Jan 30 14:00:13.135335 kernel: PCI: Using configuration type 1 for base access
Jan 30 14:00:13.135349 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 14:00:13.135362 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:00:13.135382 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:00:13.135396 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:00:13.135410 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:00:13.135424 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:00:13.135437 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:00:13.135451 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:00:13.135466 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 14:00:13.135480 kernel: ACPI: Interpreter enabled
Jan 30 14:00:13.135494 kernel: ACPI: PM: (supports S0 S5)
Jan 30 14:00:13.135509 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 14:00:13.135526 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 14:00:13.135539 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 14:00:13.135553 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 30 14:00:13.135568 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 14:00:13.135910 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:00:13.136087 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 14:00:13.136241 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 14:00:13.136271 kernel: acpiphp: Slot [3] registered
Jan 30 14:00:13.136286 kernel: acpiphp: Slot [4] registered
Jan 30 14:00:13.136299 kernel: acpiphp: Slot [5] registered
Jan 30 14:00:13.136312 kernel: acpiphp: Slot [6] registered
Jan 30 14:00:13.136325 kernel: acpiphp: Slot [7] registered
Jan 30 14:00:13.136338 kernel: acpiphp: Slot [8] registered
Jan 30 14:00:13.136351 kernel: acpiphp: Slot [9] registered
Jan 30 14:00:13.136365 kernel: acpiphp: Slot [10] registered
Jan 30 14:00:13.136378 kernel: acpiphp: Slot [11] registered
Jan 30 14:00:13.136400 kernel: acpiphp: Slot [12] registered
Jan 30 14:00:13.136416 kernel: acpiphp: Slot [13] registered
Jan 30 14:00:13.136430 kernel: acpiphp: Slot [14] registered
Jan 30 14:00:13.136443 kernel: acpiphp: Slot [15] registered
Jan 30 14:00:13.136457 kernel: acpiphp: Slot [16] registered
Jan 30 14:00:13.136471 kernel: acpiphp: Slot [17] registered
Jan 30 14:00:13.136486 kernel: acpiphp: Slot [18] registered
Jan 30 14:00:13.136500 kernel: acpiphp: Slot [19] registered
Jan 30 14:00:13.136514 kernel: acpiphp: Slot [20] registered
Jan 30 14:00:13.136529 kernel: acpiphp: Slot [21] registered
Jan 30 14:00:13.136550 kernel: acpiphp: Slot [22] registered
Jan 30 14:00:13.136564 kernel: acpiphp: Slot [23] registered
Jan 30 14:00:13.136577 kernel: acpiphp: Slot [24] registered
Jan 30 14:00:13.139341 kernel: acpiphp: Slot [25] registered
Jan 30 14:00:13.139373 kernel: acpiphp: Slot [26] registered
Jan 30 14:00:13.139387 kernel: acpiphp: Slot [27] registered
Jan 30 14:00:13.139401 kernel: acpiphp: Slot [28] registered
Jan 30 14:00:13.139414 kernel: acpiphp: Slot [29] registered
Jan 30 14:00:13.139427 kernel: acpiphp: Slot [30] registered
Jan 30 14:00:13.139452 kernel: acpiphp: Slot [31] registered
Jan 30 14:00:13.139466 kernel: PCI host bridge to bus 0000:00
Jan 30 14:00:13.139727 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 14:00:13.139851 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 14:00:13.139966 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 14:00:13.140078 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 30 14:00:13.140191 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jan 30 14:00:13.140304 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 14:00:13.140476 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 14:00:13.140664 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 30 14:00:13.140804 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jan 30 14:00:13.140935 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Jan 30 14:00:13.141062 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 14:00:13.141188 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 14:00:13.141321 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 14:00:13.141445 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 14:00:13.141579 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Jan 30 14:00:13.143904 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Jan 30 14:00:13.144092 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 14:00:13.144241 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jan 30 14:00:13.144395 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jan 30 14:00:13.144562 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jan 30 14:00:13.146388 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jan 30 14:00:13.146651 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 30 14:00:13.146842 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Jan 30 14:00:13.147017 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 30 14:00:13.147177 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 14:00:13.147357 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 14:00:13.147507 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Jan 30 14:00:13.148782 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Jan 30 14:00:13.148983 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 30 14:00:13.149171 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 14:00:13.149348 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Jan 30 14:00:13.149522 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Jan 30 14:00:13.151835 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 30 14:00:13.152002 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Jan 30 14:00:13.152151 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Jan 30 14:00:13.152276 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Jan 30 14:00:13.152434 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 30 14:00:13.152579 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Jan 30 14:00:13.152815 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 14:00:13.152972 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Jan 30 14:00:13.153114 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 30 14:00:13.153316 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Jan 30 14:00:13.153451 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Jan 30 14:00:13.153635 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Jan 30 14:00:13.153796 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Jan 30 14:00:13.153962 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Jan 30 14:00:13.154120 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Jan 30 14:00:13.154255 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Jan 30 14:00:13.154279 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 14:00:13.154298 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 14:00:13.154317 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 14:00:13.154334 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 14:00:13.154352 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 14:00:13.154374 kernel: iommu: Default domain type: Translated
Jan 30 14:00:13.154392 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 14:00:13.154424 kernel: PCI: Using ACPI for IRQ routing
Jan 30 14:00:13.154437 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 14:00:13.154451 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 14:00:13.154466 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff]
Jan 30 14:00:13.156824 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 30 14:00:13.157056 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 30 14:00:13.157257 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 14:00:13.157286 kernel: vgaarb: loaded
Jan 30 14:00:13.157302 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 14:00:13.157319 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 14:00:13.157335 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 14:00:13.157349 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:00:13.157366 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:00:13.157385 kernel: pnp: PnP ACPI init
Jan 30 14:00:13.157404 kernel: pnp: PnP ACPI: found 4 devices
Jan 30 14:00:13.157431 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 14:00:13.157449 kernel: NET: Registered PF_INET protocol family
Jan 30 14:00:13.157465 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:00:13.157483 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 14:00:13.157499 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:00:13.157518 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 14:00:13.157536 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 14:00:13.157551 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 14:00:13.157569 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 14:00:13.157618 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 14:00:13.157637 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:00:13.157657 kernel: NET: Registered PF_XDP protocol family
Jan 30 14:00:13.157815 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 14:00:13.157913 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 14:00:13.158006 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 14:00:13.158098 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 30 14:00:13.158193 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jan 30 14:00:13.158311 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 30 14:00:13.158496 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 14:00:13.158519 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 30 14:00:13.160834 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 38821 usecs
Jan 30 14:00:13.160877 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:00:13.160892 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 14:00:13.160906 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Jan 30 14:00:13.160920 kernel: Initialise system trusted keyrings
Jan 30 14:00:13.160944 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 30 14:00:13.160970 kernel: Key type asymmetric registered
Jan 30 14:00:13.160983 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:00:13.160997 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 14:00:13.161011 kernel: io scheduler mq-deadline registered
Jan 30 14:00:13.161024 kernel: io scheduler kyber registered
Jan 30 14:00:13.161038 kernel: io scheduler bfq registered
Jan 30 14:00:13.161052 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 14:00:13.161068 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 30 14:00:13.161080 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 14:00:13.161101 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 14:00:13.161116 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:00:13.161130 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 14:00:13.161143 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 14:00:13.161153 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 14:00:13.161167 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 14:00:13.161184 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 14:00:13.161407 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 30 14:00:13.161579 kernel: rtc_cmos 00:03: registered as rtc0
Jan 30 14:00:13.163852 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T14:00:12 UTC (1738245612)
Jan 30 14:00:13.164069 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 30 14:00:13.164098 kernel: intel_pstate: CPU model not supported
Jan 30 14:00:13.164120 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:00:13.164140 kernel: Segment Routing with IPv6
Jan 30 14:00:13.164161 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:00:13.164181 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:00:13.164215 kernel: Key type dns_resolver registered
Jan 30 14:00:13.164236 kernel: IPI shorthand broadcast: enabled
Jan 30 14:00:13.164256 kernel: sched_clock: Marking stable (1258008178, 135030593)->(1484220459, -91181688)
Jan 30 14:00:13.164273 kernel: registered taskstats version 1
Jan 30 14:00:13.164289 kernel: Loading compiled-in X.509 certificates
Jan 30 14:00:13.164303 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 14:00:13.164313 kernel: Key type .fscrypt registered
Jan 30 14:00:13.164323 kernel: Key type fscrypt-provisioning registered
Jan 30 14:00:13.164333 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:00:13.164349 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:00:13.164367 kernel: ima: No architecture policies found
Jan 30 14:00:13.164381 kernel: clk: Disabling unused clocks
Jan 30 14:00:13.164395 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 14:00:13.164409 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 14:00:13.164452 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 14:00:13.164472 kernel: Run /init as init process
Jan 30 14:00:13.164487 kernel: with arguments:
Jan 30 14:00:13.164501 kernel: /init
Jan 30 14:00:13.164520 kernel: with environment:
Jan 30 14:00:13.164537 kernel: HOME=/
Jan 30 14:00:13.164553 kernel: TERM=linux
Jan 30 14:00:13.164567 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:00:13.164587 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:00:13.164662 systemd[1]: Detected virtualization kvm.
Jan 30 14:00:13.164677 systemd[1]: Detected architecture x86-64.
Jan 30 14:00:13.164691 systemd[1]: Running in initrd.
Jan 30 14:00:13.164714 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:00:13.164729 systemd[1]: Hostname set to .
Jan 30 14:00:13.164746 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:00:13.164757 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:00:13.164769 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:00:13.164785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:00:13.164806 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:00:13.164819 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:00:13.164834 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:00:13.164845 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:00:13.164866 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:00:13.164881 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:00:13.164896 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:00:13.164912 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:00:13.164932 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:00:13.164947 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:00:13.164965 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:00:13.164988 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:00:13.165004 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:00:13.165020 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:00:13.165041 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:00:13.165056 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 14:00:13.165068 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:00:13.165086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:00:13.165103 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:00:13.165120 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:00:13.165136 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 14:00:13.165154 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:00:13.165178 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 14:00:13.165195 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 14:00:13.165213 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:00:13.165233 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:00:13.165248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:00:13.165263 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:00:13.165279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:00:13.165295 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:00:13.165321 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:00:13.165393 systemd-journald[182]: Collecting audit messages is disabled. Jan 30 14:00:13.165425 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:00:13.165438 systemd-journald[182]: Journal started Jan 30 14:00:13.165461 systemd-journald[182]: Runtime Journal (/run/log/journal/c03f74d447a24e98b904c39b8bc55d32) is 4.9M, max 39.3M, 34.4M free. 
Jan 30 14:00:13.141658 systemd-modules-load[183]: Inserted module 'overlay' Jan 30 14:00:13.177486 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:00:13.177534 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:00:13.183713 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:00:13.196173 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:00:13.201633 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:00:13.202880 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:00:13.206336 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:00:13.209494 systemd-modules-load[183]: Inserted module 'br_netfilter' Jan 30 14:00:13.212553 kernel: Bridge firewalling registered Jan 30 14:00:13.211218 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:00:13.226059 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:00:13.234540 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:00:13.245464 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:00:13.253902 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:00:13.254930 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:00:13.262981 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 30 14:00:13.296626 dracut-cmdline[220]: dracut-dracut-053 Jan 30 14:00:13.299548 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 14:00:13.315031 systemd-resolved[219]: Positive Trust Anchors: Jan 30 14:00:13.316118 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:00:13.316193 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:00:13.325317 systemd-resolved[219]: Defaulting to hostname 'linux'. Jan 30 14:00:13.328272 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:00:13.329825 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:00:13.429696 kernel: SCSI subsystem initialized Jan 30 14:00:13.441683 kernel: Loading iSCSI transport class v2.0-870. 
Jan 30 14:00:13.457654 kernel: iscsi: registered transport (tcp) Jan 30 14:00:13.485710 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:00:13.485809 kernel: QLogic iSCSI HBA Driver Jan 30 14:00:13.578669 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 14:00:13.587959 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:00:13.621940 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 14:00:13.622046 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:00:13.624641 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:00:13.675663 kernel: raid6: avx2x4 gen() 13516 MB/s Jan 30 14:00:13.692659 kernel: raid6: avx2x2 gen() 13886 MB/s Jan 30 14:00:13.709743 kernel: raid6: avx2x1 gen() 9730 MB/s Jan 30 14:00:13.709848 kernel: raid6: using algorithm avx2x2 gen() 13886 MB/s Jan 30 14:00:13.727881 kernel: raid6: .... xor() 14244 MB/s, rmw enabled Jan 30 14:00:13.727985 kernel: raid6: using avx2x2 recovery algorithm Jan 30 14:00:13.756675 kernel: xor: automatically using best checksumming function avx Jan 30 14:00:13.976670 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:00:13.996006 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:00:14.002942 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:00:14.032377 systemd-udevd[404]: Using default interface naming scheme 'v255'. Jan 30 14:00:14.040131 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:00:14.047859 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 14:00:14.083082 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Jan 30 14:00:14.134915 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 30 14:00:14.140958 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:00:14.224059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:00:14.233996 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 14:00:14.271002 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 14:00:14.274971 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:00:14.275984 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:00:14.277338 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:00:14.304977 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:00:14.332149 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:00:14.339642 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 30 14:00:14.372851 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 30 14:00:14.373093 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 14:00:14.373118 kernel: GPT:9289727 != 125829119 Jan 30 14:00:14.373142 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 14:00:14.373176 kernel: GPT:9289727 != 125829119 Jan 30 14:00:14.373198 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 14:00:14.373216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:00:14.373237 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 30 14:00:14.411995 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Jan 30 14:00:14.412240 kernel: scsi host0: Virtio SCSI HBA Jan 30 14:00:14.421618 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 14:00:14.481402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 14:00:14.482836 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:00:14.486450 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:00:14.487060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:00:14.487332 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:00:14.487991 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:00:14.498633 kernel: ACPI: bus type USB registered Jan 30 14:00:14.498717 kernel: usbcore: registered new interface driver usbfs Jan 30 14:00:14.501035 kernel: usbcore: registered new interface driver hub Jan 30 14:00:14.501118 kernel: usbcore: registered new device driver usb Jan 30 14:00:14.504143 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:00:14.526710 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 14:00:14.535628 kernel: AES CTR mode by8 optimization enabled Jan 30 14:00:14.561667 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 14:00:14.654705 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (448) Jan 30 14:00:14.654752 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (466) Jan 30 14:00:14.654777 kernel: libata version 3.00 loaded. 
Jan 30 14:00:14.654799 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 14:00:14.655167 kernel: scsi host1: ata_piix Jan 30 14:00:14.655411 kernel: scsi host2: ata_piix Jan 30 14:00:14.655635 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 30 14:00:14.655661 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 30 14:00:14.655683 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 30 14:00:14.655923 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 30 14:00:14.656138 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 30 14:00:14.656325 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 30 14:00:14.656527 kernel: hub 1-0:1.0: USB hub found Jan 30 14:00:14.656786 kernel: hub 1-0:1.0: 2 ports detected Jan 30 14:00:14.658213 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:00:14.682219 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 14:00:14.695315 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 14:00:14.701816 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 14:00:14.703861 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 14:00:14.728024 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:00:14.731892 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:00:14.744788 disk-uuid[533]: Primary Header is updated. Jan 30 14:00:14.744788 disk-uuid[533]: Secondary Entries is updated. Jan 30 14:00:14.744788 disk-uuid[533]: Secondary Header is updated. 
Jan 30 14:00:14.757742 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:00:14.774842 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:00:14.786050 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:00:14.801819 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:00:15.813758 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:00:15.815654 disk-uuid[534]: The operation has completed successfully. Jan 30 14:00:15.889311 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:00:15.889507 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:00:15.908012 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:00:15.928277 sh[564]: Success Jan 30 14:00:15.946746 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 14:00:16.033734 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:00:16.049001 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:00:16.057152 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 14:00:16.081707 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 14:00:16.081815 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:00:16.081840 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:00:16.082693 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:00:16.083763 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:00:16.097415 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:00:16.099451 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jan 30 14:00:16.109973 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:00:16.113938 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:00:16.134634 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:00:16.138090 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:00:16.138174 kernel: BTRFS info (device vda6): using free space tree Jan 30 14:00:16.148635 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 14:00:16.169647 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:00:16.170177 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 14:00:16.182841 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:00:16.191989 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 14:00:16.342922 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:00:16.362934 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:00:16.388296 ignition[658]: Ignition 2.19.0 Jan 30 14:00:16.388318 ignition[658]: Stage: fetch-offline Jan 30 14:00:16.391240 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 14:00:16.388387 ignition[658]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:00:16.388405 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 14:00:16.388752 ignition[658]: parsed url from cmdline: "" Jan 30 14:00:16.388761 ignition[658]: no config URL provided Jan 30 14:00:16.388774 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:00:16.388796 ignition[658]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:00:16.388805 ignition[658]: failed to fetch config: resource requires networking Jan 30 14:00:16.389449 ignition[658]: Ignition finished successfully Jan 30 14:00:16.404921 systemd-networkd[754]: lo: Link UP Jan 30 14:00:16.404939 systemd-networkd[754]: lo: Gained carrier Jan 30 14:00:16.408033 systemd-networkd[754]: Enumeration completed Jan 30 14:00:16.408450 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 14:00:16.408453 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 30 14:00:16.408776 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:00:16.409650 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:00:16.409655 systemd-networkd[754]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:00:16.409713 systemd[1]: Reached target network.target - Network. Jan 30 14:00:16.411247 systemd-networkd[754]: eth0: Link UP Jan 30 14:00:16.411252 systemd-networkd[754]: eth0: Gained carrier Jan 30 14:00:16.411263 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. 
Jan 30 14:00:16.417060 systemd-networkd[754]: eth1: Link UP Jan 30 14:00:16.417065 systemd-networkd[754]: eth1: Gained carrier Jan 30 14:00:16.417081 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:00:16.418835 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 14:00:16.431733 systemd-networkd[754]: eth0: DHCPv4 address 143.198.106.130/20, gateway 143.198.96.1 acquired from 169.254.169.253 Jan 30 14:00:16.436743 systemd-networkd[754]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253 Jan 30 14:00:16.453893 ignition[758]: Ignition 2.19.0 Jan 30 14:00:16.453906 ignition[758]: Stage: fetch Jan 30 14:00:16.454150 ignition[758]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:00:16.454162 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 14:00:16.454282 ignition[758]: parsed url from cmdline: "" Jan 30 14:00:16.454289 ignition[758]: no config URL provided Jan 30 14:00:16.454300 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:00:16.454314 ignition[758]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:00:16.454349 ignition[758]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 30 14:00:16.471651 ignition[758]: GET result: OK Jan 30 14:00:16.471896 ignition[758]: parsing config with SHA512: d1f0f25f8b3fd7a10255334f07fbe96286ebfff691e40da8aa6c3631380ae59c19a6ba5a4403b6003089567218ca7675fb538d260bda149a692349baaf331ffa Jan 30 14:00:16.481253 unknown[758]: fetched base config from "system" Jan 30 14:00:16.481296 unknown[758]: fetched base config from "system" Jan 30 14:00:16.481866 ignition[758]: fetch: fetch complete Jan 30 14:00:16.481307 unknown[758]: fetched user config from "digitalocean" Jan 30 14:00:16.481874 ignition[758]: fetch: fetch passed Jan 30 14:00:16.481952 ignition[758]: Ignition finished successfully
Jan 30 14:00:16.484836 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 14:00:16.499995 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 14:00:16.534542 ignition[766]: Ignition 2.19.0 Jan 30 14:00:16.534567 ignition[766]: Stage: kargs Jan 30 14:00:16.535890 ignition[766]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:00:16.535915 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 14:00:16.537526 ignition[766]: kargs: kargs passed Jan 30 14:00:16.537690 ignition[766]: Ignition finished successfully Jan 30 14:00:16.540540 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:00:16.550056 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:00:16.595048 ignition[772]: Ignition 2.19.0 Jan 30 14:00:16.595064 ignition[772]: Stage: disks Jan 30 14:00:16.595405 ignition[772]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:00:16.595422 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 14:00:16.605285 ignition[772]: disks: disks passed Jan 30 14:00:16.605394 ignition[772]: Ignition finished successfully Jan 30 14:00:16.607151 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 14:00:16.608438 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 14:00:16.608925 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:00:16.609416 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:00:16.610894 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:00:16.612119 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:00:16.619952 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:00:16.650848 systemd-fsck[780]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 14:00:16.655487 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:00:16.663260 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 14:00:16.789633 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 14:00:16.790015 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:00:16.791477 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:00:16.797795 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:00:16.801340 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:00:16.804585 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 30 14:00:16.812798 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 14:00:16.817006 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (788) Jan 30 14:00:16.816720 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:00:16.816763 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:00:16.829940 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:00:16.829975 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:00:16.829989 kernel: BTRFS info (device vda6): using free space tree Jan 30 14:00:16.830002 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 14:00:16.831516 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:00:16.836000 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:00:16.847878 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 14:00:16.920192 coreos-metadata[791]: Jan 30 14:00:16.919 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 14:00:16.933282 coreos-metadata[791]: Jan 30 14:00:16.931 INFO Fetch successful Jan 30 14:00:16.934298 coreos-metadata[790]: Jan 30 14:00:16.934 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 14:00:16.937977 coreos-metadata[791]: Jan 30 14:00:16.937 INFO wrote hostname ci-4081.3.0-5-054816032d to /sysroot/etc/hostname Jan 30 14:00:16.940947 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:00:16.943457 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:00:16.948476 coreos-metadata[790]: Jan 30 14:00:16.948 INFO Fetch successful Jan 30 14:00:16.952377 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:00:16.954947 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 30 14:00:16.955839 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 30 14:00:16.960584 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:00:16.967731 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:00:17.093025 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:00:17.097858 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:00:17.099803 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:00:17.113816 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:00:17.115864 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:00:17.154184 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 14:00:17.165168 ignition[909]: INFO : Ignition 2.19.0 Jan 30 14:00:17.166784 ignition[909]: INFO : Stage: mount Jan 30 14:00:17.166784 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:00:17.166784 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 14:00:17.168602 ignition[909]: INFO : mount: mount passed Jan 30 14:00:17.168602 ignition[909]: INFO : Ignition finished successfully Jan 30 14:00:17.169622 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:00:17.175830 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:00:17.211940 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:00:17.226647 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (922) Jan 30 14:00:17.229988 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:00:17.230062 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:00:17.232060 kernel: BTRFS info (device vda6): using free space tree Jan 30 14:00:17.243706 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 14:00:17.246885 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:00:17.293406 ignition[939]: INFO : Ignition 2.19.0 Jan 30 14:00:17.293406 ignition[939]: INFO : Stage: files Jan 30 14:00:17.294815 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:00:17.294815 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 14:00:17.295941 ignition[939]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:00:17.296468 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:00:17.296468 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:00:17.300462 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:00:17.301464 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:00:17.302557 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:00:17.301812 unknown[939]: wrote ssh authorized keys file for user: core Jan 30 14:00:17.304431 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 14:00:17.304431 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 14:00:17.340076 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:00:17.408496 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 14:00:17.408496 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:00:17.410211 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:00:17.410211 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:00:17.410211 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:00:17.410211 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:00:17.410211 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:00:17.410211 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:00:17.410211 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:00:17.415743 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:00:17.415743 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:00:17.415743 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 14:00:17.415743 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 14:00:17.415743 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 14:00:17.415743 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 14:00:17.588908 systemd-networkd[754]: eth1: Gained IPv6LL Jan 30 14:00:17.912348 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 14:00:17.972739 systemd-networkd[754]: eth0: Gained IPv6LL Jan 30 14:00:18.244415 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 14:00:18.244415 ignition[939]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 14:00:18.246319 ignition[939]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:00:18.246319 ignition[939]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:00:18.246319 ignition[939]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 14:00:18.246319 ignition[939]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:00:18.251828 ignition[939]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:00:18.251828 ignition[939]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:00:18.251828 ignition[939]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:00:18.251828 ignition[939]: INFO : files: files passed Jan 30 14:00:18.251828 ignition[939]: INFO : Ignition finished successfully Jan 30 14:00:18.249016 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:00:18.258887 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:00:18.261852 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:00:18.268888 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 14:00:18.269024 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 14:00:18.293817 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:00:18.293817 initrd-setup-root-after-ignition[968]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:00:18.297543 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:00:18.300122 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:00:18.301334 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 14:00:18.305965 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 14:00:18.363529 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 14:00:18.363729 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 14:00:18.365560 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 14:00:18.366289 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 14:00:18.367557 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 14:00:18.375867 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 14:00:18.397798 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:00:18.406995 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 14:00:18.422060 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:00:18.423411 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:00:18.424755 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 14:00:18.425225 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 14:00:18.425395 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:00:18.427046 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 14:00:18.427610 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 14:00:18.428965 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 14:00:18.429889 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:00:18.430956 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 14:00:18.432025 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 14:00:18.433085 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:00:18.434279 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 14:00:18.435370 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 14:00:18.436531 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 14:00:18.437628 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 14:00:18.437783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:00:18.439108 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:00:18.439752 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:00:18.440840 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 14:00:18.442878 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:00:18.444044 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 14:00:18.444285 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:00:18.446019 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 14:00:18.446508 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:00:18.447651 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 14:00:18.447938 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 14:00:18.448894 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 14:00:18.449090 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 14:00:18.457201 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 14:00:18.459893 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 14:00:18.460180 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:00:18.476741 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 14:00:18.477338 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 14:00:18.477531 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:00:18.479083 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 14:00:18.479264 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:00:18.483146 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 14:00:18.483264 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 14:00:18.507506 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 14:00:18.510479 ignition[992]: INFO : Ignition 2.19.0
Jan 30 14:00:18.510479 ignition[992]: INFO : Stage: umount
Jan 30 14:00:18.510479 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:00:18.510479 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 30 14:00:18.514870 ignition[992]: INFO : umount: umount passed
Jan 30 14:00:18.514870 ignition[992]: INFO : Ignition finished successfully
Jan 30 14:00:18.513101 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 14:00:18.513258 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 14:00:18.516989 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 14:00:18.517186 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 14:00:18.523427 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 14:00:18.523557 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 14:00:18.524456 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 14:00:18.524549 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 14:00:18.527344 systemd[1]: Stopped target network.target - Network.
Jan 30 14:00:18.527953 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 14:00:18.528098 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:00:18.528937 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 14:00:18.530156 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 14:00:18.533771 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:00:18.535210 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 14:00:18.536414 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 14:00:18.538091 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 14:00:18.538185 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:00:18.540057 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 14:00:18.540132 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:00:18.541551 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 14:00:18.541713 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 14:00:18.542894 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 14:00:18.542999 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 14:00:18.549310 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 14:00:18.551241 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 14:00:18.552465 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 14:00:18.552843 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 14:00:18.553754 systemd-networkd[754]: eth0: DHCPv6 lease lost
Jan 30 14:00:18.555562 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 14:00:18.556103 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 14:00:18.559698 systemd-networkd[754]: eth1: DHCPv6 lease lost
Jan 30 14:00:18.560013 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 14:00:18.560202 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 14:00:18.564128 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 14:00:18.564783 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 14:00:18.568627 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 14:00:18.568718 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:00:18.573794 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 14:00:18.575942 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 14:00:18.576055 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:00:18.577077 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 14:00:18.577172 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:00:18.580022 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 14:00:18.580102 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:00:18.581089 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 14:00:18.581171 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:00:18.582172 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:00:18.613144 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 14:00:18.613375 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:00:18.614664 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 14:00:18.614721 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:00:18.615180 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 14:00:18.615223 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:00:18.615864 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 14:00:18.615923 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:00:18.617440 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 14:00:18.617501 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:00:18.618769 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:00:18.618876 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:00:18.624972 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 14:00:18.625666 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 14:00:18.625769 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:00:18.628906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:00:18.629012 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:00:18.630667 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 14:00:18.632176 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 14:00:18.646273 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 14:00:18.646505 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 14:00:18.648017 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 14:00:18.660188 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 14:00:18.674859 systemd[1]: Switching root.
Jan 30 14:00:18.739931 systemd-journald[182]: Journal stopped
Jan 30 14:00:20.522360 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Jan 30 14:00:20.522500 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 14:00:20.522529 kernel: SELinux: policy capability open_perms=1
Jan 30 14:00:20.522550 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 14:00:20.522578 kernel: SELinux: policy capability always_check_network=0
Jan 30 14:00:20.522699 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 14:00:20.522723 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 14:00:20.522743 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 14:00:20.522762 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 14:00:20.522796 kernel: audit: type=1403 audit(1738245619.015:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 14:00:20.522826 systemd[1]: Successfully loaded SELinux policy in 48.458ms.
Jan 30 14:00:20.522868 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.936ms.
Jan 30 14:00:20.522899 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:00:20.522923 systemd[1]: Detected virtualization kvm.
Jan 30 14:00:20.522957 systemd[1]: Detected architecture x86-64.
Jan 30 14:00:20.523008 systemd[1]: Detected first boot.
Jan 30 14:00:20.523030 systemd[1]: Hostname set to .
Jan 30 14:00:20.523058 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:00:20.523079 zram_generator::config[1034]: No configuration found.
Jan 30 14:00:20.523103 systemd[1]: Populated /etc with preset unit settings.
Jan 30 14:00:20.523123 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 14:00:20.523142 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 14:00:20.523163 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 14:00:20.523188 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 14:00:20.523212 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 14:00:20.523237 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 14:00:20.523267 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 14:00:20.523290 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 14:00:20.523312 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 14:00:20.523333 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 14:00:20.523356 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 14:00:20.523378 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:00:20.523400 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:00:20.523422 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 14:00:20.523450 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 14:00:20.523472 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 14:00:20.523516 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:00:20.523541 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 14:00:20.523563 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:00:20.523585 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 14:00:20.523654 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 14:00:20.523686 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:00:20.523710 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 14:00:20.523733 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:00:20.523758 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:00:20.523780 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:00:20.523805 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:00:20.523829 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 14:00:20.523851 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 14:00:20.523875 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:00:20.523906 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:00:20.523928 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:00:20.523949 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 14:00:20.523970 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 14:00:20.523993 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 14:00:20.524017 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 14:00:20.524040 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:00:20.524063 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 14:00:20.524087 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 14:00:20.524126 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 14:00:20.524152 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 14:00:20.524176 systemd[1]: Reached target machines.target - Containers.
Jan 30 14:00:20.524202 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 14:00:20.524225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:00:20.524250 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:00:20.524274 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 14:00:20.524316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:00:20.524348 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:00:20.524371 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:00:20.524396 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 14:00:20.524422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:00:20.524446 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:00:20.524469 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 14:00:20.524491 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 14:00:20.524515 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 14:00:20.524544 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 14:00:20.524569 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:00:20.524630 kernel: fuse: init (API version 7.39)
Jan 30 14:00:20.524659 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:00:20.524685 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 14:00:20.524709 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 14:00:20.524732 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:00:20.524757 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 14:00:20.524780 systemd[1]: Stopped verity-setup.service.
Jan 30 14:00:20.524805 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:00:20.524837 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 14:00:20.524876 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 14:00:20.524917 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 14:00:20.524941 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 14:00:20.524971 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 14:00:20.524996 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 14:00:20.525020 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:00:20.525044 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 14:00:20.525069 kernel: ACPI: bus type drm_connector registered
Jan 30 14:00:20.525094 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 14:00:20.525115 kernel: loop: module loaded
Jan 30 14:00:20.525143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:00:20.525167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:00:20.525190 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:00:20.525214 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:00:20.525238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:00:20.525262 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:00:20.525286 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 14:00:20.525313 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 14:00:20.525334 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:00:20.525356 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:00:20.525378 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 14:00:20.525398 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 14:00:20.525435 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 14:00:20.525456 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 14:00:20.525478 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:00:20.525500 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:00:20.525527 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 14:00:20.525550 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 14:00:20.525574 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 14:00:20.539263 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 14:00:20.539329 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:00:20.539356 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 14:00:20.539454 systemd-journald[1107]: Collecting audit messages is disabled.
Jan 30 14:00:20.539508 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 14:00:20.539539 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 14:00:20.539562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:00:20.539584 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 14:00:20.540710 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:00:20.540754 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 14:00:20.540787 systemd-journald[1107]: Journal started
Jan 30 14:00:20.540849 systemd-journald[1107]: Runtime Journal (/run/log/journal/c03f74d447a24e98b904c39b8bc55d32) is 4.9M, max 39.3M, 34.4M free.
Jan 30 14:00:19.976380 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 14:00:20.001771 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 14:00:20.002634 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 14:00:20.556493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:00:20.565774 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 14:00:20.572951 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:00:20.577097 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 14:00:20.579504 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 14:00:20.653499 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 14:00:20.660716 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 14:00:20.667952 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 14:00:20.679673 kernel: loop0: detected capacity change from 0 to 140768
Jan 30 14:00:20.676219 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 14:00:20.680525 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 14:00:20.734105 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:00:20.750171 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 14:00:20.749117 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 14:00:20.779429 systemd-journald[1107]: Time spent on flushing to /var/log/journal/c03f74d447a24e98b904c39b8bc55d32 is 88.639ms for 994 entries.
Jan 30 14:00:20.779429 systemd-journald[1107]: System Journal (/var/log/journal/c03f74d447a24e98b904c39b8bc55d32) is 8.0M, max 195.6M, 187.6M free.
Jan 30 14:00:20.890441 systemd-journald[1107]: Received client request to flush runtime journal.
Jan 30 14:00:20.890513 kernel: loop1: detected capacity change from 0 to 8
Jan 30 14:00:20.890543 kernel: loop2: detected capacity change from 0 to 142488
Jan 30 14:00:20.782192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:00:20.807796 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 14:00:20.846802 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 14:00:20.852271 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 14:00:20.893723 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 14:00:20.940641 kernel: loop3: detected capacity change from 0 to 210664
Jan 30 14:00:21.009240 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 14:00:21.025737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:00:21.045639 kernel: loop4: detected capacity change from 0 to 140768
Jan 30 14:00:21.085766 kernel: loop5: detected capacity change from 0 to 8
Jan 30 14:00:21.091633 kernel: loop6: detected capacity change from 0 to 142488
Jan 30 14:00:21.126627 kernel: loop7: detected capacity change from 0 to 210664
Jan 30 14:00:21.135698 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jan 30 14:00:21.136644 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jan 30 14:00:21.158246 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:00:21.161057 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 30 14:00:21.163509 (sd-merge)[1177]: Merged extensions into '/usr'.
Jan 30 14:00:21.172513 systemd[1]: Reloading requested from client PID 1136 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 14:00:21.172550 systemd[1]: Reloading...
Jan 30 14:00:21.471637 zram_generator::config[1209]: No configuration found.
Jan 30 14:00:21.630028 ldconfig[1132]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 14:00:21.794822 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:00:21.903255 systemd[1]: Reloading finished in 729 ms.
Jan 30 14:00:21.944801 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 14:00:21.947526 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 14:00:21.963043 systemd[1]: Starting ensure-sysext.service...
Jan 30 14:00:21.967702 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:00:21.985268 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Jan 30 14:00:21.985477 systemd[1]: Reloading...
Jan 30 14:00:22.073404 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 14:00:22.074037 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 14:00:22.079823 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 14:00:22.080263 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jan 30 14:00:22.080373 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Jan 30 14:00:22.096489 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:00:22.096511 systemd-tmpfiles[1249]: Skipping /boot
Jan 30 14:00:22.143175 zram_generator::config[1271]: No configuration found.
Jan 30 14:00:22.161275 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:00:22.161300 systemd-tmpfiles[1249]: Skipping /boot
Jan 30 14:00:22.401278 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:00:22.471339 systemd[1]: Reloading finished in 485 ms.
Jan 30 14:00:22.490738 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 14:00:22.496488 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:00:22.521989 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:00:22.546784 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:00:22.555143 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:00:22.567840 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:00:22.572581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:00:22.576959 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:00:22.590575 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:00:22.591167 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:00:22.602996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:00:22.615843 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:00:22.626017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:00:22.627137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:00:22.627372 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:00:22.633547 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:00:22.634997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 30 14:00:22.635290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:00:22.653551 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:00:22.654623 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:00:22.657699 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 14:00:22.659153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:00:22.660526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:00:22.662310 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:00:22.663410 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:00:22.665218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:00:22.666622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:00:22.689903 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:00:22.690459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:00:22.699516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:00:22.709116 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:00:22.713426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:00:22.722082 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:00:22.727158 augenrules[1352]: No rules Jan 30 14:00:22.722955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 30 14:00:22.730073 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 14:00:22.730870 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:00:22.733343 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:00:22.739212 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:00:22.740418 systemd[1]: Finished ensure-sysext.service. Jan 30 14:00:22.757073 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 14:00:22.779426 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:00:22.782755 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Jan 30 14:00:22.783076 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:00:22.783283 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:00:22.787578 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:00:22.793816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:00:22.794356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:00:22.825174 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:00:22.826051 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:00:22.827701 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:00:22.828057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:00:22.829477 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 30 14:00:22.832302 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:00:22.832481 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:00:22.866096 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:00:22.909883 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:00:22.923911 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:00:23.056477 systemd-networkd[1376]: lo: Link UP Jan 30 14:00:23.056492 systemd-networkd[1376]: lo: Gained carrier Jan 30 14:00:23.058396 systemd-networkd[1376]: Enumeration completed Jan 30 14:00:23.058569 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:00:23.066916 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:00:23.104195 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 14:00:23.105271 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:00:23.129145 systemd-resolved[1325]: Positive Trust Anchors: Jan 30 14:00:23.129175 systemd-resolved[1325]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:00:23.129226 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:00:23.139202 systemd-resolved[1325]: Using system hostname 'ci-4081.3.0-5-054816032d'. Jan 30 14:00:23.142233 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:00:23.144681 systemd[1]: Reached target network.target - Network. Jan 30 14:00:23.145371 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:00:23.165543 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 14:00:23.192642 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1392) Jan 30 14:00:23.235742 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 30 14:00:23.237707 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:00:23.237938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:00:23.248847 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:00:23.253779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:00:23.266943 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 30 14:00:23.267561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:00:23.267642 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:00:23.267669 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:00:23.277140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:00:23.278621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:00:23.288020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:00:23.307318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:00:23.308429 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:00:23.309807 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:00:23.312345 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 14:00:23.311754 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:00:23.320451 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 14:00:23.327607 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:00:23.356027 systemd-networkd[1376]: eth0: Configuring with /run/systemd/network/10-e2:3a:50:1d:d7:a6.network. Jan 30 14:00:23.357665 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 30 14:00:23.361325 systemd-networkd[1376]: eth0: Link UP Jan 30 14:00:23.362841 systemd-networkd[1376]: eth0: Gained carrier Jan 30 14:00:23.366905 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:00:23.369537 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection. Jan 30 14:00:23.405894 systemd-networkd[1376]: eth1: Configuring with /run/systemd/network/10-76:2c:d8:2d:f3:29.network. Jan 30 14:00:23.407503 systemd-networkd[1376]: eth1: Link UP Jan 30 14:00:23.407519 systemd-networkd[1376]: eth1: Gained carrier Jan 30 14:00:23.419098 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:00:23.444638 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 14:00:23.467496 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 14:00:23.472645 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 14:00:23.492696 kernel: ACPI: button: Power Button [PWRF] Jan 30 14:00:23.550740 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 14:00:23.554626 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 14:00:23.566224 kernel: Console: switching to colour dummy device 80x25 Jan 30 14:00:23.568166 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 14:00:23.568260 kernel: [drm] features: -context_init Jan 30 14:00:23.569130 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 14:00:23.572635 kernel: [drm] number of scanouts: 1 Jan 30 14:00:23.572722 kernel: [drm] number of cap sets: 0 Jan 30 14:00:23.574638 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 14:00:23.582896 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 14:00:23.583040 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 14:00:23.591663 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 14:00:23.597456 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 14:00:23.618024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:00:23.618684 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:00:24.185418 systemd-timesyncd[1362]: Contacted time server 51.81.226.229:123 (0.flatcar.pool.ntp.org). Jan 30 14:00:24.185512 systemd-timesyncd[1362]: Initial clock synchronization to Thu 2025-01-30 14:00:24.184431 UTC. Jan 30 14:00:24.185601 systemd-resolved[1325]: Clock change detected. Flushing caches. Jan 30 14:00:24.192580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:00:24.201792 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:00:24.202143 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:00:24.213740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:00:24.397739 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:00:24.423292 kernel: EDAC MC: Ver: 3.0.0 Jan 30 14:00:24.452119 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:00:24.468635 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:00:24.488275 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 30 14:00:24.521400 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:00:24.523169 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:00:24.524488 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:00:24.524807 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:00:24.525185 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:00:24.525626 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:00:24.526006 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:00:24.526133 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:00:24.526274 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:00:24.526316 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:00:24.526412 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:00:24.527810 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:00:24.532468 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:00:24.540320 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:00:24.549567 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:00:24.551047 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:00:24.554971 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:00:24.555880 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:00:24.559065 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 30 14:00:24.559098 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:00:24.559136 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:00:24.567533 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:00:24.580194 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:00:24.587397 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:00:24.597189 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:00:24.610425 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:00:24.611180 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:00:24.617473 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:00:24.629085 jq[1441]: false Jan 30 14:00:24.629419 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:00:24.635081 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:00:24.646450 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:00:24.664939 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:00:24.668014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:00:24.671277 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:00:24.679530 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 30 14:00:24.689539 coreos-metadata[1439]: Jan 30 14:00:24.687 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 14:00:24.694554 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:00:24.698992 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:00:24.709453 coreos-metadata[1439]: Jan 30 14:00:24.708 INFO Fetch successful Jan 30 14:00:24.709801 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:00:24.710041 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:00:24.726712 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:00:24.726942 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:00:24.755980 dbus-daemon[1440]: [system] SELinux support is enabled Jan 30 14:00:24.757336 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:00:24.779644 jq[1452]: true Jan 30 14:00:24.775103 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:00:24.775273 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:00:24.780545 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:00:24.780709 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 30 14:00:24.780749 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 30 14:00:24.794273 extend-filesystems[1442]: Found loop4 Jan 30 14:00:24.794273 extend-filesystems[1442]: Found loop5 Jan 30 14:00:24.794273 extend-filesystems[1442]: Found loop6 Jan 30 14:00:24.794273 extend-filesystems[1442]: Found loop7 Jan 30 14:00:24.794273 extend-filesystems[1442]: Found vda Jan 30 14:00:24.794273 extend-filesystems[1442]: Found vda1 Jan 30 14:00:24.794273 extend-filesystems[1442]: Found vda2 Jan 30 14:00:24.794273 extend-filesystems[1442]: Found vda3 Jan 30 14:00:24.794273 extend-filesystems[1442]: Found usr Jan 30 14:00:24.794273 extend-filesystems[1442]: Found vda4 Jan 30 14:00:24.794273 extend-filesystems[1442]: Found vda6 Jan 30 14:00:24.794273 extend-filesystems[1442]: Found vda7 Jan 30 14:00:24.794273 extend-filesystems[1442]: Found vda9 Jan 30 14:00:24.794273 extend-filesystems[1442]: Checking size of /dev/vda9 Jan 30 14:00:24.869841 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:00:24.916412 tar[1456]: linux-amd64/helm Jan 30 14:00:24.916766 update_engine[1451]: I20250130 14:00:24.888187 1451 main.cc:92] Flatcar Update Engine starting Jan 30 14:00:24.916766 update_engine[1451]: I20250130 14:00:24.912618 1451 update_check_scheduler.cc:74] Next update check in 3m23s Jan 30 14:00:24.923342 extend-filesystems[1442]: Resized partition /dev/vda9 Jan 30 14:00:24.895167 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:00:24.937339 jq[1471]: true Jan 30 14:00:24.937740 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:00:24.895544 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 14:00:24.910303 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:00:24.917288 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jan 30 14:00:24.920009 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 14:00:24.927597 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:00:24.971881 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 14:00:25.090269 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1385) Jan 30 14:00:25.093621 systemd-logind[1449]: New seat seat0. Jan 30 14:00:25.103651 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 14:00:25.103688 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 14:00:25.105577 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:00:25.123574 systemd-networkd[1376]: eth1: Gained IPv6LL Jan 30 14:00:25.143189 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:00:25.150845 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:00:25.159396 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:00:25.161370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:00:25.171718 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:00:25.173735 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:00:25.229401 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:00:25.248699 systemd[1]: Starting sshkeys.service... Jan 30 14:00:25.252721 systemd-networkd[1376]: eth0: Gained IPv6LL Jan 30 14:00:25.337166 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:00:25.360880 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 30 14:00:25.371693 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:00:25.384275 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 14:00:25.427325 extend-filesystems[1493]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 14:00:25.427325 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 14:00:25.427325 extend-filesystems[1493]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 14:00:25.439801 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Jan 30 14:00:25.439801 extend-filesystems[1442]: Found vdb Jan 30 14:00:25.436877 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:00:25.438341 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:00:25.532435 coreos-metadata[1521]: Jan 30 14:00:25.530 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 14:00:25.551976 coreos-metadata[1521]: Jan 30 14:00:25.550 INFO Fetch successful Jan 30 14:00:25.573636 unknown[1521]: wrote ssh authorized keys file for user: core Jan 30 14:00:25.602036 containerd[1472]: time="2025-01-30T14:00:25.600596218Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:00:25.649119 update-ssh-keys[1530]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:00:25.648903 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:00:25.662109 systemd[1]: Finished sshkeys.service. Jan 30 14:00:25.751144 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:00:25.758641 containerd[1472]: time="2025-01-30T14:00:25.757940710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 14:00:25.765814 containerd[1472]: time="2025-01-30T14:00:25.765652811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:25.765814 containerd[1472]: time="2025-01-30T14:00:25.765730349Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:00:25.765814 containerd[1472]: time="2025-01-30T14:00:25.765754416Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:00:25.766315 containerd[1472]: time="2025-01-30T14:00:25.765929520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:00:25.766315 containerd[1472]: time="2025-01-30T14:00:25.765962221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:25.766315 containerd[1472]: time="2025-01-30T14:00:25.766048540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:25.766315 containerd[1472]: time="2025-01-30T14:00:25.766075005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:25.766483 containerd[1472]: time="2025-01-30T14:00:25.766372261Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:25.766483 containerd[1472]: time="2025-01-30T14:00:25.766397588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:25.766483 containerd[1472]: time="2025-01-30T14:00:25.766418053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:25.766483 containerd[1472]: time="2025-01-30T14:00:25.766430927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:25.766633 containerd[1472]: time="2025-01-30T14:00:25.766557087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:25.767979 containerd[1472]: time="2025-01-30T14:00:25.766809900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:00:25.767979 containerd[1472]: time="2025-01-30T14:00:25.766947514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:00:25.767979 containerd[1472]: time="2025-01-30T14:00:25.766970309Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:00:25.767979 containerd[1472]: time="2025-01-30T14:00:25.767060030Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 30 14:00:25.767979 containerd[1472]: time="2025-01-30T14:00:25.767111210Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:00:25.797146 containerd[1472]: time="2025-01-30T14:00:25.796553453Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:00:25.797146 containerd[1472]: time="2025-01-30T14:00:25.796653204Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:00:25.797146 containerd[1472]: time="2025-01-30T14:00:25.796688029Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:00:25.797146 containerd[1472]: time="2025-01-30T14:00:25.796759899Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:00:25.797146 containerd[1472]: time="2025-01-30T14:00:25.796782672Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:00:25.797146 containerd[1472]: time="2025-01-30T14:00:25.797087855Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797625688Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797782996Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797799728Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797825185Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797853919Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797873464Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797888938Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797902586Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797919203Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797934087Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797947166Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.797959035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.798022044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.798108 containerd[1472]: time="2025-01-30T14:00:25.798038042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798050873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798074877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798089935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798125080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798140211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798153833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798182249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798196735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798211272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798322462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798342054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798365350Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798425049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798442892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799080 containerd[1472]: time="2025-01-30T14:00:25.798456194Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:00:25.799845 containerd[1472]: time="2025-01-30T14:00:25.798515796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:00:25.799845 containerd[1472]: time="2025-01-30T14:00:25.798535622Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:00:25.799845 containerd[1472]: time="2025-01-30T14:00:25.798547784Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:00:25.799845 containerd[1472]: time="2025-01-30T14:00:25.798560078Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:00:25.799845 containerd[1472]: time="2025-01-30T14:00:25.798583934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.799845 containerd[1472]: time="2025-01-30T14:00:25.798598637Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 30 14:00:25.799845 containerd[1472]: time="2025-01-30T14:00:25.798609087Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:00:25.799845 containerd[1472]: time="2025-01-30T14:00:25.798619653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 14:00:25.801791 containerd[1472]: time="2025-01-30T14:00:25.800144380Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:00:25.801791 containerd[1472]: time="2025-01-30T14:00:25.800262845Z" level=info msg="Connect containerd service" Jan 30 14:00:25.801791 containerd[1472]: time="2025-01-30T14:00:25.800326195Z" level=info msg="using legacy CRI server" Jan 30 14:00:25.801791 containerd[1472]: time="2025-01-30T14:00:25.800338175Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:00:25.801791 containerd[1472]: time="2025-01-30T14:00:25.800661012Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:00:25.806250 containerd[1472]: time="2025-01-30T14:00:25.804818736Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:00:25.806250 containerd[1472]: time="2025-01-30T14:00:25.805023161Z" level=info msg="Start subscribing containerd event" Jan 30 
14:00:25.806250 containerd[1472]: time="2025-01-30T14:00:25.805140258Z" level=info msg="Start recovering state" Jan 30 14:00:25.810079 containerd[1472]: time="2025-01-30T14:00:25.809066625Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:00:25.810079 containerd[1472]: time="2025-01-30T14:00:25.809195918Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:00:25.815574 containerd[1472]: time="2025-01-30T14:00:25.812562778Z" level=info msg="Start event monitor" Jan 30 14:00:25.815574 containerd[1472]: time="2025-01-30T14:00:25.812662625Z" level=info msg="Start snapshots syncer" Jan 30 14:00:25.815574 containerd[1472]: time="2025-01-30T14:00:25.812682652Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:00:25.815574 containerd[1472]: time="2025-01-30T14:00:25.812695259Z" level=info msg="Start streaming server" Jan 30 14:00:25.815574 containerd[1472]: time="2025-01-30T14:00:25.815275048Z" level=info msg="containerd successfully booted in 0.217059s" Jan 30 14:00:25.813077 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:00:25.859098 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:00:25.878756 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:00:25.913351 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:00:25.914453 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:00:25.926744 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:00:25.969567 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:00:25.981657 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:00:25.984975 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 14:00:25.988165 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 30 14:00:26.325092 tar[1456]: linux-amd64/LICENSE Jan 30 14:00:26.327927 tar[1456]: linux-amd64/README.md Jan 30 14:00:26.343540 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:00:26.920742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:00:26.926678 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:00:26.931179 systemd[1]: Startup finished in 1.442s (kernel) + 6.237s (initrd) + 7.401s (userspace) = 15.081s. Jan 30 14:00:26.945290 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:00:27.838565 kubelet[1563]: E0130 14:00:27.838454 1563 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:00:27.841320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:00:27.841504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:00:27.841886 systemd[1]: kubelet.service: Consumed 1.482s CPU time. Jan 30 14:00:34.549504 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:00:34.551127 systemd[1]: Started sshd@0-143.198.106.130:22-147.75.109.163:43014.service - OpenSSH per-connection server daemon (147.75.109.163:43014). Jan 30 14:00:34.651291 sshd[1575]: Accepted publickey for core from 147.75.109.163 port 43014 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:34.654863 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:34.668480 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 30 14:00:34.674728 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:00:34.680100 systemd-logind[1449]: New session 1 of user core. Jan 30 14:00:34.694382 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:00:34.702919 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:00:34.720141 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:00:34.864350 systemd[1579]: Queued start job for default target default.target. Jan 30 14:00:34.873200 systemd[1579]: Created slice app.slice - User Application Slice. Jan 30 14:00:34.873311 systemd[1579]: Reached target paths.target - Paths. Jan 30 14:00:34.873336 systemd[1579]: Reached target timers.target - Timers. Jan 30 14:00:34.875650 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:00:34.893704 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:00:34.893891 systemd[1579]: Reached target sockets.target - Sockets. Jan 30 14:00:34.893918 systemd[1579]: Reached target basic.target - Basic System. Jan 30 14:00:34.893998 systemd[1579]: Reached target default.target - Main User Target. Jan 30 14:00:34.894043 systemd[1579]: Startup finished in 161ms. Jan 30 14:00:34.894374 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:00:34.901580 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:00:34.987361 systemd[1]: Started sshd@1-143.198.106.130:22-147.75.109.163:43020.service - OpenSSH per-connection server daemon (147.75.109.163:43020). 
Jan 30 14:00:35.040138 sshd[1591]: Accepted publickey for core from 147.75.109.163 port 43020 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:35.042682 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:35.048577 systemd-logind[1449]: New session 2 of user core. Jan 30 14:00:35.057577 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:00:35.122625 sshd[1591]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:35.137974 systemd[1]: sshd@1-143.198.106.130:22-147.75.109.163:43020.service: Deactivated successfully. Jan 30 14:00:35.140975 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:00:35.143916 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:00:35.149745 systemd[1]: Started sshd@2-143.198.106.130:22-147.75.109.163:43028.service - OpenSSH per-connection server daemon (147.75.109.163:43028). Jan 30 14:00:35.151911 systemd-logind[1449]: Removed session 2. Jan 30 14:00:35.206895 sshd[1598]: Accepted publickey for core from 147.75.109.163 port 43028 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:35.209100 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:35.216969 systemd-logind[1449]: New session 3 of user core. Jan 30 14:00:35.223585 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:00:35.284021 sshd[1598]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:35.301785 systemd[1]: sshd@2-143.198.106.130:22-147.75.109.163:43028.service: Deactivated successfully. Jan 30 14:00:35.305787 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:00:35.308542 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:00:35.315814 systemd[1]: Started sshd@3-143.198.106.130:22-147.75.109.163:43044.service - OpenSSH per-connection server daemon (147.75.109.163:43044). 
Jan 30 14:00:35.318372 systemd-logind[1449]: Removed session 3. Jan 30 14:00:35.381335 sshd[1605]: Accepted publickey for core from 147.75.109.163 port 43044 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:35.383905 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:35.390408 systemd-logind[1449]: New session 4 of user core. Jan 30 14:00:35.397515 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:00:35.465503 sshd[1605]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:35.480270 systemd[1]: sshd@3-143.198.106.130:22-147.75.109.163:43044.service: Deactivated successfully. Jan 30 14:00:35.482466 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:00:35.485006 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:00:35.486487 systemd[1]: Started sshd@4-143.198.106.130:22-147.75.109.163:43052.service - OpenSSH per-connection server daemon (147.75.109.163:43052). Jan 30 14:00:35.487978 systemd-logind[1449]: Removed session 4. Jan 30 14:00:35.552068 sshd[1612]: Accepted publickey for core from 147.75.109.163 port 43052 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:35.554364 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:35.561714 systemd-logind[1449]: New session 5 of user core. Jan 30 14:00:35.567565 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:00:35.640910 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:00:35.641330 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:00:36.207846 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 30 14:00:36.222939 (dockerd)[1630]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:00:36.688072 dockerd[1630]: time="2025-01-30T14:00:36.687273462Z" level=info msg="Starting up" Jan 30 14:00:36.845465 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3799644527-merged.mount: Deactivated successfully. Jan 30 14:00:36.856090 systemd[1]: var-lib-docker-metacopy\x2dcheck1571623668-merged.mount: Deactivated successfully. Jan 30 14:00:36.883890 dockerd[1630]: time="2025-01-30T14:00:36.883833048Z" level=info msg="Loading containers: start." Jan 30 14:00:37.027358 kernel: Initializing XFRM netlink socket Jan 30 14:00:37.134800 systemd-networkd[1376]: docker0: Link UP Jan 30 14:00:37.153306 dockerd[1630]: time="2025-01-30T14:00:37.153200158Z" level=info msg="Loading containers: done." Jan 30 14:00:37.171584 dockerd[1630]: time="2025-01-30T14:00:37.171435428Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:00:37.172210 dockerd[1630]: time="2025-01-30T14:00:37.171669659Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:00:37.172210 dockerd[1630]: time="2025-01-30T14:00:37.171797629Z" level=info msg="Daemon has completed initialization" Jan 30 14:00:37.232730 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:00:37.233627 dockerd[1630]: time="2025-01-30T14:00:37.233291182Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:00:38.092104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:00:38.099575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 14:00:38.269462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:00:38.285949 (kubelet)[1784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:00:38.427174 containerd[1472]: time="2025-01-30T14:00:38.427017566Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 14:00:38.436403 kubelet[1784]: E0130 14:00:38.436299 1784 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:00:38.442729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:00:38.443031 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:00:39.121734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654553201.mount: Deactivated successfully. 
Jan 30 14:00:40.926027 containerd[1472]: time="2025-01-30T14:00:40.925905860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:40.927673 containerd[1472]: time="2025-01-30T14:00:40.927594169Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 14:00:40.928540 containerd[1472]: time="2025-01-30T14:00:40.928192919Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:40.932249 containerd[1472]: time="2025-01-30T14:00:40.931633928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:40.933871 containerd[1472]: time="2025-01-30T14:00:40.933563707Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.505380219s" Jan 30 14:00:40.933871 containerd[1472]: time="2025-01-30T14:00:40.933630330Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 14:00:40.973646 containerd[1472]: time="2025-01-30T14:00:40.973599509Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 14:00:43.033999 containerd[1472]: time="2025-01-30T14:00:43.032110803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:43.033999 containerd[1472]: time="2025-01-30T14:00:43.033518153Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 14:00:43.035010 containerd[1472]: time="2025-01-30T14:00:43.034958509Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:43.040282 containerd[1472]: time="2025-01-30T14:00:43.040180806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:43.042096 containerd[1472]: time="2025-01-30T14:00:43.041394047Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.067468612s" Jan 30 14:00:43.042096 containerd[1472]: time="2025-01-30T14:00:43.041538662Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 14:00:43.090991 containerd[1472]: time="2025-01-30T14:00:43.090919445Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 14:00:44.508103 containerd[1472]: time="2025-01-30T14:00:44.507963121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:44.511047 containerd[1472]: time="2025-01-30T14:00:44.510346945Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 14:00:44.512526 containerd[1472]: time="2025-01-30T14:00:44.512399970Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:44.519105 containerd[1472]: time="2025-01-30T14:00:44.518245341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:44.520021 containerd[1472]: time="2025-01-30T14:00:44.519968205Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.428603632s" Jan 30 14:00:44.520021 containerd[1472]: time="2025-01-30T14:00:44.520016112Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 14:00:44.568104 containerd[1472]: time="2025-01-30T14:00:44.568030423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:00:44.608013 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 30 14:00:45.909910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232068937.mount: Deactivated successfully. 
Jan 30 14:00:46.540905 containerd[1472]: time="2025-01-30T14:00:46.540647244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:46.542879 containerd[1472]: time="2025-01-30T14:00:46.542809423Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 14:00:46.543617 containerd[1472]: time="2025-01-30T14:00:46.543533859Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:46.546735 containerd[1472]: time="2025-01-30T14:00:46.546684631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:46.547814 containerd[1472]: time="2025-01-30T14:00:46.547739812Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.979649001s" Jan 30 14:00:46.547814 containerd[1472]: time="2025-01-30T14:00:46.547811579Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 14:00:46.582816 containerd[1472]: time="2025-01-30T14:00:46.582739126Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:00:47.185460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860093685.mount: Deactivated successfully. 
Jan 30 14:00:47.715535 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 30 14:00:48.335410 containerd[1472]: time="2025-01-30T14:00:48.335325002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:48.338315 containerd[1472]: time="2025-01-30T14:00:48.338190135Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 14:00:48.342419 containerd[1472]: time="2025-01-30T14:00:48.341995018Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:48.352958 containerd[1472]: time="2025-01-30T14:00:48.352556442Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.769746784s" Jan 30 14:00:48.352958 containerd[1472]: time="2025-01-30T14:00:48.352785482Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 14:00:48.355477 containerd[1472]: time="2025-01-30T14:00:48.353950113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:48.396371 containerd[1472]: time="2025-01-30T14:00:48.396321217Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 14:00:48.693699 systemd[1]: kubelet.service: Scheduled restart job, 
restart counter is at 2. Jan 30 14:00:48.701620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:00:48.867600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:00:48.878820 (kubelet)[1942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:00:48.953419 kubelet[1942]: E0130 14:00:48.953172 1942 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:00:48.955981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:00:48.956192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:00:48.999209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494619034.mount: Deactivated successfully. 
Jan 30 14:00:49.010330 containerd[1472]: time="2025-01-30T14:00:49.009535720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:49.011934 containerd[1472]: time="2025-01-30T14:00:49.011614988Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 14:00:49.013548 containerd[1472]: time="2025-01-30T14:00:49.013466568Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:49.017757 containerd[1472]: time="2025-01-30T14:00:49.017662321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:49.019296 containerd[1472]: time="2025-01-30T14:00:49.018603569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 622.231563ms" Jan 30 14:00:49.019296 containerd[1472]: time="2025-01-30T14:00:49.018655699Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 14:00:49.051817 containerd[1472]: time="2025-01-30T14:00:49.051511243Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 14:00:49.594824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539405715.mount: Deactivated successfully. 
Jan 30 14:00:51.872065 containerd[1472]: time="2025-01-30T14:00:51.871999670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:51.875211 containerd[1472]: time="2025-01-30T14:00:51.875146074Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 14:00:51.883531 containerd[1472]: time="2025-01-30T14:00:51.883047775Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:51.888939 containerd[1472]: time="2025-01-30T14:00:51.888857830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:00:51.891038 containerd[1472]: time="2025-01-30T14:00:51.890972591Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.839407948s" Jan 30 14:00:51.891809 containerd[1472]: time="2025-01-30T14:00:51.891275629Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 14:00:56.429857 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:00:56.441819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:00:56.474991 systemd[1]: Reloading requested from client PID 2070 ('systemctl') (unit session-5.scope)... Jan 30 14:00:56.475017 systemd[1]: Reloading... 
Jan 30 14:00:56.669250 zram_generator::config[2115]: No configuration found. Jan 30 14:00:56.834743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:00:56.959391 systemd[1]: Reloading finished in 483 ms. Jan 30 14:00:57.028077 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:00:57.028274 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:00:57.029406 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:00:57.037584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:00:57.221601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:00:57.221944 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:00:57.306129 kubelet[2163]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:00:57.306129 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 14:00:57.306129 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 14:00:57.309308 kubelet[2163]: I0130 14:00:57.309164 2163 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:00:58.324868 kubelet[2163]: I0130 14:00:58.324779 2163 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 14:00:58.324868 kubelet[2163]: I0130 14:00:58.324839 2163 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:00:58.325498 kubelet[2163]: I0130 14:00:58.325261 2163 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 14:00:58.351617 kubelet[2163]: I0130 14:00:58.350940 2163 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:00:58.352335 kubelet[2163]: E0130 14:00:58.352212 2163 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.106.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:58.370080 kubelet[2163]: I0130 14:00:58.370024 2163 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:00:58.371632 kubelet[2163]: I0130 14:00:58.371538 2163 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:00:58.371873 kubelet[2163]: I0130 14:00:58.371610 2163 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-5-054816032d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 14:00:58.372485 kubelet[2163]: I0130 14:00:58.372451 2163 topology_manager.go:138] "Creating topology manager with none policy" Jan 
30 14:00:58.372485 kubelet[2163]: I0130 14:00:58.372487 2163 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 14:00:58.372835 kubelet[2163]: I0130 14:00:58.372793 2163 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:00:58.373712 kubelet[2163]: I0130 14:00:58.373675 2163 kubelet.go:400] "Attempting to sync node with API server" Jan 30 14:00:58.373712 kubelet[2163]: I0130 14:00:58.373702 2163 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:00:58.375378 kubelet[2163]: I0130 14:00:58.373728 2163 kubelet.go:312] "Adding apiserver pod source" Jan 30 14:00:58.375378 kubelet[2163]: I0130 14:00:58.373744 2163 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:00:58.377832 kubelet[2163]: W0130 14:00:58.377757 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.106.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:58.377832 kubelet[2163]: E0130 14:00:58.377827 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.106.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:58.378128 kubelet[2163]: W0130 14:00:58.378091 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.106.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-5-054816032d&limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:58.378128 kubelet[2163]: E0130 14:00:58.378130 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://143.198.106.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-5-054816032d&limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:58.378661 kubelet[2163]: I0130 14:00:58.378629 2163 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:00:58.381495 kubelet[2163]: I0130 14:00:58.380523 2163 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:00:58.381495 kubelet[2163]: W0130 14:00:58.380820 2163 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:00:58.382992 kubelet[2163]: I0130 14:00:58.382418 2163 server.go:1264] "Started kubelet" Jan 30 14:00:58.395196 kubelet[2163]: I0130 14:00:58.395114 2163 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:00:58.397851 kubelet[2163]: E0130 14:00:58.397374 2163 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.106.130:6443/api/v1/namespaces/default/events\": dial tcp 143.198.106.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-5-054816032d.181f7d37783f2228 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-5-054816032d,UID:ci-4081.3.0-5-054816032d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-5-054816032d,},FirstTimestamp:2025-01-30 14:00:58.382377512 +0000 UTC m=+1.152579042,LastTimestamp:2025-01-30 14:00:58.382377512 +0000 UTC m=+1.152579042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-5-054816032d,}" Jan 30 14:00:58.401309 kubelet[2163]: I0130 14:00:58.401275 2163 
server.go:455] "Adding debug handlers to kubelet server" Jan 30 14:00:58.401713 kubelet[2163]: I0130 14:00:58.401688 2163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:00:58.402851 kubelet[2163]: I0130 14:00:58.402636 2163 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:00:58.406698 kubelet[2163]: I0130 14:00:58.406672 2163 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:00:58.406993 kubelet[2163]: I0130 14:00:58.405359 2163 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:00:58.407471 kubelet[2163]: W0130 14:00:58.407368 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.106.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:58.407471 kubelet[2163]: E0130 14:00:58.407433 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.106.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:58.407979 kubelet[2163]: E0130 14:00:58.407659 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.106.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-5-054816032d?timeout=10s\": dial tcp 143.198.106.130:6443: connect: connection refused" interval="200ms" Jan 30 14:00:58.407979 kubelet[2163]: I0130 14:00:58.405330 2163 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 14:00:58.408800 kubelet[2163]: I0130 14:00:58.408587 2163 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:00:58.409189 kubelet[2163]: E0130 14:00:58.409164 2163 kubelet.go:1467] 
"Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:00:58.409497 kubelet[2163]: I0130 14:00:58.409462 2163 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:00:58.411809 kubelet[2163]: I0130 14:00:58.411653 2163 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:00:58.411809 kubelet[2163]: I0130 14:00:58.411675 2163 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:00:58.431288 kubelet[2163]: I0130 14:00:58.431031 2163 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:00:58.431288 kubelet[2163]: I0130 14:00:58.431060 2163 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:00:58.431288 kubelet[2163]: I0130 14:00:58.431088 2163 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:00:58.439429 kubelet[2163]: I0130 14:00:58.439381 2163 policy_none.go:49] "None policy: Start" Jan 30 14:00:58.443059 kubelet[2163]: I0130 14:00:58.442639 2163 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:00:58.443059 kubelet[2163]: I0130 14:00:58.442679 2163 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:00:58.446163 kubelet[2163]: I0130 14:00:58.446081 2163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:00:58.448403 kubelet[2163]: I0130 14:00:58.447645 2163 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:00:58.448403 kubelet[2163]: I0130 14:00:58.447668 2163 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:00:58.448403 kubelet[2163]: I0130 14:00:58.447691 2163 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 14:00:58.448403 kubelet[2163]: E0130 14:00:58.447740 2163 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:00:58.454070 kubelet[2163]: W0130 14:00:58.453988 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.106.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:58.454070 kubelet[2163]: E0130 14:00:58.454070 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.106.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:58.460123 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:00:58.477507 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:00:58.481839 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 14:00:58.494255 kubelet[2163]: I0130 14:00:58.493996 2163 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:00:58.494440 kubelet[2163]: I0130 14:00:58.494358 2163 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:00:58.494773 kubelet[2163]: I0130 14:00:58.494553 2163 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:00:58.499039 kubelet[2163]: E0130 14:00:58.498961 2163 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-5-054816032d\" not found" Jan 30 14:00:58.507631 kubelet[2163]: I0130 14:00:58.507155 2163 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-5-054816032d" Jan 30 14:00:58.507631 kubelet[2163]: E0130 14:00:58.507582 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.106.130:6443/api/v1/nodes\": dial tcp 143.198.106.130:6443: connect: connection refused" node="ci-4081.3.0-5-054816032d" Jan 30 14:00:58.548371 kubelet[2163]: I0130 14:00:58.548255 2163 topology_manager.go:215] "Topology Admit Handler" podUID="25be7df92b2e3158917ecca012b8a886" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.551071 kubelet[2163]: I0130 14:00:58.550163 2163 topology_manager.go:215] "Topology Admit Handler" podUID="48e79947874395423fdf5ff9f2a911ea" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.551898 kubelet[2163]: I0130 14:00:58.551858 2163 topology_manager.go:215] "Topology Admit Handler" podUID="155c10b389ece0a3dd0a820580d89eb8" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.564407 systemd[1]: Created slice kubepods-burstable-pod48e79947874395423fdf5ff9f2a911ea.slice - libcontainer container 
kubepods-burstable-pod48e79947874395423fdf5ff9f2a911ea.slice. Jan 30 14:00:58.577685 systemd[1]: Created slice kubepods-burstable-pod25be7df92b2e3158917ecca012b8a886.slice - libcontainer container kubepods-burstable-pod25be7df92b2e3158917ecca012b8a886.slice. Jan 30 14:00:58.588594 systemd[1]: Created slice kubepods-burstable-pod155c10b389ece0a3dd0a820580d89eb8.slice - libcontainer container kubepods-burstable-pod155c10b389ece0a3dd0a820580d89eb8.slice. Jan 30 14:00:58.608375 kubelet[2163]: E0130 14:00:58.608322 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.106.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-5-054816032d?timeout=10s\": dial tcp 143.198.106.130:6443: connect: connection refused" interval="400ms" Jan 30 14:00:58.609431 kubelet[2163]: I0130 14:00:58.609360 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25be7df92b2e3158917ecca012b8a886-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-5-054816032d\" (UID: \"25be7df92b2e3158917ecca012b8a886\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.609431 kubelet[2163]: I0130 14:00:58.609423 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25be7df92b2e3158917ecca012b8a886-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-5-054816032d\" (UID: \"25be7df92b2e3158917ecca012b8a886\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.609590 kubelet[2163]: I0130 14:00:58.609445 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25be7df92b2e3158917ecca012b8a886-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081.3.0-5-054816032d\" (UID: \"25be7df92b2e3158917ecca012b8a886\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.609590 kubelet[2163]: I0130 14:00:58.609462 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48e79947874395423fdf5ff9f2a911ea-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-5-054816032d\" (UID: \"48e79947874395423fdf5ff9f2a911ea\") " pod="kube-system/kube-scheduler-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.609590 kubelet[2163]: I0130 14:00:58.609484 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/25be7df92b2e3158917ecca012b8a886-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-5-054816032d\" (UID: \"25be7df92b2e3158917ecca012b8a886\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.609590 kubelet[2163]: I0130 14:00:58.609499 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25be7df92b2e3158917ecca012b8a886-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-5-054816032d\" (UID: \"25be7df92b2e3158917ecca012b8a886\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.609590 kubelet[2163]: I0130 14:00:58.609516 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/155c10b389ece0a3dd0a820580d89eb8-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-5-054816032d\" (UID: \"155c10b389ece0a3dd0a820580d89eb8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.609720 kubelet[2163]: I0130 14:00:58.609530 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/155c10b389ece0a3dd0a820580d89eb8-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-5-054816032d\" (UID: \"155c10b389ece0a3dd0a820580d89eb8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.609720 kubelet[2163]: I0130 14:00:58.609549 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/155c10b389ece0a3dd0a820580d89eb8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-5-054816032d\" (UID: \"155c10b389ece0a3dd0a820580d89eb8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-5-054816032d" Jan 30 14:00:58.709291 kubelet[2163]: I0130 14:00:58.709243 2163 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-5-054816032d" Jan 30 14:00:58.709949 kubelet[2163]: E0130 14:00:58.709701 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.106.130:6443/api/v1/nodes\": dial tcp 143.198.106.130:6443: connect: connection refused" node="ci-4081.3.0-5-054816032d" Jan 30 14:00:58.875094 kubelet[2163]: E0130 14:00:58.874953 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:58.876336 containerd[1472]: time="2025-01-30T14:00:58.876180151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-5-054816032d,Uid:48e79947874395423fdf5ff9f2a911ea,Namespace:kube-system,Attempt:0,}" Jan 30 14:00:58.879011 systemd-resolved[1325]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Jan 30 14:00:58.886076 kubelet[2163]: E0130 14:00:58.886032 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:58.892448 kubelet[2163]: E0130 14:00:58.892031 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:58.892724 containerd[1472]: time="2025-01-30T14:00:58.892404267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-5-054816032d,Uid:25be7df92b2e3158917ecca012b8a886,Namespace:kube-system,Attempt:0,}" Jan 30 14:00:58.893100 containerd[1472]: time="2025-01-30T14:00:58.893047694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-5-054816032d,Uid:155c10b389ece0a3dd0a820580d89eb8,Namespace:kube-system,Attempt:0,}" Jan 30 14:00:59.009491 kubelet[2163]: E0130 14:00:59.009410 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.106.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-5-054816032d?timeout=10s\": dial tcp 143.198.106.130:6443: connect: connection refused" interval="800ms" Jan 30 14:00:59.111674 kubelet[2163]: I0130 14:00:59.111632 2163 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-5-054816032d" Jan 30 14:00:59.112052 kubelet[2163]: E0130 14:00:59.112026 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.106.130:6443/api/v1/nodes\": dial tcp 143.198.106.130:6443: connect: connection refused" node="ci-4081.3.0-5-054816032d" Jan 30 14:00:59.390134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3981510297.mount: Deactivated successfully. 
Jan 30 14:00:59.399955 containerd[1472]: time="2025-01-30T14:00:59.399876703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:00:59.401313 containerd[1472]: time="2025-01-30T14:00:59.401210171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 14:00:59.402891 containerd[1472]: time="2025-01-30T14:00:59.402700096Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:00:59.404734 containerd[1472]: time="2025-01-30T14:00:59.404526196Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:00:59.406436 containerd[1472]: time="2025-01-30T14:00:59.405854361Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:00:59.406436 containerd[1472]: time="2025-01-30T14:00:59.406341378Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:00:59.406436 containerd[1472]: time="2025-01-30T14:00:59.406397664Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:00:59.411884 containerd[1472]: time="2025-01-30T14:00:59.411819786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:00:59.413416 
containerd[1472]: time="2025-01-30T14:00:59.413022659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.895988ms" Jan 30 14:00:59.415290 containerd[1472]: time="2025-01-30T14:00:59.414667798Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.260979ms" Jan 30 14:00:59.418628 containerd[1472]: time="2025-01-30T14:00:59.418454513Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.951764ms" Jan 30 14:00:59.489306 kubelet[2163]: W0130 14:00:59.489175 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.106.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-5-054816032d&limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:59.494513 kubelet[2163]: E0130 14:00:59.490811 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.106.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-5-054816032d&limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:59.611162 kubelet[2163]: W0130 14:00:59.611070 2163 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.106.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:59.612154 kubelet[2163]: E0130 14:00:59.611578 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.106.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused Jan 30 14:00:59.648340 containerd[1472]: time="2025-01-30T14:00:59.647736656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:00:59.648340 containerd[1472]: time="2025-01-30T14:00:59.647932350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:00:59.648340 containerd[1472]: time="2025-01-30T14:00:59.648095393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:59.650879 containerd[1472]: time="2025-01-30T14:00:59.650578932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:59.658724 containerd[1472]: time="2025-01-30T14:00:59.658609353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:00:59.658724 containerd[1472]: time="2025-01-30T14:00:59.658669492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:00:59.659106 containerd[1472]: time="2025-01-30T14:00:59.658684244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:59.659106 containerd[1472]: time="2025-01-30T14:00:59.658771919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:59.659106 containerd[1472]: time="2025-01-30T14:00:59.658858043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:00:59.659106 containerd[1472]: time="2025-01-30T14:00:59.658902178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:00:59.659106 containerd[1472]: time="2025-01-30T14:00:59.658912980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:59.659106 containerd[1472]: time="2025-01-30T14:00:59.658978827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:59.695542 systemd[1]: Started cri-containerd-d034139c313b69245c539acc07ea008274b1eff4675a90827a28098d4ec2c538.scope - libcontainer container d034139c313b69245c539acc07ea008274b1eff4675a90827a28098d4ec2c538. Jan 30 14:00:59.712511 systemd[1]: Started cri-containerd-267b25a1d755e11dc4e067f4eab78d0b1310d1e3faa16f088400e1eb6453f341.scope - libcontainer container 267b25a1d755e11dc4e067f4eab78d0b1310d1e3faa16f088400e1eb6453f341. Jan 30 14:00:59.715827 systemd[1]: Started cri-containerd-7959810c6c04a96f02b3d43792dc2a7a8d7ed7d46b492bdb71cf3ca940e4df81.scope - libcontainer container 7959810c6c04a96f02b3d43792dc2a7a8d7ed7d46b492bdb71cf3ca940e4df81. 
Jan 30 14:00:59.733235 kubelet[2163]: W0130 14:00:59.733150 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.106.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused
Jan 30 14:00:59.733235 kubelet[2163]: E0130 14:00:59.733209 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.106.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused
Jan 30 14:00:59.797322 containerd[1472]: time="2025-01-30T14:00:59.796058923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-5-054816032d,Uid:48e79947874395423fdf5ff9f2a911ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"d034139c313b69245c539acc07ea008274b1eff4675a90827a28098d4ec2c538\""
Jan 30 14:00:59.798899 kubelet[2163]: E0130 14:00:59.798756 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:59.810290 kubelet[2163]: E0130 14:00:59.810134 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.106.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-5-054816032d?timeout=10s\": dial tcp 143.198.106.130:6443: connect: connection refused" interval="1.6s"
Jan 30 14:00:59.810679 containerd[1472]: time="2025-01-30T14:00:59.810467942Z" level=info msg="CreateContainer within sandbox \"d034139c313b69245c539acc07ea008274b1eff4675a90827a28098d4ec2c538\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 14:00:59.844270 containerd[1472]: time="2025-01-30T14:00:59.843115171Z" level=info msg="CreateContainer within sandbox \"d034139c313b69245c539acc07ea008274b1eff4675a90827a28098d4ec2c538\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"26089e4fcade768dd4cb7d3f5124f9baf98b60d335b61a540c6e5d2205e8dbd3\""
Jan 30 14:00:59.846187 containerd[1472]: time="2025-01-30T14:00:59.846124718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-5-054816032d,Uid:155c10b389ece0a3dd0a820580d89eb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7959810c6c04a96f02b3d43792dc2a7a8d7ed7d46b492bdb71cf3ca940e4df81\""
Jan 30 14:00:59.846487 containerd[1472]: time="2025-01-30T14:00:59.846189463Z" level=info msg="StartContainer for \"26089e4fcade768dd4cb7d3f5124f9baf98b60d335b61a540c6e5d2205e8dbd3\""
Jan 30 14:00:59.847719 kubelet[2163]: E0130 14:00:59.847686 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:59.852130 containerd[1472]: time="2025-01-30T14:00:59.852078502Z" level=info msg="CreateContainer within sandbox \"7959810c6c04a96f02b3d43792dc2a7a8d7ed7d46b492bdb71cf3ca940e4df81\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 14:00:59.857963 containerd[1472]: time="2025-01-30T14:00:59.857909886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-5-054816032d,Uid:25be7df92b2e3158917ecca012b8a886,Namespace:kube-system,Attempt:0,} returns sandbox id \"267b25a1d755e11dc4e067f4eab78d0b1310d1e3faa16f088400e1eb6453f341\""
Jan 30 14:00:59.860552 kubelet[2163]: E0130 14:00:59.860242 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:00:59.870423 containerd[1472]: time="2025-01-30T14:00:59.870104830Z" level=info msg="CreateContainer within sandbox \"267b25a1d755e11dc4e067f4eab78d0b1310d1e3faa16f088400e1eb6453f341\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 14:00:59.889039 containerd[1472]: time="2025-01-30T14:00:59.888960128Z" level=info msg="CreateContainer within sandbox \"7959810c6c04a96f02b3d43792dc2a7a8d7ed7d46b492bdb71cf3ca940e4df81\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6df6a8bcaed5bd39674f1b3e9af2623826efabda7d62fda692b36ac35d3ac846\""
Jan 30 14:00:59.891547 containerd[1472]: time="2025-01-30T14:00:59.891490804Z" level=info msg="StartContainer for \"6df6a8bcaed5bd39674f1b3e9af2623826efabda7d62fda692b36ac35d3ac846\""
Jan 30 14:00:59.899114 kubelet[2163]: W0130 14:00:59.898881 2163 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.106.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused
Jan 30 14:00:59.899114 kubelet[2163]: E0130 14:00:59.898978 2163 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.106.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.106.130:6443: connect: connection refused
Jan 30 14:00:59.913575 systemd[1]: Started cri-containerd-26089e4fcade768dd4cb7d3f5124f9baf98b60d335b61a540c6e5d2205e8dbd3.scope - libcontainer container 26089e4fcade768dd4cb7d3f5124f9baf98b60d335b61a540c6e5d2205e8dbd3.
Jan 30 14:00:59.914306 kubelet[2163]: I0130 14:00:59.914024 2163 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-5-054816032d"
Jan 30 14:00:59.915565 kubelet[2163]: E0130 14:00:59.915509 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.106.130:6443/api/v1/nodes\": dial tcp 143.198.106.130:6443: connect: connection refused" node="ci-4081.3.0-5-054816032d"
Jan 30 14:00:59.924976 containerd[1472]: time="2025-01-30T14:00:59.923199015Z" level=info msg="CreateContainer within sandbox \"267b25a1d755e11dc4e067f4eab78d0b1310d1e3faa16f088400e1eb6453f341\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf418a3a65d023eec9526b5c6c6a21f08b1c262555bc160b579c71dd4c86d7bf\""
Jan 30 14:00:59.927248 containerd[1472]: time="2025-01-30T14:00:59.925774471Z" level=info msg="StartContainer for \"bf418a3a65d023eec9526b5c6c6a21f08b1c262555bc160b579c71dd4c86d7bf\""
Jan 30 14:00:59.955551 systemd[1]: Started cri-containerd-6df6a8bcaed5bd39674f1b3e9af2623826efabda7d62fda692b36ac35d3ac846.scope - libcontainer container 6df6a8bcaed5bd39674f1b3e9af2623826efabda7d62fda692b36ac35d3ac846.
Jan 30 14:01:00.011618 systemd[1]: Started cri-containerd-bf418a3a65d023eec9526b5c6c6a21f08b1c262555bc160b579c71dd4c86d7bf.scope - libcontainer container bf418a3a65d023eec9526b5c6c6a21f08b1c262555bc160b579c71dd4c86d7bf.
Jan 30 14:01:00.037788 containerd[1472]: time="2025-01-30T14:01:00.037709261Z" level=info msg="StartContainer for \"26089e4fcade768dd4cb7d3f5124f9baf98b60d335b61a540c6e5d2205e8dbd3\" returns successfully"
Jan 30 14:01:00.082301 containerd[1472]: time="2025-01-30T14:01:00.082052087Z" level=info msg="StartContainer for \"6df6a8bcaed5bd39674f1b3e9af2623826efabda7d62fda692b36ac35d3ac846\" returns successfully"
Jan 30 14:01:00.145523 containerd[1472]: time="2025-01-30T14:01:00.145453947Z" level=info msg="StartContainer for \"bf418a3a65d023eec9526b5c6c6a21f08b1c262555bc160b579c71dd4c86d7bf\" returns successfully"
Jan 30 14:01:00.492125 kubelet[2163]: E0130 14:01:00.492071 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:00.503286 kubelet[2163]: E0130 14:01:00.502422 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:00.507242 kubelet[2163]: E0130 14:01:00.505200 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:00.535399 kubelet[2163]: E0130 14:01:00.535352 2163 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.106.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.106.130:6443: connect: connection refused
Jan 30 14:01:01.513157 kubelet[2163]: E0130 14:01:01.513088 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:01.519024 kubelet[2163]: I0130 14:01:01.518963 2163 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-5-054816032d"
Jan 30 14:01:02.699749 kubelet[2163]: E0130 14:01:02.699701 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:03.220245 kubelet[2163]: I0130 14:01:03.220112 2163 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-5-054816032d"
Jan 30 14:01:03.299727 kubelet[2163]: E0130 14:01:03.299673 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Jan 30 14:01:03.381624 kubelet[2163]: I0130 14:01:03.381277 2163 apiserver.go:52] "Watching apiserver"
Jan 30 14:01:03.408576 kubelet[2163]: I0130 14:01:03.407931 2163 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 14:01:05.684579 systemd[1]: Reloading requested from client PID 2436 ('systemctl') (unit session-5.scope)...
Jan 30 14:01:05.684611 systemd[1]: Reloading...
Jan 30 14:01:05.799073 zram_generator::config[2474]: No configuration found.
Jan 30 14:01:05.999389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:01:06.130602 systemd[1]: Reloading finished in 445 ms.
Jan 30 14:01:06.187891 kubelet[2163]: I0130 14:01:06.187809 2163 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 14:01:06.187902 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:01:06.201903 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 14:01:06.202130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:01:06.202200 systemd[1]: kubelet.service: Consumed 1.679s CPU time, 112.2M memory peak, 0B memory swap peak.
Jan 30 14:01:06.208851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:01:06.385833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:01:06.401850 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 14:01:06.498272 kubelet[2525]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:01:06.498272 kubelet[2525]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 14:01:06.498272 kubelet[2525]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:01:06.499766 kubelet[2525]: I0130 14:01:06.498365 2525 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 14:01:06.507648 kubelet[2525]: I0130 14:01:06.507199 2525 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 14:01:06.507648 kubelet[2525]: I0130 14:01:06.507290 2525 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 14:01:06.507648 kubelet[2525]: I0130 14:01:06.507554 2525 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 14:01:06.509480 kubelet[2525]: I0130 14:01:06.509442 2525 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 14:01:06.512185 kubelet[2525]: I0130 14:01:06.512030 2525 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 14:01:06.521825 kubelet[2525]: I0130 14:01:06.521783 2525 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 14:01:06.522195 kubelet[2525]: I0130 14:01:06.522138 2525 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 14:01:06.522410 kubelet[2525]: I0130 14:01:06.522179 2525 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-5-054816032d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 14:01:06.522522 kubelet[2525]: I0130 14:01:06.522432 2525 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 14:01:06.522522 kubelet[2525]: I0130 14:01:06.522444 2525 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 14:01:06.522522 kubelet[2525]: I0130 14:01:06.522507 2525 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:01:06.522657 kubelet[2525]: I0130 14:01:06.522643 2525 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 14:01:06.523134 kubelet[2525]: I0130 14:01:06.523115 2525 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 14:01:06.523195 kubelet[2525]: I0130 14:01:06.523158 2525 kubelet.go:312] "Adding apiserver pod source"
Jan 30 14:01:06.523195 kubelet[2525]: I0130 14:01:06.523173 2525 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 14:01:06.525285 kubelet[2525]: I0130 14:01:06.525096 2525 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 14:01:06.527236 kubelet[2525]: I0130 14:01:06.527066 2525 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 14:01:06.527649 kubelet[2525]: I0130 14:01:06.527609 2525 server.go:1264] "Started kubelet"
Jan 30 14:01:06.533418 kubelet[2525]: I0130 14:01:06.533243 2525 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 14:01:06.541876 kubelet[2525]: I0130 14:01:06.541814 2525 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 14:01:06.547319 kubelet[2525]: I0130 14:01:06.545109 2525 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 14:01:06.571287 kubelet[2525]: I0130 14:01:06.552434 2525 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 14:01:06.571478 kubelet[2525]: I0130 14:01:06.571461 2525 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 14:01:06.571519 kubelet[2525]: I0130 14:01:06.555143 2525 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 14:01:06.571519 kubelet[2525]: I0130 14:01:06.555130 2525 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 14:01:06.571594 kubelet[2525]: I0130 14:01:06.561709 2525 factory.go:221] Registration of the systemd container factory successfully
Jan 30 14:01:06.571729 kubelet[2525]: I0130 14:01:06.571680 2525 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 14:01:06.572578 kubelet[2525]: I0130 14:01:06.572528 2525 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 14:01:06.575051 kubelet[2525]: E0130 14:01:06.573605 2525 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 14:01:06.575051 kubelet[2525]: I0130 14:01:06.574137 2525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 14:01:06.575232 kubelet[2525]: I0130 14:01:06.575143 2525 factory.go:221] Registration of the containerd container factory successfully
Jan 30 14:01:06.580324 kubelet[2525]: I0130 14:01:06.580277 2525 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 14:01:06.580962 kubelet[2525]: I0130 14:01:06.580893 2525 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 14:01:06.585305 kubelet[2525]: I0130 14:01:06.584960 2525 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 14:01:06.585305 kubelet[2525]: E0130 14:01:06.585032 2525 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 14:01:06.651725 kubelet[2525]: I0130 14:01:06.651601 2525 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 14:01:06.651725 kubelet[2525]: I0130 14:01:06.651630 2525 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 14:01:06.651725 kubelet[2525]: I0130 14:01:06.651663 2525 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:01:06.651972 kubelet[2525]: I0130 14:01:06.651943 2525 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 14:01:06.652001 kubelet[2525]: I0130 14:01:06.651961 2525 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 14:01:06.652031 kubelet[2525]: I0130 14:01:06.652001 2525 policy_none.go:49] "None policy: Start"
Jan 30 14:01:06.654486 kubelet[2525]: I0130 14:01:06.654443 2525 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 14:01:06.654486 kubelet[2525]: I0130 14:01:06.654486 2525 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 14:01:06.654752 kubelet[2525]: I0130 14:01:06.654725 2525 state_mem.go:75] "Updated machine memory state"
Jan 30 14:01:06.657406 kubelet[2525]: I0130 14:01:06.657381 2525 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.672496 kubelet[2525]: I0130 14:01:06.672153 2525 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.672496 kubelet[2525]: I0130 14:01:06.672256 2525 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.674250 kubelet[2525]: I0130 14:01:06.674191 2525 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 14:01:06.674589 kubelet[2525]: I0130 14:01:06.674516 2525 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 14:01:06.674742 kubelet[2525]: I0130 14:01:06.674722 2525 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 14:01:06.686051 kubelet[2525]: I0130 14:01:06.685849 2525 topology_manager.go:215] "Topology Admit Handler" podUID="155c10b389ece0a3dd0a820580d89eb8" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.686863 kubelet[2525]: I0130 14:01:06.686806 2525 topology_manager.go:215] "Topology Admit Handler" podUID="25be7df92b2e3158917ecca012b8a886" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.687258 kubelet[2525]: I0130 14:01:06.687076 2525 topology_manager.go:215] "Topology Admit Handler" podUID="48e79947874395423fdf5ff9f2a911ea" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.732171 kubelet[2525]: W0130 14:01:06.732131 2525 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 14:01:06.734056 kubelet[2525]: W0130 14:01:06.733691 2525 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 14:01:06.736914 kubelet[2525]: W0130 14:01:06.736523 2525 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 30 14:01:06.773324 kubelet[2525]: I0130 14:01:06.773181 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/25be7df92b2e3158917ecca012b8a886-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-5-054816032d\" (UID: \"25be7df92b2e3158917ecca012b8a886\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.773324 kubelet[2525]: I0130 14:01:06.773251 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/155c10b389ece0a3dd0a820580d89eb8-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-5-054816032d\" (UID: \"155c10b389ece0a3dd0a820580d89eb8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.773324 kubelet[2525]: I0130 14:01:06.773276 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/155c10b389ece0a3dd0a820580d89eb8-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-5-054816032d\" (UID: \"155c10b389ece0a3dd0a820580d89eb8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.773324 kubelet[2525]: I0130 14:01:06.773293 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/155c10b389ece0a3dd0a820580d89eb8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-5-054816032d\" (UID: \"155c10b389ece0a3dd0a820580d89eb8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.773324 kubelet[2525]: I0130 14:01:06.773310 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25be7df92b2e3158917ecca012b8a886-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-5-054816032d\" (UID: \"25be7df92b2e3158917ecca012b8a886\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.874523 kubelet[2525]: I0130 14:01:06.874107 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25be7df92b2e3158917ecca012b8a886-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-5-054816032d\" (UID: \"25be7df92b2e3158917ecca012b8a886\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.874523 kubelet[2525]: I0130 14:01:06.874166 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25be7df92b2e3158917ecca012b8a886-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-5-054816032d\" (UID: \"25be7df92b2e3158917ecca012b8a886\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.874523 kubelet[2525]: I0130 14:01:06.874185 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25be7df92b2e3158917ecca012b8a886-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-5-054816032d\" (UID: \"25be7df92b2e3158917ecca012b8a886\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-5-054816032d"
Jan 30 14:01:06.874523 kubelet[2525]: I0130 14:01:06.874202 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48e79947874395423fdf5ff9f2a911ea-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-5-054816032d\" (UID: \"48e79947874395423fdf5ff9f2a911ea\") " pod="kube-system/kube-scheduler-ci-4081.3.0-5-054816032d"
Jan 30 14:01:07.036204 kubelet[2525]: E0130 14:01:07.035444 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:07.036204 kubelet[2525]: E0130 14:01:07.036126 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:07.038003 kubelet[2525]: E0130 14:01:07.037937 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:07.541254 kubelet[2525]: I0130 14:01:07.540923 2525 apiserver.go:52] "Watching apiserver"
Jan 30 14:01:07.572832 kubelet[2525]: I0130 14:01:07.572777 2525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 14:01:07.626588 kubelet[2525]: E0130 14:01:07.624178 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:07.626588 kubelet[2525]: E0130 14:01:07.625130 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:07.627429 kubelet[2525]: E0130 14:01:07.627391 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:07.719832 kubelet[2525]: I0130 14:01:07.719753 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-5-054816032d" podStartSLOduration=1.7197276590000001 podStartE2EDuration="1.719727659s" podCreationTimestamp="2025-01-30 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:07.689975188 +0000 UTC m=+1.279606892" watchObservedRunningTime="2025-01-30 14:01:07.719727659 +0000 UTC m=+1.309359362"
Jan 30 14:01:07.751312 kubelet[2525]: I0130 14:01:07.751186 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-5-054816032d" podStartSLOduration=1.75115814 podStartE2EDuration="1.75115814s" podCreationTimestamp="2025-01-30 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:07.722690996 +0000 UTC m=+1.312322700" watchObservedRunningTime="2025-01-30 14:01:07.75115814 +0000 UTC m=+1.340789846"
Jan 30 14:01:07.751584 kubelet[2525]: I0130 14:01:07.751405 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-5-054816032d" podStartSLOduration=1.751387313 podStartE2EDuration="1.751387313s" podCreationTimestamp="2025-01-30 14:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:07.751003678 +0000 UTC m=+1.340635442" watchObservedRunningTime="2025-01-30 14:01:07.751387313 +0000 UTC m=+1.341019028"
Jan 30 14:01:08.058874 sudo[1615]: pam_unix(sudo:session): session closed for user root
Jan 30 14:01:08.063693 sshd[1612]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:08.071888 systemd[1]: sshd@4-143.198.106.130:22-147.75.109.163:43052.service: Deactivated successfully.
Jan 30 14:01:08.075651 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 14:01:08.076493 systemd[1]: session-5.scope: Consumed 6.435s CPU time, 189.5M memory peak, 0B memory swap peak.
Jan 30 14:01:08.077570 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit.
Jan 30 14:01:08.079988 systemd-logind[1449]: Removed session 5.
Jan 30 14:01:08.624664 kubelet[2525]: E0130 14:01:08.624620 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:09.627025 kubelet[2525]: E0130 14:01:09.626318 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:09.851602 update_engine[1451]: I20250130 14:01:09.851478 1451 update_attempter.cc:509] Updating boot flags...
Jan 30 14:01:09.897259 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2593)
Jan 30 14:01:09.997500 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2592)
Jan 30 14:01:10.113001 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2592)
Jan 30 14:01:11.396488 kubelet[2525]: E0130 14:01:11.396424 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:11.632244 kubelet[2525]: E0130 14:01:11.630609 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:12.075197 kubelet[2525]: E0130 14:01:12.074653 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:12.633596 kubelet[2525]: E0130 14:01:12.633560 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:16.328896 systemd[1]: Started sshd@5-143.198.106.130:22-218.92.0.157:50911.service - OpenSSH per-connection server daemon (218.92.0.157:50911).
Jan 30 14:01:17.532170 sshd[2604]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root
Jan 30 14:01:18.986481 kubelet[2525]: E0130 14:01:18.986432 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:19.649713 kubelet[2525]: E0130 14:01:19.649648 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:19.863877 sshd[2602]: PAM: Permission denied for root from 218.92.0.157
Jan 30 14:01:21.199554 kubelet[2525]: I0130 14:01:21.199520 2525 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 14:01:21.203107 containerd[1472]: time="2025-01-30T14:01:21.202900111Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 14:01:21.203600 kubelet[2525]: I0130 14:01:21.203405 2525 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 14:01:21.761923 kubelet[2525]: I0130 14:01:21.761857 2525 topology_manager.go:215] "Topology Admit Handler" podUID="a5f32428-9eb8-4904-9bcb-24e2614a106b" podNamespace="kube-system" podName="kube-proxy-zncdp"
Jan 30 14:01:21.777655 systemd[1]: Created slice kubepods-besteffort-poda5f32428_9eb8_4904_9bcb_24e2614a106b.slice - libcontainer container kubepods-besteffort-poda5f32428_9eb8_4904_9bcb_24e2614a106b.slice.
Jan 30 14:01:21.779834 kubelet[2525]: I0130 14:01:21.779774 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a5f32428-9eb8-4904-9bcb-24e2614a106b-kube-proxy\") pod \"kube-proxy-zncdp\" (UID: \"a5f32428-9eb8-4904-9bcb-24e2614a106b\") " pod="kube-system/kube-proxy-zncdp"
Jan 30 14:01:21.779834 kubelet[2525]: I0130 14:01:21.779822 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5f32428-9eb8-4904-9bcb-24e2614a106b-lib-modules\") pod \"kube-proxy-zncdp\" (UID: \"a5f32428-9eb8-4904-9bcb-24e2614a106b\") " pod="kube-system/kube-proxy-zncdp"
Jan 30 14:01:21.779834 kubelet[2525]: I0130 14:01:21.779848 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5f32428-9eb8-4904-9bcb-24e2614a106b-xtables-lock\") pod \"kube-proxy-zncdp\" (UID: \"a5f32428-9eb8-4904-9bcb-24e2614a106b\") " pod="kube-system/kube-proxy-zncdp"
Jan 30 14:01:21.780343 kubelet[2525]: I0130 14:01:21.779870 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptqsk\" (UniqueName: \"kubernetes.io/projected/a5f32428-9eb8-4904-9bcb-24e2614a106b-kube-api-access-ptqsk\") pod \"kube-proxy-zncdp\" (UID: \"a5f32428-9eb8-4904-9bcb-24e2614a106b\") " pod="kube-system/kube-proxy-zncdp"
Jan 30 14:01:21.788101 kubelet[2525]: I0130 14:01:21.788049 2525 topology_manager.go:215] "Topology Admit Handler" podUID="6553fd48-0cff-439f-9dc3-d2527d792119" podNamespace="kube-flannel" podName="kube-flannel-ds-b55dn"
Jan 30 14:01:21.804781 systemd[1]: Created slice kubepods-burstable-pod6553fd48_0cff_439f_9dc3_d2527d792119.slice - libcontainer container kubepods-burstable-pod6553fd48_0cff_439f_9dc3_d2527d792119.slice.
Jan 30 14:01:21.880583 kubelet[2525]: I0130 14:01:21.880513 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6553fd48-0cff-439f-9dc3-d2527d792119-run\") pod \"kube-flannel-ds-b55dn\" (UID: \"6553fd48-0cff-439f-9dc3-d2527d792119\") " pod="kube-flannel/kube-flannel-ds-b55dn"
Jan 30 14:01:21.880583 kubelet[2525]: I0130 14:01:21.880587 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/6553fd48-0cff-439f-9dc3-d2527d792119-flannel-cfg\") pod \"kube-flannel-ds-b55dn\" (UID: \"6553fd48-0cff-439f-9dc3-d2527d792119\") " pod="kube-flannel/kube-flannel-ds-b55dn"
Jan 30 14:01:21.880958 kubelet[2525]: I0130 14:01:21.880628 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/6553fd48-0cff-439f-9dc3-d2527d792119-cni-plugin\") pod \"kube-flannel-ds-b55dn\" (UID: \"6553fd48-0cff-439f-9dc3-d2527d792119\") " pod="kube-flannel/kube-flannel-ds-b55dn"
Jan 30 14:01:21.880958 kubelet[2525]: I0130 14:01:21.880644 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/6553fd48-0cff-439f-9dc3-d2527d792119-cni\") pod \"kube-flannel-ds-b55dn\" (UID: \"6553fd48-0cff-439f-9dc3-d2527d792119\") " pod="kube-flannel/kube-flannel-ds-b55dn"
Jan 30 14:01:21.880958 kubelet[2525]: I0130 14:01:21.880915 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6553fd48-0cff-439f-9dc3-d2527d792119-xtables-lock\") pod \"kube-flannel-ds-b55dn\" (UID: \"6553fd48-0cff-439f-9dc3-d2527d792119\") " pod="kube-flannel/kube-flannel-ds-b55dn"
Jan 30 14:01:21.880958 kubelet[2525]: I0130 14:01:21.880943 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxlnc\" (UniqueName: \"kubernetes.io/projected/6553fd48-0cff-439f-9dc3-d2527d792119-kube-api-access-gxlnc\") pod \"kube-flannel-ds-b55dn\" (UID: \"6553fd48-0cff-439f-9dc3-d2527d792119\") " pod="kube-flannel/kube-flannel-ds-b55dn"
Jan 30 14:01:22.091687 kubelet[2525]: E0130 14:01:22.091536 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:22.093485 containerd[1472]: time="2025-01-30T14:01:22.092950977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zncdp,Uid:a5f32428-9eb8-4904-9bcb-24e2614a106b,Namespace:kube-system,Attempt:0,}"
Jan 30 14:01:22.112944 kubelet[2525]: E0130 14:01:22.112426 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:22.113614 containerd[1472]: time="2025-01-30T14:01:22.113579008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-b55dn,Uid:6553fd48-0cff-439f-9dc3-d2527d792119,Namespace:kube-flannel,Attempt:0,}"
Jan 30 14:01:22.148987 containerd[1472]: time="2025-01-30T14:01:22.148640075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:01:22.148987 containerd[1472]: time="2025-01-30T14:01:22.148876446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:01:22.148987 containerd[1472]: time="2025-01-30T14:01:22.148908439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:22.149409 containerd[1472]: time="2025-01-30T14:01:22.149135470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:22.171544 containerd[1472]: time="2025-01-30T14:01:22.171423289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:01:22.171544 containerd[1472]: time="2025-01-30T14:01:22.171504796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:01:22.172140 containerd[1472]: time="2025-01-30T14:01:22.171580935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:22.172140 containerd[1472]: time="2025-01-30T14:01:22.171867664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:22.185868 systemd[1]: Started cri-containerd-5cea29889dc21e5ed5f90645905fb7b644972bcc36ba62e390a21102c35090fc.scope - libcontainer container 5cea29889dc21e5ed5f90645905fb7b644972bcc36ba62e390a21102c35090fc.
Jan 30 14:01:22.205515 systemd[1]: Started cri-containerd-30119488ac58f5624eae3062fec453475ca86a99200bd49e12aa7e305fc3adc5.scope - libcontainer container 30119488ac58f5624eae3062fec453475ca86a99200bd49e12aa7e305fc3adc5.
Jan 30 14:01:22.252476 containerd[1472]: time="2025-01-30T14:01:22.251506855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zncdp,Uid:a5f32428-9eb8-4904-9bcb-24e2614a106b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cea29889dc21e5ed5f90645905fb7b644972bcc36ba62e390a21102c35090fc\""
Jan 30 14:01:22.258554 kubelet[2525]: E0130 14:01:22.255578 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:22.269286 containerd[1472]: time="2025-01-30T14:01:22.268095757Z" level=info msg="CreateContainer within sandbox \"5cea29889dc21e5ed5f90645905fb7b644972bcc36ba62e390a21102c35090fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 14:01:22.294115 containerd[1472]: time="2025-01-30T14:01:22.294062282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-b55dn,Uid:6553fd48-0cff-439f-9dc3-d2527d792119,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"30119488ac58f5624eae3062fec453475ca86a99200bd49e12aa7e305fc3adc5\""
Jan 30 14:01:22.297414 kubelet[2525]: E0130 14:01:22.296031 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:22.301765 containerd[1472]: time="2025-01-30T14:01:22.301614546Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Jan 30 14:01:22.302482 containerd[1472]: time="2025-01-30T14:01:22.302425369Z" level=info msg="CreateContainer within sandbox \"5cea29889dc21e5ed5f90645905fb7b644972bcc36ba62e390a21102c35090fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"56cc067e3ee1d7f97db86cb7dae161060b37cbcd61c1754d94b33a1479a0e3e8\""
Jan 30 14:01:22.303272 containerd[1472]: time="2025-01-30T14:01:22.303155222Z" level=info msg="StartContainer for \"56cc067e3ee1d7f97db86cb7dae161060b37cbcd61c1754d94b33a1479a0e3e8\""
Jan 30 14:01:22.344561 systemd[1]: Started cri-containerd-56cc067e3ee1d7f97db86cb7dae161060b37cbcd61c1754d94b33a1479a0e3e8.scope - libcontainer container 56cc067e3ee1d7f97db86cb7dae161060b37cbcd61c1754d94b33a1479a0e3e8.
Jan 30 14:01:22.390861 containerd[1472]: time="2025-01-30T14:01:22.390740974Z" level=info msg="StartContainer for \"56cc067e3ee1d7f97db86cb7dae161060b37cbcd61c1754d94b33a1479a0e3e8\" returns successfully"
Jan 30 14:01:22.676108 kubelet[2525]: E0130 14:01:22.676006 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:24.404616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950833342.mount: Deactivated successfully.
Jan 30 14:01:24.481611 containerd[1472]: time="2025-01-30T14:01:24.481525442Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:24.484290 containerd[1472]: time="2025-01-30T14:01:24.484153825Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
Jan 30 14:01:24.486516 containerd[1472]: time="2025-01-30T14:01:24.486418978Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:24.500294 containerd[1472]: time="2025-01-30T14:01:24.499785568Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:24.502406 containerd[1472]: time="2025-01-30T14:01:24.502339894Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.200587541s"
Jan 30 14:01:24.502653 containerd[1472]: time="2025-01-30T14:01:24.502628503Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Jan 30 14:01:24.510398 containerd[1472]: time="2025-01-30T14:01:24.510328834Z" level=info msg="CreateContainer within sandbox \"30119488ac58f5624eae3062fec453475ca86a99200bd49e12aa7e305fc3adc5\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Jan 30 14:01:24.536033 containerd[1472]: time="2025-01-30T14:01:24.535972436Z" level=info msg="CreateContainer within sandbox \"30119488ac58f5624eae3062fec453475ca86a99200bd49e12aa7e305fc3adc5\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"261f6dc392048ecd0caf8afd9eef1e6d79b3f9a604cd61defc8ed969a99acc57\""
Jan 30 14:01:24.537204 containerd[1472]: time="2025-01-30T14:01:24.537157750Z" level=info msg="StartContainer for \"261f6dc392048ecd0caf8afd9eef1e6d79b3f9a604cd61defc8ed969a99acc57\""
Jan 30 14:01:24.589186 systemd[1]: Started cri-containerd-261f6dc392048ecd0caf8afd9eef1e6d79b3f9a604cd61defc8ed969a99acc57.scope - libcontainer container 261f6dc392048ecd0caf8afd9eef1e6d79b3f9a604cd61defc8ed969a99acc57.
Jan 30 14:01:24.628452 containerd[1472]: time="2025-01-30T14:01:24.628364278Z" level=info msg="StartContainer for \"261f6dc392048ecd0caf8afd9eef1e6d79b3f9a604cd61defc8ed969a99acc57\" returns successfully"
Jan 30 14:01:24.629688 systemd[1]: cri-containerd-261f6dc392048ecd0caf8afd9eef1e6d79b3f9a604cd61defc8ed969a99acc57.scope: Deactivated successfully.
Jan 30 14:01:24.683765 kubelet[2525]: E0130 14:01:24.682762 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:24.688077 containerd[1472]: time="2025-01-30T14:01:24.687634214Z" level=info msg="shim disconnected" id=261f6dc392048ecd0caf8afd9eef1e6d79b3f9a604cd61defc8ed969a99acc57 namespace=k8s.io
Jan 30 14:01:24.688077 containerd[1472]: time="2025-01-30T14:01:24.687811673Z" level=warning msg="cleaning up after shim disconnected" id=261f6dc392048ecd0caf8afd9eef1e6d79b3f9a604cd61defc8ed969a99acc57 namespace=k8s.io
Jan 30 14:01:24.688077 containerd[1472]: time="2025-01-30T14:01:24.687846568Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:01:24.701209 kubelet[2525]: I0130 14:01:24.700939 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zncdp" podStartSLOduration=3.700917246 podStartE2EDuration="3.700917246s" podCreationTimestamp="2025-01-30 14:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:22.696560724 +0000 UTC m=+16.286192441" watchObservedRunningTime="2025-01-30 14:01:24.700917246 +0000 UTC m=+18.290548954"
Jan 30 14:01:25.250852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-261f6dc392048ecd0caf8afd9eef1e6d79b3f9a604cd61defc8ed969a99acc57-rootfs.mount: Deactivated successfully.
Jan 30 14:01:25.686368 kubelet[2525]: E0130 14:01:25.685672 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:25.689116 containerd[1472]: time="2025-01-30T14:01:25.688642248Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Jan 30 14:01:27.857731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount878221663.mount: Deactivated successfully.
Jan 30 14:01:29.715979 containerd[1472]: time="2025-01-30T14:01:29.714301373Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:29.715979 containerd[1472]: time="2025-01-30T14:01:29.715881911Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Jan 30 14:01:29.716864 containerd[1472]: time="2025-01-30T14:01:29.716815428Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:29.720359 containerd[1472]: time="2025-01-30T14:01:29.720274548Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:01:29.722267 containerd[1472]: time="2025-01-30T14:01:29.722204078Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.033483368s"
Jan 30 14:01:29.722445 containerd[1472]: time="2025-01-30T14:01:29.722428782Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Jan 30 14:01:29.727766 containerd[1472]: time="2025-01-30T14:01:29.727710545Z" level=info msg="CreateContainer within sandbox \"30119488ac58f5624eae3062fec453475ca86a99200bd49e12aa7e305fc3adc5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 30 14:01:29.758817 containerd[1472]: time="2025-01-30T14:01:29.758738692Z" level=info msg="CreateContainer within sandbox \"30119488ac58f5624eae3062fec453475ca86a99200bd49e12aa7e305fc3adc5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9509d1f8c59c5ad160883d1fe62aa0864aaf5b668c14cde94aad049c1cd736e2\""
Jan 30 14:01:29.769037 containerd[1472]: time="2025-01-30T14:01:29.768942221Z" level=info msg="StartContainer for \"9509d1f8c59c5ad160883d1fe62aa0864aaf5b668c14cde94aad049c1cd736e2\""
Jan 30 14:01:29.821560 systemd[1]: Started cri-containerd-9509d1f8c59c5ad160883d1fe62aa0864aaf5b668c14cde94aad049c1cd736e2.scope - libcontainer container 9509d1f8c59c5ad160883d1fe62aa0864aaf5b668c14cde94aad049c1cd736e2.
Jan 30 14:01:29.855692 systemd[1]: cri-containerd-9509d1f8c59c5ad160883d1fe62aa0864aaf5b668c14cde94aad049c1cd736e2.scope: Deactivated successfully.
Jan 30 14:01:29.859773 containerd[1472]: time="2025-01-30T14:01:29.859703876Z" level=info msg="StartContainer for \"9509d1f8c59c5ad160883d1fe62aa0864aaf5b668c14cde94aad049c1cd736e2\" returns successfully"
Jan 30 14:01:29.890092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9509d1f8c59c5ad160883d1fe62aa0864aaf5b668c14cde94aad049c1cd736e2-rootfs.mount: Deactivated successfully.
Jan 30 14:01:29.901046 kubelet[2525]: I0130 14:01:29.901006 2525 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 30 14:01:29.969970 containerd[1472]: time="2025-01-30T14:01:29.969695557Z" level=info msg="shim disconnected" id=9509d1f8c59c5ad160883d1fe62aa0864aaf5b668c14cde94aad049c1cd736e2 namespace=k8s.io
Jan 30 14:01:29.969970 containerd[1472]: time="2025-01-30T14:01:29.969771718Z" level=warning msg="cleaning up after shim disconnected" id=9509d1f8c59c5ad160883d1fe62aa0864aaf5b668c14cde94aad049c1cd736e2 namespace=k8s.io
Jan 30 14:01:29.969970 containerd[1472]: time="2025-01-30T14:01:29.969786803Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:01:30.011954 kubelet[2525]: I0130 14:01:30.010991 2525 topology_manager.go:215] "Topology Admit Handler" podUID="41101e2b-4d58-4662-b8cc-7b08a2ab085a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wqm5g"
Jan 30 14:01:30.011954 kubelet[2525]: I0130 14:01:30.011314 2525 topology_manager.go:215] "Topology Admit Handler" podUID="a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s2ggh"
Jan 30 14:01:30.026724 systemd[1]: Created slice kubepods-burstable-poda2ae7b8a_c27c_4e37_9f6f_0a77ccf40069.slice - libcontainer container kubepods-burstable-poda2ae7b8a_c27c_4e37_9f6f_0a77ccf40069.slice.
Jan 30 14:01:30.043596 systemd[1]: Created slice kubepods-burstable-pod41101e2b_4d58_4662_b8cc_7b08a2ab085a.slice - libcontainer container kubepods-burstable-pod41101e2b_4d58_4662_b8cc_7b08a2ab085a.slice.
Jan 30 14:01:30.148320 kubelet[2525]: I0130 14:01:30.148253 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g99jv\" (UniqueName: \"kubernetes.io/projected/41101e2b-4d58-4662-b8cc-7b08a2ab085a-kube-api-access-g99jv\") pod \"coredns-7db6d8ff4d-wqm5g\" (UID: \"41101e2b-4d58-4662-b8cc-7b08a2ab085a\") " pod="kube-system/coredns-7db6d8ff4d-wqm5g"
Jan 30 14:01:30.148320 kubelet[2525]: I0130 14:01:30.148343 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069-config-volume\") pod \"coredns-7db6d8ff4d-s2ggh\" (UID: \"a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069\") " pod="kube-system/coredns-7db6d8ff4d-s2ggh"
Jan 30 14:01:30.148847 kubelet[2525]: I0130 14:01:30.148390 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41101e2b-4d58-4662-b8cc-7b08a2ab085a-config-volume\") pod \"coredns-7db6d8ff4d-wqm5g\" (UID: \"41101e2b-4d58-4662-b8cc-7b08a2ab085a\") " pod="kube-system/coredns-7db6d8ff4d-wqm5g"
Jan 30 14:01:30.148847 kubelet[2525]: I0130 14:01:30.148427 2525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm5zb\" (UniqueName: \"kubernetes.io/projected/a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069-kube-api-access-tm5zb\") pod \"coredns-7db6d8ff4d-s2ggh\" (UID: \"a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069\") " pod="kube-system/coredns-7db6d8ff4d-s2ggh"
Jan 30 14:01:30.337526 kubelet[2525]: E0130 14:01:30.337368 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:30.340848 containerd[1472]: time="2025-01-30T14:01:30.340092140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s2ggh,Uid:a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069,Namespace:kube-system,Attempt:0,}"
Jan 30 14:01:30.360173 kubelet[2525]: E0130 14:01:30.360124 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:30.362242 containerd[1472]: time="2025-01-30T14:01:30.362151972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wqm5g,Uid:41101e2b-4d58-4662-b8cc-7b08a2ab085a,Namespace:kube-system,Attempt:0,}"
Jan 30 14:01:30.427322 containerd[1472]: time="2025-01-30T14:01:30.427247894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wqm5g,Uid:41101e2b-4d58-4662-b8cc-7b08a2ab085a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d5d260f2d0745ec0c3cf8c68bbd5bb6c2a36426fde572691c907ea1660ad878\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 30 14:01:30.427671 kubelet[2525]: E0130 14:01:30.427606 2525 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d5d260f2d0745ec0c3cf8c68bbd5bb6c2a36426fde572691c907ea1660ad878\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 30 14:01:30.427803 kubelet[2525]: E0130 14:01:30.427700 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d5d260f2d0745ec0c3cf8c68bbd5bb6c2a36426fde572691c907ea1660ad878\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wqm5g"
Jan 30 14:01:30.427803 kubelet[2525]: E0130 14:01:30.427729 2525 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d5d260f2d0745ec0c3cf8c68bbd5bb6c2a36426fde572691c907ea1660ad878\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wqm5g"
Jan 30 14:01:30.428092 kubelet[2525]: E0130 14:01:30.427798 2525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wqm5g_kube-system(41101e2b-4d58-4662-b8cc-7b08a2ab085a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wqm5g_kube-system(41101e2b-4d58-4662-b8cc-7b08a2ab085a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d5d260f2d0745ec0c3cf8c68bbd5bb6c2a36426fde572691c907ea1660ad878\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-wqm5g" podUID="41101e2b-4d58-4662-b8cc-7b08a2ab085a"
Jan 30 14:01:30.430001 containerd[1472]: time="2025-01-30T14:01:30.429884557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s2ggh,Uid:a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3016ac0c4924a7d63827bda859acaa8bd28813a0aa4d1ae17a6ab7d6fec712a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 30 14:01:30.430648 kubelet[2525]: E0130 14:01:30.430405 2525 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3016ac0c4924a7d63827bda859acaa8bd28813a0aa4d1ae17a6ab7d6fec712a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 30 14:01:30.430648 kubelet[2525]: E0130 14:01:30.430484 2525 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3016ac0c4924a7d63827bda859acaa8bd28813a0aa4d1ae17a6ab7d6fec712a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-s2ggh"
Jan 30 14:01:30.430648 kubelet[2525]: E0130 14:01:30.430510 2525 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3016ac0c4924a7d63827bda859acaa8bd28813a0aa4d1ae17a6ab7d6fec712a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-s2ggh"
Jan 30 14:01:30.430648 kubelet[2525]: E0130 14:01:30.430594 2525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s2ggh_kube-system(a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s2ggh_kube-system(a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3016ac0c4924a7d63827bda859acaa8bd28813a0aa4d1ae17a6ab7d6fec712a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-s2ggh" podUID="a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069"
Jan 30 14:01:30.709672 kubelet[2525]: E0130 14:01:30.709148 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:30.714749 containerd[1472]: time="2025-01-30T14:01:30.714679101Z" level=info msg="CreateContainer within sandbox \"30119488ac58f5624eae3062fec453475ca86a99200bd49e12aa7e305fc3adc5\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 30 14:01:30.761810 containerd[1472]: time="2025-01-30T14:01:30.760580125Z" level=info msg="CreateContainer within sandbox \"30119488ac58f5624eae3062fec453475ca86a99200bd49e12aa7e305fc3adc5\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"cfdde63a471246e1d642f99ce8adc2cfa841f0f29c900e1a29d770d90bf13e05\""
Jan 30 14:01:30.764027 containerd[1472]: time="2025-01-30T14:01:30.763030986Z" level=info msg="StartContainer for \"cfdde63a471246e1d642f99ce8adc2cfa841f0f29c900e1a29d770d90bf13e05\""
Jan 30 14:01:30.810802 systemd[1]: run-containerd-runc-k8s.io-cfdde63a471246e1d642f99ce8adc2cfa841f0f29c900e1a29d770d90bf13e05-runc.5AYOqd.mount: Deactivated successfully.
Jan 30 14:01:30.822517 systemd[1]: Started cri-containerd-cfdde63a471246e1d642f99ce8adc2cfa841f0f29c900e1a29d770d90bf13e05.scope - libcontainer container cfdde63a471246e1d642f99ce8adc2cfa841f0f29c900e1a29d770d90bf13e05.
Jan 30 14:01:30.868424 containerd[1472]: time="2025-01-30T14:01:30.868353427Z" level=info msg="StartContainer for \"cfdde63a471246e1d642f99ce8adc2cfa841f0f29c900e1a29d770d90bf13e05\" returns successfully"
Jan 30 14:01:31.714727 kubelet[2525]: E0130 14:01:31.714428 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:31.729611 kubelet[2525]: I0130 14:01:31.729527 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-b55dn" podStartSLOduration=3.304217565 podStartE2EDuration="10.729498344s" podCreationTimestamp="2025-01-30 14:01:21 +0000 UTC" firstStartedPulling="2025-01-30 14:01:22.298853597 +0000 UTC m=+15.888485280" lastFinishedPulling="2025-01-30 14:01:29.724134374 +0000 UTC m=+23.313766059" observedRunningTime="2025-01-30 14:01:31.72887374 +0000 UTC m=+25.318505445" watchObservedRunningTime="2025-01-30 14:01:31.729498344 +0000 UTC m=+25.319130040"
Jan 30 14:01:31.963112 systemd-networkd[1376]: flannel.1: Link UP
Jan 30 14:01:31.963125 systemd-networkd[1376]: flannel.1: Gained carrier
Jan 30 14:01:32.716047 kubelet[2525]: E0130 14:01:32.715927 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:33.731607 systemd-networkd[1376]: flannel.1: Gained IPv6LL
Jan 30 14:01:42.586719 kubelet[2525]: E0130 14:01:42.586079 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:42.588447 containerd[1472]: time="2025-01-30T14:01:42.587846580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wqm5g,Uid:41101e2b-4d58-4662-b8cc-7b08a2ab085a,Namespace:kube-system,Attempt:0,}"
Jan 30 14:01:42.589636 kubelet[2525]: E0130 14:01:42.588354 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:42.590605 containerd[1472]: time="2025-01-30T14:01:42.589761323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s2ggh,Uid:a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069,Namespace:kube-system,Attempt:0,}"
Jan 30 14:01:42.664066 systemd-networkd[1376]: cni0: Link UP
Jan 30 14:01:42.664083 systemd-networkd[1376]: cni0: Gained carrier
Jan 30 14:01:42.670404 systemd-networkd[1376]: cni0: Lost carrier
Jan 30 14:01:42.680965 systemd-networkd[1376]: vethc1e1883d: Link UP
Jan 30 14:01:42.684305 kernel: cni0: port 1(vethc1e1883d) entered blocking state
Jan 30 14:01:42.684538 kernel: cni0: port 1(vethc1e1883d) entered disabled state
Jan 30 14:01:42.684575 kernel: vethc1e1883d: entered allmulticast mode
Jan 30 14:01:42.686702 kernel: vethc1e1883d: entered promiscuous mode
Jan 30 14:01:42.688743 kernel: cni0: port 1(vethc1e1883d) entered blocking state
Jan 30 14:01:42.688888 kernel: cni0: port 1(vethc1e1883d) entered forwarding state
Jan 30 14:01:42.690830 kernel: cni0: port 1(vethc1e1883d) entered disabled state
Jan 30 14:01:42.692951 systemd-networkd[1376]: veth853ed8cb: Link UP
Jan 30 14:01:42.697285 kernel: cni0: port 2(veth853ed8cb) entered blocking state
Jan 30 14:01:42.697414 kernel: cni0: port 2(veth853ed8cb) entered disabled state
Jan 30 14:01:42.699267 kernel: veth853ed8cb: entered allmulticast mode
Jan 30 14:01:42.703103 kernel: veth853ed8cb: entered promiscuous mode
Jan 30 14:01:42.719065 kernel: cni0: port 2(veth853ed8cb) entered blocking state
Jan 30 14:01:42.719177 kernel: cni0: port 2(veth853ed8cb) entered forwarding state
Jan 30 14:01:42.720921 systemd-networkd[1376]: veth853ed8cb: Gained carrier
Jan 30 14:01:42.724311 kernel: cni0: port 1(vethc1e1883d) entered blocking state
Jan 30 14:01:42.724663 kernel: cni0: port 1(vethc1e1883d) entered forwarding state
Jan 30 14:01:42.722330 systemd-networkd[1376]: cni0: Gained carrier
Jan 30 14:01:42.723734 systemd-networkd[1376]: vethc1e1883d: Gained carrier
Jan 30 14:01:42.744806 containerd[1472]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000106628), "name":"cbr0", "type":"bridge"}
Jan 30 14:01:42.744806 containerd[1472]: delegateAdd: netconf sent to delegate plugin:
Jan 30 14:01:42.744806 containerd[1472]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Jan 30 14:01:42.744806 containerd[1472]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Jan 30 14:01:42.744806 containerd[1472]: delegateAdd: netconf sent to delegate plugin:
Jan 30 14:01:42.782741 containerd[1472]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
time="2025-01-30T14:01:42.782605600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:01:42.784073 containerd[1472]: time="2025-01-30T14:01:42.784000886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:01:42.785493 containerd[1472]: time="2025-01-30T14:01:42.785276418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:42.785493 containerd[1472]: time="2025-01-30T14:01:42.785409180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:42.798075 containerd[1472]: time="2025-01-30T14:01:42.797890077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:01:42.798542 containerd[1472]: time="2025-01-30T14:01:42.798477828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:01:42.798841 containerd[1472]: time="2025-01-30T14:01:42.798675433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:42.799128 containerd[1472]: time="2025-01-30T14:01:42.799057180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:42.830081 systemd[1]: Started cri-containerd-1dc70d50d23c1e8399fd956d6c4dd11f5838c567bf15229705516f4efd24b617.scope - libcontainer container 1dc70d50d23c1e8399fd956d6c4dd11f5838c567bf15229705516f4efd24b617.
Jan 30 14:01:42.841146 systemd[1]: Started cri-containerd-d653f87e74bbfd365c29f7c84c82d688ed5d50b68a1c4d99cb2d8340797b0115.scope - libcontainer container d653f87e74bbfd365c29f7c84c82d688ed5d50b68a1c4d99cb2d8340797b0115.
Jan 30 14:01:42.920881 containerd[1472]: time="2025-01-30T14:01:42.920827579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wqm5g,Uid:41101e2b-4d58-4662-b8cc-7b08a2ab085a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dc70d50d23c1e8399fd956d6c4dd11f5838c567bf15229705516f4efd24b617\""
Jan 30 14:01:42.925140 kubelet[2525]: E0130 14:01:42.924703 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:42.939183 containerd[1472]: time="2025-01-30T14:01:42.938814302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s2ggh,Uid:a2ae7b8a-c27c-4e37-9f6f-0a77ccf40069,Namespace:kube-system,Attempt:0,} returns sandbox id \"d653f87e74bbfd365c29f7c84c82d688ed5d50b68a1c4d99cb2d8340797b0115\""
Jan 30 14:01:42.958568 kubelet[2525]: E0130 14:01:42.958513 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:42.967870 containerd[1472]: time="2025-01-30T14:01:42.967804746Z" level=info msg="CreateContainer within sandbox \"1dc70d50d23c1e8399fd956d6c4dd11f5838c567bf15229705516f4efd24b617\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 14:01:42.970411 containerd[1472]: time="2025-01-30T14:01:42.970356699Z" level=info msg="CreateContainer within sandbox \"d653f87e74bbfd365c29f7c84c82d688ed5d50b68a1c4d99cb2d8340797b0115\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 14:01:43.026381 containerd[1472]: time="2025-01-30T14:01:43.026302604Z" level=info msg="CreateContainer within sandbox \"d653f87e74bbfd365c29f7c84c82d688ed5d50b68a1c4d99cb2d8340797b0115\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce332bf387b1b4b5f14f1f5b111f2836e7520bbd687ceb79c128da552d10941c\""
Jan 30 14:01:43.027283 containerd[1472]: time="2025-01-30T14:01:43.027092314Z" level=info msg="StartContainer for \"ce332bf387b1b4b5f14f1f5b111f2836e7520bbd687ceb79c128da552d10941c\""
Jan 30 14:01:43.028975 containerd[1472]: time="2025-01-30T14:01:43.028917894Z" level=info msg="CreateContainer within sandbox \"1dc70d50d23c1e8399fd956d6c4dd11f5838c567bf15229705516f4efd24b617\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2cbb82bb8ac311db1451c859a4041ffcba104b0ac0228984024c07744816fc21\""
Jan 30 14:01:43.031602 containerd[1472]: time="2025-01-30T14:01:43.029964063Z" level=info msg="StartContainer for \"2cbb82bb8ac311db1451c859a4041ffcba104b0ac0228984024c07744816fc21\""
Jan 30 14:01:43.078512 systemd[1]: Started cri-containerd-2cbb82bb8ac311db1451c859a4041ffcba104b0ac0228984024c07744816fc21.scope - libcontainer container 2cbb82bb8ac311db1451c859a4041ffcba104b0ac0228984024c07744816fc21.
Jan 30 14:01:43.079805 systemd[1]: Started cri-containerd-ce332bf387b1b4b5f14f1f5b111f2836e7520bbd687ceb79c128da552d10941c.scope - libcontainer container ce332bf387b1b4b5f14f1f5b111f2836e7520bbd687ceb79c128da552d10941c.
Jan 30 14:01:43.142342 containerd[1472]: time="2025-01-30T14:01:43.142148840Z" level=info msg="StartContainer for \"2cbb82bb8ac311db1451c859a4041ffcba104b0ac0228984024c07744816fc21\" returns successfully"
Jan 30 14:01:43.143257 containerd[1472]: time="2025-01-30T14:01:43.142493415Z" level=info msg="StartContainer for \"ce332bf387b1b4b5f14f1f5b111f2836e7520bbd687ceb79c128da552d10941c\" returns successfully"
Jan 30 14:01:43.790606 kubelet[2525]: E0130 14:01:43.790462 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:43.792577 kubelet[2525]: E0130 14:01:43.792474 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:43.868263 kubelet[2525]: I0130 14:01:43.868176 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-s2ggh" podStartSLOduration=22.868151501 podStartE2EDuration="22.868151501s" podCreationTimestamp="2025-01-30 14:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:43.829987164 +0000 UTC m=+37.419618872" watchObservedRunningTime="2025-01-30 14:01:43.868151501 +0000 UTC m=+37.457783263"
Jan 30 14:01:44.035621 systemd-networkd[1376]: vethc1e1883d: Gained IPv6LL
Jan 30 14:01:44.612027 systemd-networkd[1376]: veth853ed8cb: Gained IPv6LL
Jan 30 14:01:44.675438 systemd-networkd[1376]: cni0: Gained IPv6LL
Jan 30 14:01:44.795280 kubelet[2525]: E0130 14:01:44.795208 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:45.797856 kubelet[2525]: E0130 14:01:45.797298 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:50.340069 kubelet[2525]: E0130 14:01:50.339190 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:50.371724 kubelet[2525]: I0130 14:01:50.371663 2525 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wqm5g" podStartSLOduration=29.371640784 podStartE2EDuration="29.371640784s" podCreationTimestamp="2025-01-30 14:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:43.870528057 +0000 UTC m=+37.460159762" watchObservedRunningTime="2025-01-30 14:01:50.371640784 +0000 UTC m=+43.961272488"
Jan 30 14:01:50.809327 kubelet[2525]: E0130 14:01:50.809209 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:57.816073 systemd[1]: Started sshd@6-143.198.106.130:22-147.75.109.163:44686.service - OpenSSH per-connection server daemon (147.75.109.163:44686).
Jan 30 14:01:57.888027 sshd[3493]: Accepted publickey for core from 147.75.109.163 port 44686 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:57.890449 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:57.897470 systemd-logind[1449]: New session 6 of user core.
Jan 30 14:01:57.903787 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 14:01:58.071106 sshd[3493]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:58.078186 systemd[1]: sshd@6-143.198.106.130:22-147.75.109.163:44686.service: Deactivated successfully.
Jan 30 14:01:58.080641 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 14:01:58.081749 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit.
Jan 30 14:01:58.082861 systemd-logind[1449]: Removed session 6.
Jan 30 14:02:03.095109 systemd[1]: Started sshd@7-143.198.106.130:22-147.75.109.163:44702.service - OpenSSH per-connection server daemon (147.75.109.163:44702).
Jan 30 14:02:03.162250 sshd[3529]: Accepted publickey for core from 147.75.109.163 port 44702 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:03.163434 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:03.171266 systemd-logind[1449]: New session 7 of user core.
Jan 30 14:02:03.181394 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 14:02:03.417670 sshd[3529]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:03.424102 systemd[1]: sshd@7-143.198.106.130:22-147.75.109.163:44702.service: Deactivated successfully.
Jan 30 14:02:03.430445 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 14:02:03.433106 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit.
Jan 30 14:02:03.437841 systemd-logind[1449]: Removed session 7.
Jan 30 14:02:08.440951 systemd[1]: Started sshd@8-143.198.106.130:22-147.75.109.163:38122.service - OpenSSH per-connection server daemon (147.75.109.163:38122).
Jan 30 14:02:08.512288 sshd[3566]: Accepted publickey for core from 147.75.109.163 port 38122 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:08.514727 sshd[3566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:08.526009 systemd-logind[1449]: New session 8 of user core.
Jan 30 14:02:08.537633 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 14:02:08.717568 sshd[3566]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:08.724015 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit.
Jan 30 14:02:08.724926 systemd[1]: sshd@8-143.198.106.130:22-147.75.109.163:38122.service: Deactivated successfully.
Jan 30 14:02:08.729099 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 14:02:08.734265 systemd-logind[1449]: Removed session 8.
Jan 30 14:02:13.740685 systemd[1]: Started sshd@9-143.198.106.130:22-147.75.109.163:38128.service - OpenSSH per-connection server daemon (147.75.109.163:38128).
Jan 30 14:02:13.804291 sshd[3601]: Accepted publickey for core from 147.75.109.163 port 38128 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:13.807083 sshd[3601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:13.817344 systemd-logind[1449]: New session 9 of user core.
Jan 30 14:02:13.820592 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 14:02:13.978643 sshd[3601]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:13.993585 systemd[1]: sshd@9-143.198.106.130:22-147.75.109.163:38128.service: Deactivated successfully.
Jan 30 14:02:13.997238 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 14:02:14.001814 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit.
Jan 30 14:02:14.010789 systemd[1]: Started sshd@10-143.198.106.130:22-147.75.109.163:38140.service - OpenSSH per-connection server daemon (147.75.109.163:38140).
Jan 30 14:02:14.014289 systemd-logind[1449]: Removed session 9.
Jan 30 14:02:14.073766 sshd[3615]: Accepted publickey for core from 147.75.109.163 port 38140 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:14.075700 sshd[3615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:14.082630 systemd-logind[1449]: New session 10 of user core.
Jan 30 14:02:14.090580 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 14:02:14.295053 sshd[3615]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:14.311765 systemd[1]: sshd@10-143.198.106.130:22-147.75.109.163:38140.service: Deactivated successfully.
Jan 30 14:02:14.316645 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 14:02:14.319249 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit.
Jan 30 14:02:14.331754 systemd[1]: Started sshd@11-143.198.106.130:22-147.75.109.163:38152.service - OpenSSH per-connection server daemon (147.75.109.163:38152).
Jan 30 14:02:14.333996 systemd-logind[1449]: Removed session 10.
Jan 30 14:02:14.398059 sshd[3625]: Accepted publickey for core from 147.75.109.163 port 38152 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:14.399486 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:14.408143 systemd-logind[1449]: New session 11 of user core.
Jan 30 14:02:14.418628 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 14:02:14.580096 sshd[3625]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:14.586694 systemd[1]: sshd@11-143.198.106.130:22-147.75.109.163:38152.service: Deactivated successfully.
Jan 30 14:02:14.587800 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit.
Jan 30 14:02:14.592790 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 14:02:14.594637 systemd-logind[1449]: Removed session 11.
Jan 30 14:02:19.600691 systemd[1]: Started sshd@12-143.198.106.130:22-147.75.109.163:47626.service - OpenSSH per-connection server daemon (147.75.109.163:47626).
Jan 30 14:02:19.658319 sshd[3659]: Accepted publickey for core from 147.75.109.163 port 47626 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:19.660542 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:19.668571 systemd-logind[1449]: New session 12 of user core.
Jan 30 14:02:19.677689 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 14:02:19.847922 sshd[3659]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:19.854914 systemd[1]: sshd@12-143.198.106.130:22-147.75.109.163:47626.service: Deactivated successfully.
Jan 30 14:02:19.860365 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 14:02:19.862297 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit.
Jan 30 14:02:19.864564 systemd-logind[1449]: Removed session 12.
Jan 30 14:02:24.878803 systemd[1]: Started sshd@13-143.198.106.130:22-147.75.109.163:47636.service - OpenSSH per-connection server daemon (147.75.109.163:47636).
Jan 30 14:02:24.932797 sshd[3694]: Accepted publickey for core from 147.75.109.163 port 47636 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:24.936594 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:24.944324 systemd-logind[1449]: New session 13 of user core.
Jan 30 14:02:24.949978 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 14:02:25.117681 sshd[3694]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:25.131583 systemd[1]: sshd@13-143.198.106.130:22-147.75.109.163:47636.service: Deactivated successfully.
Jan 30 14:02:25.137681 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 14:02:25.141643 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit.
Jan 30 14:02:25.152376 systemd[1]: Started sshd@14-143.198.106.130:22-147.75.109.163:47640.service - OpenSSH per-connection server daemon (147.75.109.163:47640).
Jan 30 14:02:25.154130 systemd-logind[1449]: Removed session 13.
Jan 30 14:02:25.207457 sshd[3706]: Accepted publickey for core from 147.75.109.163 port 47640 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:25.210206 sshd[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:25.216872 systemd-logind[1449]: New session 14 of user core.
Jan 30 14:02:25.222559 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 14:02:25.580168 sshd[3706]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:25.590825 systemd[1]: sshd@14-143.198.106.130:22-147.75.109.163:47640.service: Deactivated successfully.
Jan 30 14:02:25.593908 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 14:02:25.596732 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit.
Jan 30 14:02:25.601834 systemd[1]: Started sshd@15-143.198.106.130:22-147.75.109.163:47654.service - OpenSSH per-connection server daemon (147.75.109.163:47654).
Jan 30 14:02:25.606296 systemd-logind[1449]: Removed session 14.
Jan 30 14:02:25.688449 sshd[3717]: Accepted publickey for core from 147.75.109.163 port 47654 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:25.691869 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:25.700787 systemd-logind[1449]: New session 15 of user core.
Jan 30 14:02:25.707611 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 14:02:27.725270 sshd[3717]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:27.739477 systemd[1]: sshd@15-143.198.106.130:22-147.75.109.163:47654.service: Deactivated successfully.
Jan 30 14:02:27.745303 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 14:02:27.747038 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit.
Jan 30 14:02:27.757851 systemd[1]: Started sshd@16-143.198.106.130:22-147.75.109.163:43226.service - OpenSSH per-connection server daemon (147.75.109.163:43226).
Jan 30 14:02:27.762562 systemd-logind[1449]: Removed session 15.
Jan 30 14:02:27.832343 sshd[3759]: Accepted publickey for core from 147.75.109.163 port 43226 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:27.835200 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:27.847392 systemd-logind[1449]: New session 16 of user core.
Jan 30 14:02:27.853844 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 14:02:28.248378 sshd[3759]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:28.261117 systemd[1]: sshd@16-143.198.106.130:22-147.75.109.163:43226.service: Deactivated successfully.
Jan 30 14:02:28.264425 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 14:02:28.269881 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Jan 30 14:02:28.275869 systemd[1]: Started sshd@17-143.198.106.130:22-147.75.109.163:43230.service - OpenSSH per-connection server daemon (147.75.109.163:43230).
Jan 30 14:02:28.279721 systemd-logind[1449]: Removed session 16.
Jan 30 14:02:28.350457 sshd[3770]: Accepted publickey for core from 147.75.109.163 port 43230 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:28.353026 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:28.362634 systemd-logind[1449]: New session 17 of user core.
Jan 30 14:02:28.371961 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 14:02:28.532911 sshd[3770]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:28.538176 systemd[1]: sshd@17-143.198.106.130:22-147.75.109.163:43230.service: Deactivated successfully.
Jan 30 14:02:28.541162 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 14:02:28.544245 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Jan 30 14:02:28.545843 systemd-logind[1449]: Removed session 17.
Jan 30 14:02:32.589310 kubelet[2525]: E0130 14:02:32.587597 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:02:33.555429 systemd[1]: Started sshd@18-143.198.106.130:22-147.75.109.163:43236.service - OpenSSH per-connection server daemon (147.75.109.163:43236).
Jan 30 14:02:33.604261 sshd[3804]: Accepted publickey for core from 147.75.109.163 port 43236 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:33.607044 sshd[3804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:33.615254 systemd-logind[1449]: New session 18 of user core.
Jan 30 14:02:33.623711 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 14:02:33.799701 sshd[3804]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:33.804836 systemd[1]: sshd@18-143.198.106.130:22-147.75.109.163:43236.service: Deactivated successfully.
Jan 30 14:02:33.808175 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 14:02:33.820211 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Jan 30 14:02:33.822644 systemd-logind[1449]: Removed session 18.
Jan 30 14:02:37.586173 kubelet[2525]: E0130 14:02:37.586135 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:02:38.835607 systemd[1]: Started sshd@19-143.198.106.130:22-147.75.109.163:57814.service - OpenSSH per-connection server daemon (147.75.109.163:57814).
Jan 30 14:02:38.895892 sshd[3840]: Accepted publickey for core from 147.75.109.163 port 57814 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:38.898861 sshd[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:38.909040 systemd-logind[1449]: New session 19 of user core.
Jan 30 14:02:38.918554 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 14:02:39.086370 sshd[3840]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:39.092529 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Jan 30 14:02:39.093861 systemd[1]: sshd@19-143.198.106.130:22-147.75.109.163:57814.service: Deactivated successfully.
Jan 30 14:02:39.098018 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 14:02:39.101540 systemd-logind[1449]: Removed session 19.
Jan 30 14:02:43.788722 systemd[1]: Started sshd@20-143.198.106.130:22-193.32.162.139:58082.service - OpenSSH per-connection server daemon (193.32.162.139:58082).
Jan 30 14:02:44.111035 systemd[1]: Started sshd@21-143.198.106.130:22-147.75.109.163:57828.service - OpenSSH per-connection server daemon (147.75.109.163:57828).
Jan 30 14:02:44.176660 sshd[3876]: Accepted publickey for core from 147.75.109.163 port 57828 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:44.179886 sshd[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:44.191305 systemd-logind[1449]: New session 20 of user core.
Jan 30 14:02:44.199157 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 14:02:44.360035 sshd[3876]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:44.366825 systemd[1]: sshd@21-143.198.106.130:22-147.75.109.163:57828.service: Deactivated successfully.
Jan 30 14:02:44.371690 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 14:02:44.373051 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit.
Jan 30 14:02:44.375137 systemd-logind[1449]: Removed session 20.
Jan 30 14:02:44.509430 sshd[3873]: Invalid user csadmin from 193.32.162.139 port 58082
Jan 30 14:02:44.587118 kubelet[2525]: E0130 14:02:44.586712 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:02:44.676366 sshd[3873]: Connection closed by invalid user csadmin 193.32.162.139 port 58082 [preauth]
Jan 30 14:02:44.679662 systemd[1]: sshd@20-143.198.106.130:22-193.32.162.139:58082.service: Deactivated successfully.
Jan 30 14:02:49.379815 systemd[1]: Started sshd@22-143.198.106.130:22-147.75.109.163:54526.service - OpenSSH per-connection server daemon (147.75.109.163:54526).
Jan 30 14:02:49.440317 sshd[3911]: Accepted publickey for core from 147.75.109.163 port 54526 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:02:49.442315 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:02:49.451403 systemd-logind[1449]: New session 21 of user core.
Jan 30 14:02:49.455588 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 14:02:49.628787 sshd[3911]: pam_unix(sshd:session): session closed for user core
Jan 30 14:02:49.637712 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit.
Jan 30 14:02:49.639016 systemd[1]: sshd@22-143.198.106.130:22-147.75.109.163:54526.service: Deactivated successfully.
Jan 30 14:02:49.645624 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 14:02:49.650078 systemd-logind[1449]: Removed session 21.
Jan 30 14:02:52.587284 kubelet[2525]: E0130 14:02:52.586985 2525 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"