Jan 16 09:03:31.010948 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 16 09:03:31.010984 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 16 09:03:31.011031 kernel: BIOS-provided physical RAM map: Jan 16 09:03:31.011042 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 16 09:03:31.011051 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 16 09:03:31.011062 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 16 09:03:31.011074 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jan 16 09:03:31.011085 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jan 16 09:03:31.011096 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 16 09:03:31.011109 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 16 09:03:31.011116 kernel: NX (Execute Disable) protection: active Jan 16 09:03:31.011123 kernel: APIC: Static calls initialized Jan 16 09:03:31.011154 kernel: SMBIOS 2.8 present. Jan 16 09:03:31.011161 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 16 09:03:31.011169 kernel: Hypervisor detected: KVM Jan 16 09:03:31.011179 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 16 09:03:31.011187 kernel: kvm-clock: using sched offset of 3782449659 cycles Jan 16 09:03:31.011221 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 16 09:03:31.011229 kernel: tsc: Detected 1995.311 MHz processor Jan 16 09:03:31.011236 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 16 09:03:31.011244 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 16 09:03:31.011262 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jan 16 09:03:31.011270 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 16 09:03:31.011296 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 16 09:03:31.011307 kernel: ACPI: Early table checksum verification disabled Jan 16 09:03:31.011314 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jan 16 09:03:31.011322 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:03:31.011330 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:03:31.011341 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:03:31.011353 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 16 09:03:31.011364 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:03:31.011376 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:03:31.011388 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 16 09:03:31.011403 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 09:03:31.011411 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 16 09:03:31.011418 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 16 09:03:31.011425 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 16 09:03:31.011432 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 16 09:03:31.011438 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 16 09:03:31.011446 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 16 09:03:31.011460 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 16 09:03:31.011467 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 16 09:03:31.011475 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 16 09:03:31.011487 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 16 09:03:31.011500 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 16 09:03:31.011514 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jan 16 09:03:31.011527 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jan 16 09:03:31.011537 kernel: Zone ranges: Jan 16 09:03:31.011545 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 16 09:03:31.011552 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jan 16 09:03:31.011560 kernel: Normal empty Jan 16 09:03:31.011568 kernel: Movable zone start for each node Jan 16 09:03:31.011580 kernel: Early memory node ranges Jan 16 09:03:31.011588 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 16 09:03:31.011595 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jan 16 09:03:31.011603 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jan 16 09:03:31.011613 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 16 09:03:31.011620 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 16 09:03:31.011628 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jan 16 09:03:31.011635 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 16 09:03:31.011643 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 16 09:03:31.011650 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 16 09:03:31.011658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 16 09:03:31.011671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 16 09:03:31.011679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 16 09:03:31.011689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 16 09:03:31.011696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 16 09:03:31.011704 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 16 09:03:31.011711 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 16 09:03:31.011718 kernel: TSC deadline timer available Jan 16 09:03:31.011726 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 16 09:03:31.011733 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 16 09:03:31.011740 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 16 09:03:31.011748 kernel: Booting paravirtualized kernel on KVM Jan 16 09:03:31.011759 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 16 09:03:31.011766 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 16 09:03:31.011774 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 16 09:03:31.011781 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 16 09:03:31.011788 kernel: pcpu-alloc: [0] 0 1 Jan 16 09:03:31.011796 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 16 09:03:31.011805 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 16 09:03:31.011813 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 16 09:03:31.011823 kernel: random: crng init done Jan 16 09:03:31.011831 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 16 09:03:31.011843 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 16 09:03:31.011850 kernel: Fallback order for Node 0: 0 Jan 16 09:03:31.011858 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jan 16 09:03:31.011866 kernel: Policy zone: DMA32 Jan 16 09:03:31.011873 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 16 09:03:31.011881 kernel: Memory: 1971192K/2096600K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Jan 16 09:03:31.011889 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 16 09:03:31.011899 kernel: Kernel/User page tables isolation: enabled Jan 16 09:03:31.011907 kernel: ftrace: allocating 37918 entries in 149 pages Jan 16 09:03:31.011914 kernel: ftrace: allocated 149 pages with 4 groups Jan 16 09:03:31.011921 kernel: Dynamic Preempt: voluntary Jan 16 09:03:31.011929 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 16 09:03:31.011937 kernel: rcu: RCU event tracing is enabled. Jan 16 09:03:31.011945 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 16 09:03:31.011952 kernel: Trampoline variant of Tasks RCU enabled. Jan 16 09:03:31.011959 kernel: Rude variant of Tasks RCU enabled. Jan 16 09:03:31.011970 kernel: Tracing variant of Tasks RCU enabled. Jan 16 09:03:31.011978 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 16 09:03:31.011985 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 16 09:03:31.011992 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 16 09:03:31.012000 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
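The BIOS-e820 map above is the firmware's view of physical memory; everything the kernel can hand out is the sum of the "usable" ranges (here 0x0-0x9fbff plus 0x100000-0x7ffd7fff, just under 2 GiB, matching the Memory: line). A minimal sketch of totaling that from captured dmesg text, assuming the lines keep the exact "BIOS-e820: [mem START-END] TYPE" shape shown in this log:

    import re

    # Matches kernel lines such as:
    #   BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable
    E820_RE = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

    def usable_bytes(dmesg_text: str) -> int:
        """Sum the sizes of all 'usable' e820 ranges (end address is inclusive)."""
        total = 0
        for start, end, kind in E820_RE.findall(dmesg_text):
            if kind == "usable":
                total += int(end, 16) - int(start, 16) + 1
        return total

    sample = "BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable"
    print(usable_bytes(sample) // (1 << 20), "MiB")  # ~2 GiB for this droplet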
Jan 16 09:03:31.012008 kernel: Console: colour VGA+ 80x25 Jan 16 09:03:31.012016 kernel: printk: console [tty0] enabled Jan 16 09:03:31.012023 kernel: printk: console [ttyS0] enabled Jan 16 09:03:31.012031 kernel: ACPI: Core revision 20230628 Jan 16 09:03:31.012041 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 16 09:03:31.012048 kernel: APIC: Switch to symmetric I/O mode setup Jan 16 09:03:31.012056 kernel: x2apic enabled Jan 16 09:03:31.012064 kernel: APIC: Switched APIC routing to: physical x2apic Jan 16 09:03:31.012071 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 16 09:03:31.012078 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c177478, max_idle_ns: 881590705666 ns Jan 16 09:03:31.012086 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995311) Jan 16 09:03:31.012093 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 16 09:03:31.012101 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 16 09:03:31.012120 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 16 09:03:31.012166 kernel: Spectre V2 : Mitigation: Retpolines Jan 16 09:03:31.012182 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 16 09:03:31.012193 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 16 09:03:31.012202 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 16 09:03:31.012210 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 16 09:03:31.012277 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 16 09:03:31.012286 kernel: MDS: Mitigation: Clear CPU buffers Jan 16 09:03:31.012295 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 16 09:03:31.012306 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 16 09:03:31.012314 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 16 09:03:31.012323 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 16 09:03:31.012331 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 16 09:03:31.012339 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 16 09:03:31.012348 kernel: Freeing SMP alternatives memory: 32K Jan 16 09:03:31.012355 kernel: pid_max: default: 32768 minimum: 301 Jan 16 09:03:31.012367 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 16 09:03:31.012375 kernel: landlock: Up and running. Jan 16 09:03:31.012383 kernel: SELinux: Initializing. Jan 16 09:03:31.012391 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 16 09:03:31.012403 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 16 09:03:31.012412 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 16 09:03:31.012420 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:03:31.012428 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:03:31.012436 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 16 09:03:31.012447 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
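Each Spectre/MDS/MMIO line above has a matching file under /sys/devices/system/cpu/vulnerabilities, which is the stable way to re-check mitigation state after boot rather than grepping dmesg. A read-only sketch, assuming only that standard sysfs directory:

    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    # Each file is named for a vulnerability (spectre_v2, mds, ...) and holds a
    # one-line status such as "Mitigation: Retpolines" or "Not affected".
    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name:24} {entry.read_text().strip()}")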
Jan 16 09:03:31.012455 kernel: signal: max sigframe size: 1776 Jan 16 09:03:31.012464 kernel: rcu: Hierarchical SRCU implementation. Jan 16 09:03:31.012472 kernel: rcu: Max phase no-delay instances is 400. Jan 16 09:03:31.012480 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 16 09:03:31.012488 kernel: smp: Bringing up secondary CPUs ... Jan 16 09:03:31.012496 kernel: smpboot: x86: Booting SMP configuration: Jan 16 09:03:31.012504 kernel: .... node #0, CPUs: #1 Jan 16 09:03:31.012512 kernel: smp: Brought up 1 node, 2 CPUs Jan 16 09:03:31.012523 kernel: smpboot: Max logical packages: 1 Jan 16 09:03:31.012531 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS) Jan 16 09:03:31.012539 kernel: devtmpfs: initialized Jan 16 09:03:31.012547 kernel: x86/mm: Memory block size: 128MB Jan 16 09:03:31.012555 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 16 09:03:31.012563 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 16 09:03:31.012571 kernel: pinctrl core: initialized pinctrl subsystem Jan 16 09:03:31.012579 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 16 09:03:31.012587 kernel: audit: initializing netlink subsys (disabled) Jan 16 09:03:31.012598 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 16 09:03:31.012606 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 16 09:03:31.012614 kernel: audit: type=2000 audit(1737018209.913:1): state=initialized audit_enabled=0 res=1 Jan 16 09:03:31.012622 kernel: cpuidle: using governor menu Jan 16 09:03:31.012630 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 16 09:03:31.012638 kernel: dca service started, version 1.12.1 Jan 16 09:03:31.012650 kernel: PCI: Using configuration type 1 for base access Jan 16 09:03:31.012665 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
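Several clocksources register during this boot (kvm-clock, refined-jiffies, jiffies, and later hpet and tsc); the kernel's final pick and the remaining candidates are exposed in sysfs. A small sketch reading both:

    from pathlib import Path

    CS = Path("/sys/devices/system/clocksource/clocksource0")

    # On this guest "current" should read kvm-clock once the switch logged
    # later in this boot has happened; "available" lists the alternatives.
    print("current:  ", (CS / "current_clocksource").read_text().strip())
    print("available:", (CS / "available_clocksource").read_text().strip())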
Jan 16 09:03:31.012679 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 16 09:03:31.012697 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 16 09:03:31.012710 kernel: ACPI: Added _OSI(Module Device) Jan 16 09:03:31.012720 kernel: ACPI: Added _OSI(Processor Device) Jan 16 09:03:31.012728 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 16 09:03:31.012736 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 16 09:03:31.012744 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 16 09:03:31.012752 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 16 09:03:31.012760 kernel: ACPI: Interpreter enabled Jan 16 09:03:31.012768 kernel: ACPI: PM: (supports S0 S5) Jan 16 09:03:31.012776 kernel: ACPI: Using IOAPIC for interrupt routing Jan 16 09:03:31.012787 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 16 09:03:31.012796 kernel: PCI: Using E820 reservations for host bridge windows Jan 16 09:03:31.012810 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 16 09:03:31.012823 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 16 09:03:31.013042 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 16 09:03:31.013195 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 16 09:03:31.013291 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 16 09:03:31.013306 kernel: acpiphp: Slot [3] registered Jan 16 09:03:31.013314 kernel: acpiphp: Slot [4] registered Jan 16 09:03:31.013323 kernel: acpiphp: Slot [5] registered Jan 16 09:03:31.013331 kernel: acpiphp: Slot [6] registered Jan 16 09:03:31.013340 kernel: acpiphp: Slot [7] registered Jan 16 09:03:31.013348 kernel: acpiphp: Slot [8] registered Jan 16 09:03:31.013356 kernel: acpiphp: Slot [9] registered Jan 16 09:03:31.013365 kernel: acpiphp: Slot [10] registered Jan 16 09:03:31.013373 kernel: acpiphp: Slot [11] registered Jan 16 09:03:31.013384 kernel: acpiphp: Slot [12] registered Jan 16 09:03:31.013396 kernel: acpiphp: Slot [13] registered Jan 16 09:03:31.013410 kernel: acpiphp: Slot [14] registered Jan 16 09:03:31.013422 kernel: acpiphp: Slot [15] registered Jan 16 09:03:31.013430 kernel: acpiphp: Slot [16] registered Jan 16 09:03:31.013438 kernel: acpiphp: Slot [17] registered Jan 16 09:03:31.013446 kernel: acpiphp: Slot [18] registered Jan 16 09:03:31.013454 kernel: acpiphp: Slot [19] registered Jan 16 09:03:31.013462 kernel: acpiphp: Slot [20] registered Jan 16 09:03:31.013473 kernel: acpiphp: Slot [21] registered Jan 16 09:03:31.013485 kernel: acpiphp: Slot [22] registered Jan 16 09:03:31.013498 kernel: acpiphp: Slot [23] registered Jan 16 09:03:31.013509 kernel: acpiphp: Slot [24] registered Jan 16 09:03:31.013521 kernel: acpiphp: Slot [25] registered Jan 16 09:03:31.013533 kernel: acpiphp: Slot [26] registered Jan 16 09:03:31.013545 kernel: acpiphp: Slot [27] registered Jan 16 09:03:31.013556 kernel: acpiphp: Slot [28] registered Jan 16 09:03:31.013567 kernel: acpiphp: Slot [29] registered Jan 16 09:03:31.013578 kernel: acpiphp: Slot [30] registered Jan 16 09:03:31.013600 kernel: acpiphp: Slot [31] registered Jan 16 09:03:31.013614 kernel: PCI host bridge to bus 0000:00 Jan 16 09:03:31.013756 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 16 09:03:31.013885 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
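Every 0000:00:xx.x function the root-bridge scan below discovers also appears as a directory under /sys/bus/pci/devices, carrying the same vendor:device IDs seen in the log (1af4 is Red Hat/virtio, 8086 is Intel). A hedged sketch enumerating them from sysfs:

    from pathlib import Path

    # Each PCI function is a directory like 0000:00:03.0; 'vendor' and 'device'
    # hold the IDs from the log (e.g. 1af4:1000 = virtio-net, 8086:7010 = PIIX3 IDE).
    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()  # e.g. 0x1af4
        device = (dev / "device").read_text().strip()  # e.g. 0x1000
        pclass = (dev / "class").read_text().strip()   # e.g. 0x020000 (Ethernet)
        print(dev.name, vendor, device, pclass)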
Jan 16 09:03:31.014009 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 16 09:03:31.014101 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 16 09:03:31.014207 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 16 09:03:31.014305 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 16 09:03:31.014458 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 16 09:03:31.014565 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 16 09:03:31.014710 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 16 09:03:31.014850 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 16 09:03:31.014964 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 16 09:03:31.015064 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 16 09:03:31.015210 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 16 09:03:31.015332 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 16 09:03:31.015451 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 16 09:03:31.015550 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 16 09:03:31.015707 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 16 09:03:31.015839 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 16 09:03:31.015940 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 16 09:03:31.016044 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 16 09:03:31.016214 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 16 09:03:31.016311 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 16 09:03:31.016410 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 16 09:03:31.016504 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 16 09:03:31.016595 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 16 09:03:31.016712 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 16 09:03:31.016814 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 16 09:03:31.016913 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 16 09:03:31.017035 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 16 09:03:31.017323 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 16 09:03:31.017447 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 16 09:03:31.017613 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 16 09:03:31.017754 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 16 09:03:31.017893 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 16 09:03:31.018068 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 16 09:03:31.018238 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 16 09:03:31.018357 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 16 09:03:31.018511 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 16 09:03:31.018681 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 16 09:03:31.018810 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 16 09:03:31.018941 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 16 09:03:31.019100 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Jan 16 09:03:31.019259 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 16 09:03:31.019369 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 16 09:03:31.019520 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 16 09:03:31.019717 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 16 09:03:31.019854 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 16 09:03:31.020009 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 16 09:03:31.020022 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 16 09:03:31.020037 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 16 09:03:31.020052 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 16 09:03:31.020065 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 16 09:03:31.020085 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 16 09:03:31.020101 kernel: iommu: Default domain type: Translated Jan 16 09:03:31.020111 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 16 09:03:31.020119 kernel: PCI: Using ACPI for IRQ routing Jan 16 09:03:31.020214 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 16 09:03:31.020236 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 16 09:03:31.020245 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jan 16 09:03:31.020403 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 16 09:03:31.020530 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 16 09:03:31.020663 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 16 09:03:31.020676 kernel: vgaarb: loaded Jan 16 09:03:31.020685 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 16 09:03:31.020693 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 16 09:03:31.020702 kernel: clocksource: Switched to clocksource kvm-clock Jan 16 09:03:31.020710 kernel: VFS: Disk quotas dquot_6.6.0 Jan 16 09:03:31.020719 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 16 09:03:31.020734 kernel: pnp: PnP ACPI init Jan 16 09:03:31.020748 kernel: pnp: PnP ACPI: found 4 devices Jan 16 09:03:31.020767 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 16 09:03:31.020781 kernel: NET: Registered PF_INET protocol family Jan 16 09:03:31.020795 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 16 09:03:31.020804 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 16 09:03:31.020812 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 16 09:03:31.020820 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 16 09:03:31.020829 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 16 09:03:31.020837 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 16 09:03:31.020848 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 16 09:03:31.020857 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 16 09:03:31.020866 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 16 09:03:31.020874 kernel: NET: Registered PF_XDP protocol family Jan 16 09:03:31.021000 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 16 09:03:31.021555 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 16 09:03:31.021683 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 16 09:03:31.021794 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 16 09:03:31.021913 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 16 09:03:31.022207 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 16 09:03:31.022358 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 16 09:03:31.022379 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 16 09:03:31.022530 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 35508 usecs Jan 16 09:03:31.022550 kernel: PCI: CLS 0 bytes, default 64 Jan 16 09:03:31.022564 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 16 09:03:31.022579 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c177478, max_idle_ns: 881590705666 ns Jan 16 09:03:31.022594 kernel: Initialise system trusted keyrings Jan 16 09:03:31.022616 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 16 09:03:31.022626 kernel: Key type asymmetric registered Jan 16 09:03:31.022635 kernel: Asymmetric key parser 'x509' registered Jan 16 09:03:31.022643 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 16 09:03:31.022653 kernel: io scheduler mq-deadline registered Jan 16 09:03:31.022668 kernel: io scheduler kyber registered Jan 16 09:03:31.022682 kernel: io scheduler bfq registered Jan 16 09:03:31.022696 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 16 09:03:31.022711 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 16 09:03:31.022730 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 16 09:03:31.022745 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 16 09:03:31.022758 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 16 09:03:31.022772 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 16 09:03:31.022787 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 16 09:03:31.022801 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 16 09:03:31.022816 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 16 09:03:31.022832 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 16 09:03:31.023611 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 16 09:03:31.023777 kernel: rtc_cmos 00:03: registered as rtc0 Jan 16 09:03:31.023910 kernel: rtc_cmos 00:03: setting system clock to 2025-01-16T09:03:30 UTC (1737018210) Jan 16 09:03:31.024027 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 16 09:03:31.024042 kernel: intel_pstate: CPU model not supported Jan 16 09:03:31.024056 kernel: NET: Registered PF_INET6 protocol family Jan 16 09:03:31.024069 kernel: Segment Routing with IPv6 Jan 16 09:03:31.024083 kernel: In-situ OAM (IOAM) with IPv6 Jan 16 09:03:31.024095 kernel: NET: Registered PF_PACKET protocol family Jan 16 09:03:31.024114 kernel: Key type dns_resolver registered Jan 16 09:03:31.024163 kernel: IPI shorthand broadcast: enabled Jan 16 09:03:31.024176 kernel: sched_clock: Marking stable (1224040213, 164897960)->(1442193213, -53255040) Jan 16 09:03:31.024188 kernel: registered taskstats version 1 Jan 16 09:03:31.024200 kernel: Loading compiled-in X.509 certificates Jan 16 09:03:31.024213 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 16 09:03:31.024225 kernel: Key type .fscrypt registered
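Three block I/O schedulers register above (mq-deadline, kyber, bfq); which one a given disk uses is a per-device sysfs attribute where the bracketed name is active. A sketch, assuming the virtio disk is named vda as it is later in this boot:

    from pathlib import Path

    sched = Path("/sys/block/vda/queue/scheduler")  # vda assumed from this log

    # Reads something like "[mq-deadline] kyber bfq none"; brackets mark the
    # active scheduler. Writing one of the other names switches it (needs root).
    print(sched.read_text().strip())
    # sched.write_text("kyber")  # uncomment to switch, as root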
Jan 16 09:03:31.024248 kernel: Key type fscrypt-provisioning registered Jan 16 09:03:31.024260 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 16 09:03:31.024310 kernel: ima: Allocated hash algorithm: sha1 Jan 16 09:03:31.024324 kernel: ima: No architecture policies found Jan 16 09:03:31.024337 kernel: clk: Disabling unused clocks Jan 16 09:03:31.024351 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 16 09:03:31.024365 kernel: Write protecting the kernel read-only data: 36864k Jan 16 09:03:31.024418 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 16 09:03:31.024435 kernel: Run /init as init process Jan 16 09:03:31.024448 kernel: with arguments: Jan 16 09:03:31.024464 kernel: /init Jan 16 09:03:31.024485 kernel: with environment: Jan 16 09:03:31.024500 kernel: HOME=/ Jan 16 09:03:31.024634 kernel: TERM=linux Jan 16 09:03:31.024650 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 16 09:03:31.024672 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 16 09:03:31.024689 systemd[1]: Detected virtualization kvm. Jan 16 09:03:31.024703 systemd[1]: Detected architecture x86-64. Jan 16 09:03:31.024729 systemd[1]: Running in initrd. Jan 16 09:03:31.024744 systemd[1]: No hostname configured, using default hostname. Jan 16 09:03:31.024757 systemd[1]: Hostname set to <localhost>. Jan 16 09:03:31.024771 systemd[1]: Initializing machine ID from VM UUID. Jan 16 09:03:31.024785 systemd[1]: Queued start job for default target initrd.target. Jan 16 09:03:31.024809 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 09:03:31.024825 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 09:03:31.024836 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 16 09:03:31.024852 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 09:03:31.024866 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 16 09:03:31.024881 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 16 09:03:31.024899 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 16 09:03:31.024911 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 16 09:03:31.024921 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 09:03:31.024936 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 09:03:31.024948 systemd[1]: Reached target paths.target - Path Units. Jan 16 09:03:31.024958 systemd[1]: Reached target slices.target - Slice Units. Jan 16 09:03:31.024967 systemd[1]: Reached target swap.target - Swaps. Jan 16 09:03:31.024978 systemd[1]: Reached target timers.target - Timer Units. Jan 16 09:03:31.024987 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 09:03:31.024996 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
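systemd's "Detected virtualization kvm" and "Initializing machine ID from VM UUID" both lean on firmware-provided identity, the same DMI strings the kernel printed earlier ("DigitalOcean Droplet/Droplet"). Those strings are readable from sysfs; product_uuid (the VM UUID) is root-only. A sketch:

    from pathlib import Path

    DMI = Path("/sys/class/dmi/id")

    for key in ("sys_vendor", "product_name", "product_uuid"):
        try:
            print(f"{key:13}", (DMI / key).read_text().strip())
        except PermissionError:
            print(f"{key:13} (readable by root only)")  # product_uuid is mode 0400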
Jan 16 09:03:31.025008 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 16 09:03:31.025021 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 16 09:03:31.025032 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 09:03:31.025041 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 09:03:31.025050 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 09:03:31.025059 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 09:03:31.025074 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 16 09:03:31.025097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 09:03:31.025122 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 16 09:03:31.025219 systemd[1]: Starting systemd-fsck-usr.service... Jan 16 09:03:31.025228 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 09:03:31.025238 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 09:03:31.025246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:03:31.025296 systemd-journald[184]: Collecting audit messages is disabled. Jan 16 09:03:31.025331 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 16 09:03:31.025341 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 09:03:31.025351 systemd-journald[184]: Journal started Jan 16 09:03:31.025377 systemd-journald[184]: Runtime Journal (/run/log/journal/562d7b63dd2147758299ba91e519e64f) is 4.9M, max 39.3M, 34.4M free. Jan 16 09:03:31.028187 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 09:03:31.032500 systemd[1]: Finished systemd-fsck-usr.service. Jan 16 09:03:31.040550 systemd-modules-load[185]: Inserted module 'overlay' Jan 16 09:03:31.092744 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 16 09:03:31.092774 kernel: Bridge firewalling registered Jan 16 09:03:31.042387 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 09:03:31.076330 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 16 09:03:31.103469 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 09:03:31.104566 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 09:03:31.117513 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:03:31.118626 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 09:03:31.126466 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:03:31.129388 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 09:03:31.135351 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 09:03:31.136295 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 09:03:31.150962 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:03:31.159354 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
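The bridge warning above means iptables filtering of bridged traffic now requires the br_netfilter module, which the initrd inserts right after (along with overlay). Whether a module actually landed can be checked from /proc/modules; a small sketch:

    def module_loaded(name: str) -> bool:
        """Check /proc/modules for a loaded kernel module (e.g. br_netfilter)."""
        with open("/proc/modules") as f:
            return any(line.split()[0] == name for line in f)

    for mod in ("overlay", "br_netfilter"):  # both inserted during this boot
        print(mod, "loaded" if module_loaded(mod) else "missing")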
Jan 16 09:03:31.161193 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 09:03:31.163223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 09:03:31.168386 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 16 09:03:31.192887 systemd-resolved[217]: Positive Trust Anchors: Jan 16 09:03:31.193692 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 09:03:31.193754 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 09:03:31.197036 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 16 09:03:31.198484 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 09:03:31.199351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 09:03:31.202301 dracut-cmdline[220]: dracut-dracut-053 Jan 16 09:03:31.204735 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 16 09:03:31.312165 kernel: SCSI subsystem initialized Jan 16 09:03:31.324199 kernel: Loading iSCSI transport class v2.0-870. Jan 16 09:03:31.339206 kernel: iscsi: registered transport (tcp) Jan 16 09:03:31.374252 kernel: iscsi: registered transport (qla4xxx) Jan 16 09:03:31.374343 kernel: QLogic iSCSI HBA Driver Jan 16 09:03:31.437647 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 16 09:03:31.444479 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 16 09:03:31.483577 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 16 09:03:31.483678 kernel: device-mapper: uevent: version 1.0.3 Jan 16 09:03:31.486400 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 16 09:03:31.541214 kernel: raid6: avx2x4 gen() 19382 MB/s Jan 16 09:03:31.558224 kernel: raid6: avx2x2 gen() 26660 MB/s Jan 16 09:03:31.576681 kernel: raid6: avx2x1 gen() 15906 MB/s Jan 16 09:03:31.576782 kernel: raid6: using algorithm avx2x2 gen() 26660 MB/s Jan 16 09:03:31.594438 kernel: raid6: .... xor() 12599 MB/s, rmw enabled Jan 16 09:03:31.594547 kernel: raid6: using avx2x2 recovery algorithm Jan 16 09:03:31.620204 kernel: xor: automatically using best checksumming function avx Jan 16 09:03:31.808192 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 16 09:03:31.825488 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 16 09:03:31.831449 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
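dracut echoes the effective kernel command line above, with rootflags=rw and mount.usrflags=ro appearing twice because dracut prepends its own copy; last-wins parsing makes the duplication harmless. A sketch of splitting /proc/cmdline into key/value pairs:

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into {key: value}; bare flags map to ''.
        Later duplicates (e.g. the repeated rootflags=rw here) overwrite earlier ones."""
        params = {}
        for token in cmdline.split():
            key, _, value = token.partition("=")
            params[key] = value
        return params

    with open("/proc/cmdline") as f:
        params = parse_cmdline(f.read())
    print(params.get("root"), params.get("verity.usrhash"))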
Jan 16 09:03:31.862708 systemd-udevd[402]: Using default interface naming scheme 'v255'. Jan 16 09:03:31.869639 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 09:03:31.880334 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 16 09:03:31.900991 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Jan 16 09:03:31.951547 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 09:03:31.958480 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 09:03:32.030178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 09:03:32.037981 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 16 09:03:32.068258 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 16 09:03:32.071491 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 09:03:32.073719 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 09:03:32.075303 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 09:03:32.083425 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 16 09:03:32.116121 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 16 09:03:32.121276 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 16 09:03:32.164770 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 16 09:03:32.164944 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 16 09:03:32.164964 kernel: GPT:9289727 != 125829119 Jan 16 09:03:32.164979 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 16 09:03:32.165006 kernel: GPT:9289727 != 125829119 Jan 16 09:03:32.165023 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 16 09:03:32.165038 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:03:32.165052 kernel: cryptd: max_cpu_qlen set to 1000 Jan 16 09:03:32.170514 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 16 09:03:32.201553 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Jan 16 09:03:32.201787 kernel: AVX2 version of gcm_enc/dec engaged. Jan 16 09:03:32.201808 kernel: scsi host0: Virtio SCSI HBA Jan 16 09:03:32.207068 kernel: AES CTR mode by8 optimization enabled Jan 16 09:03:32.248205 kernel: libata version 3.00 loaded. Jan 16 09:03:32.249072 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 09:03:32.250157 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 09:03:32.255567 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:03:32.257665 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 09:03:32.257859 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:03:32.260018 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 09:03:32.274214 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446) Jan 16 09:03:32.274889 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
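The "GPT:9289727 != 125829119" warnings below mean the backup GPT header still sits where the smaller original disk image ended (LBA 9289727) instead of at the last LBA of the resized 60 GiB disk, which is exactly the comparison the kernel makes against the primary header's backup-LBA field. A sketch of reading those fields straight from the primary header at LBA 1, assuming 512-byte sectors and root access:

    import struct

    SECTOR = 512

    def gpt_backup_lba(dev: str = "/dev/vda") -> tuple:
        """Return (backup_lba_from_header, last_lba_of_disk) for a GPT disk."""
        with open(dev, "rb") as f:
            f.seek(SECTOR)                # primary GPT header lives at LBA 1
            hdr = f.read(SECTOR)
            assert hdr[:8] == b"EFI PART", "not a GPT disk"
            # Per the UEFI spec: current LBA at offset 24, backup LBA at offset 32.
            _current_lba, backup_lba = struct.unpack_from("<QQ", hdr, 24)
            f.seek(0, 2)                  # disk size -> last addressable LBA
            last_lba = f.tell() // SECTOR - 1
        return backup_lba, last_lba

    backup, last = gpt_backup_lba()
    print(backup, "!=" if backup != last else "==", last)  # mirrors the log line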
Jan 16 09:03:32.284526 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (457) Jan 16 09:03:32.304956 kernel: ACPI: bus type USB registered Jan 16 09:03:32.305027 kernel: usbcore: registered new interface driver usbfs Jan 16 09:03:32.306328 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 16 09:03:32.308321 kernel: usbcore: registered new interface driver hub Jan 16 09:03:32.308349 kernel: usbcore: registered new device driver usb Jan 16 09:03:32.321457 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 16 09:03:32.324245 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 16 09:03:32.410157 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 16 09:03:32.410514 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 16 09:03:32.410679 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 16 09:03:32.410798 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 16 09:03:32.410933 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 16 09:03:32.411050 kernel: hub 1-0:1.0: USB hub found Jan 16 09:03:32.411242 kernel: hub 1-0:1.0: 2 ports detected Jan 16 09:03:32.411393 kernel: scsi host1: ata_piix Jan 16 09:03:32.411565 kernel: scsi host2: ata_piix Jan 16 09:03:32.411701 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 16 09:03:32.411719 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 16 09:03:32.347380 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 16 09:03:32.416939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 09:03:32.428034 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 16 09:03:32.436414 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 16 09:03:32.440319 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 09:03:32.447725 disk-uuid[540]: Primary Header is updated. Jan 16 09:03:32.447725 disk-uuid[540]: Secondary Entries is updated. Jan 16 09:03:32.447725 disk-uuid[540]: Secondary Header is updated. Jan 16 09:03:32.453693 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:03:32.484561 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 09:03:33.467172 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 16 09:03:33.468121 disk-uuid[542]: The operation has completed successfully. Jan 16 09:03:33.521221 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 16 09:03:33.521395 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 16 09:03:33.533550 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 16 09:03:33.545491 sh[563]: Success Jan 16 09:03:33.566360 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 16 09:03:33.652162 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 16 09:03:33.664340 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 16 09:03:33.670292 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 16 09:03:33.703710 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 16 09:03:33.703803 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:03:33.705547 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 16 09:03:33.707335 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 16 09:03:33.709325 kernel: BTRFS info (device dm-0): using free space tree Jan 16 09:03:33.721839 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 16 09:03:33.723332 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 16 09:03:33.730494 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 16 09:03:33.735386 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 16 09:03:33.747589 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:03:33.747662 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:03:33.749454 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:03:33.757296 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:03:33.773685 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 16 09:03:33.775907 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:03:33.784894 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 16 09:03:33.793430 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 16 09:03:33.921018 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 09:03:33.931399 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 09:03:33.961074 systemd-networkd[746]: lo: Link UP Jan 16 09:03:33.962139 systemd-networkd[746]: lo: Gained carrier Jan 16 09:03:33.965390 systemd-networkd[746]: Enumeration completed Jan 16 09:03:33.966198 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 09:03:33.966813 systemd[1]: Reached target network.target - Network. Jan 16 09:03:33.968824 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 16 09:03:33.968829 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 16 09:03:33.971819 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 16 09:03:33.971824 systemd-networkd[746]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 09:03:33.974110 systemd-networkd[746]: eth0: Link UP Jan 16 09:03:33.974116 systemd-networkd[746]: eth0: Gained carrier Jan 16 09:03:33.974151 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 16 09:03:33.979896 systemd-networkd[746]: eth1: Link UP Jan 16 09:03:33.979901 systemd-networkd[746]: eth1: Gained carrier Jan 16 09:03:33.979918 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
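systemd-networkd matched eth0 to the DigitalOcean unit and eth1 to zz-default purely by interface name, hence its "potentially unpredictable interface name" caveat. The carrier and address state it reports can be cross-checked from /sys/class/net with a read-only sketch:

    from pathlib import Path

    for iface in sorted(Path("/sys/class/net").iterdir()):
        mac = (iface / "address").read_text().strip()
        state = (iface / "operstate").read_text().strip()  # up / down / unknown
        print(f"{iface.name:6} {state:8} {mac}")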
Jan 16 09:03:33.987194 ignition[646]: Ignition 2.19.0 Jan 16 09:03:33.987206 ignition[646]: Stage: fetch-offline Jan 16 09:03:33.989300 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 09:03:33.987277 ignition[646]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:03:33.991235 systemd-networkd[746]: eth0: DHCPv4 address 64.227.96.98/20, gateway 64.227.96.1 acquired from 169.254.169.253 Jan 16 09:03:33.987289 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:03:33.987432 ignition[646]: parsed url from cmdline: "" Jan 16 09:03:33.987436 ignition[646]: no config URL provided Jan 16 09:03:33.987442 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 09:03:33.987451 ignition[646]: no config at "/usr/lib/ignition/user.ign" Jan 16 09:03:33.987458 ignition[646]: failed to fetch config: resource requires networking Jan 16 09:03:33.987695 ignition[646]: Ignition finished successfully Jan 16 09:03:34.001230 systemd-networkd[746]: eth1: DHCPv4 address 10.124.0.5/20 acquired from 169.254.169.253 Jan 16 09:03:34.002411 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 16 09:03:34.025587 ignition[755]: Ignition 2.19.0 Jan 16 09:03:34.025603 ignition[755]: Stage: fetch Jan 16 09:03:34.025950 ignition[755]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:03:34.025963 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:03:34.026109 ignition[755]: parsed url from cmdline: "" Jan 16 09:03:34.026113 ignition[755]: no config URL provided Jan 16 09:03:34.026119 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 09:03:34.026152 ignition[755]: no config at "/usr/lib/ignition/user.ign" Jan 16 09:03:34.026178 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 16 09:03:34.059074 ignition[755]: GET result: OK Jan 16 09:03:34.059307 ignition[755]: parsing config with SHA512: 7a9835c9bdf4dbccf19948792d98a23756c03baba2a2641c64b9f945ee2ab623b455bfd161b5c963dbb2de4fc9db66131313aed5e8a967ead65a79e8dec31650 Jan 16 09:03:34.067518 unknown[755]: fetched base config from "system" Jan 16 09:03:34.067535 unknown[755]: fetched base config from "system" Jan 16 09:03:34.068417 ignition[755]: fetch: fetch complete Jan 16 09:03:34.067545 unknown[755]: fetched user config from "digitalocean" Jan 16 09:03:34.068426 ignition[755]: fetch: fetch passed Jan 16 09:03:34.071461 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 16 09:03:34.068497 ignition[755]: Ignition finished successfully Jan 16 09:03:34.089610 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 16 09:03:34.122562 ignition[761]: Ignition 2.19.0 Jan 16 09:03:34.122627 ignition[761]: Stage: kargs Jan 16 09:03:34.122867 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:03:34.122879 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:03:34.125649 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 16 09:03:34.124037 ignition[761]: kargs: kargs passed Jan 16 09:03:34.124097 ignition[761]: Ignition finished successfully Jan 16 09:03:34.134495 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
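Ignition's fetch stage GETs http://169.254.169.254/metadata/v1/user-data, the link-local DigitalOcean metadata service, which is why the earlier fetch-offline stage had to fail with "resource requires networking". A stdlib-only sketch of the same request from inside the droplet:

    import urllib.request

    URL = "http://169.254.169.254/metadata/v1/user-data"  # endpoint from the log

    # Only reachable from the droplet itself; a short timeout keeps failures
    # fast, mirroring Ignition's "attempt #1" / "GET result: OK" sequence above.
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(resp.status, resp.read(200))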
Jan 16 09:03:34.164877 ignition[768]: Ignition 2.19.0 Jan 16 09:03:34.164893 ignition[768]: Stage: disks Jan 16 09:03:34.165559 ignition[768]: no configs at "/usr/lib/ignition/base.d" Jan 16 09:03:34.165586 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:03:34.167003 ignition[768]: disks: disks passed Jan 16 09:03:34.169366 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 16 09:03:34.167109 ignition[768]: Ignition finished successfully Jan 16 09:03:34.175724 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 16 09:03:34.176608 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 09:03:34.178419 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 09:03:34.180038 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 09:03:34.181679 systemd[1]: Reached target basic.target - Basic System. Jan 16 09:03:34.190485 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 16 09:03:34.215255 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 16 09:03:34.219643 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 16 09:03:34.230531 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 16 09:03:34.388183 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 16 09:03:34.390074 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 16 09:03:34.392364 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 16 09:03:34.399408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 09:03:34.409668 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 16 09:03:34.415423 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 16 09:03:34.418351 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 16 09:03:34.420695 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 16 09:03:34.420745 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 09:03:34.427826 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 16 09:03:34.434719 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785) Jan 16 09:03:34.439376 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 16 09:03:34.446699 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:03:34.446749 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:03:34.446783 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:03:34.463283 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:03:34.466803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
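systemd-fsck's "clean, 14/553520 files, 52654/553472 blocks" summary above is simply inode and block usage of the ext4 ROOT filesystem; once /sysroot is mounted, the same numbers are available through statvfs. A sketch:

    import os

    st = os.statvfs("/sysroot")  # mount point from this boot

    blocks_used = st.f_blocks - st.f_bfree
    inodes_used = st.f_files - st.f_ffree
    print(f"{inodes_used}/{st.f_files} files, {blocks_used}/{st.f_blocks} blocks "
          f"(block size {st.f_frsize})")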
Jan 16 09:03:34.556297 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Jan 16 09:03:34.569593 coreos-metadata[787]: Jan 16 09:03:34.569 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:03:34.574873 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Jan 16 09:03:34.576328 coreos-metadata[788]: Jan 16 09:03:34.575 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:03:34.583328 coreos-metadata[787]: Jan 16 09:03:34.582 INFO Fetch successful Jan 16 09:03:34.584214 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Jan 16 09:03:34.589023 coreos-metadata[788]: Jan 16 09:03:34.588 INFO Fetch successful Jan 16 09:03:34.592211 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Jan 16 09:03:34.593810 coreos-metadata[788]: Jan 16 09:03:34.593 INFO wrote hostname ci-4081.3.0-a-a78886c5b6 to /sysroot/etc/hostname Jan 16 09:03:34.598202 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 09:03:34.599737 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 16 09:03:34.599858 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 16 09:03:34.739040 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 16 09:03:34.745552 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 16 09:03:34.749391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 16 09:03:34.759678 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 16 09:03:34.761200 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:03:34.791703 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 16 09:03:34.810081 ignition[908]: INFO : Ignition 2.19.0 Jan 16 09:03:34.812316 ignition[908]: INFO : Stage: mount Jan 16 09:03:34.812316 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 09:03:34.812316 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 16 09:03:34.814541 ignition[908]: INFO : mount: mount passed Jan 16 09:03:34.814541 ignition[908]: INFO : Ignition finished successfully Jan 16 09:03:34.815611 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 16 09:03:34.823331 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 16 09:03:34.861623 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 09:03:34.872187 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (919) Jan 16 09:03:34.875973 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 16 09:03:34.876069 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 16 09:03:34.876084 kernel: BTRFS info (device vda6): using free space tree Jan 16 09:03:34.881336 kernel: BTRFS info (device vda6): auto enabling async discard Jan 16 09:03:34.884948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 16 09:03:34.933653 ignition[936]: INFO : Ignition 2.19.0
Jan 16 09:03:34.933653 ignition[936]: INFO : Stage: files
Jan 16 09:03:34.935775 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 09:03:34.935775 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 09:03:34.935775 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Jan 16 09:03:34.938709 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 16 09:03:34.938709 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 16 09:03:34.940691 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 16 09:03:34.941809 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 16 09:03:34.941809 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 16 09:03:34.941426 unknown[936]: wrote ssh authorized keys file for user: core
Jan 16 09:03:34.945996 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 16 09:03:34.947237 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 16 09:03:35.003152 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 16 09:03:35.133911 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 16 09:03:35.135554 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 16 09:03:35.135554 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 16 09:03:35.232587 systemd-networkd[746]: eth1: Gained IPv6LL
Jan 16 09:03:35.600968 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 16 09:03:35.684880 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 16 09:03:35.684880 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 16 09:03:35.684880 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 16 09:03:35.684880 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 09:03:35.684880 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 09:03:35.684880 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 09:03:35.684880 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 09:03:35.684880 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 09:03:35.684880 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 09:03:35.705961 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 09:03:35.705961 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 09:03:35.705961 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 16 09:03:35.705961 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 16 09:03:35.705961 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 16 09:03:35.705961 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 16 09:03:35.744650 systemd-networkd[746]: eth0: Gained IPv6LL
Jan 16 09:03:36.144240 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 16 09:03:36.418415 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 16 09:03:36.418415 ignition[936]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 16 09:03:36.420692 ignition[936]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 09:03:36.420692 ignition[936]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 09:03:36.420692 ignition[936]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 16 09:03:36.420692 ignition[936]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 16 09:03:36.420692 ignition[936]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 16 09:03:36.420692 ignition[936]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 09:03:36.420692 ignition[936]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 09:03:36.420692 ignition[936]: INFO : files: files passed
Jan 16 09:03:36.420692 ignition[936]: INFO : Ignition finished successfully
Jan 16 09:03:36.422413 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 16 09:03:36.430392 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 16 09:03:36.433371 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 16 09:03:36.439527 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 16 09:03:36.439676 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 16 09:03:36.462287 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 09:03:36.462287 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 09:03:36.465246 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 09:03:36.466622 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 09:03:36.468795 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 16 09:03:36.475468 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 16 09:03:36.516931 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 16 09:03:36.517061 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 16 09:03:36.519029 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 16 09:03:36.520488 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 16 09:03:36.522043 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 16 09:03:36.526434 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 16 09:03:36.555261 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 09:03:36.562379 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 16 09:03:36.581538 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 16 09:03:36.582248 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 09:03:36.582913 systemd[1]: Stopped target timers.target - Timer Units.
Jan 16 09:03:36.585432 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 16 09:03:36.585569 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 09:03:36.586931 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 16 09:03:36.587657 systemd[1]: Stopped target basic.target - Basic System.
Jan 16 09:03:36.588815 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 16 09:03:36.590236 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 09:03:36.591401 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 16 09:03:36.592606 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 16 09:03:36.594182 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 09:03:36.595418 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 16 09:03:36.596767 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 16 09:03:36.598159 systemd[1]: Stopped target swap.target - Swaps.
Jan 16 09:03:36.599602 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 16 09:03:36.599753 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 09:03:36.601616 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 16 09:03:36.602457 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 09:03:36.603746 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 16 09:03:36.603916 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 09:03:36.605442 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 16 09:03:36.605650 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 16 09:03:36.607438 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 16 09:03:36.607710 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 09:03:36.609256 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 16 09:03:36.609505 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 16 09:03:36.610546 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 16 09:03:36.610713 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 16 09:03:36.622207 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 16 09:03:36.624413 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 16 09:03:36.625384 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 16 09:03:36.626295 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 09:03:36.630751 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 16 09:03:36.630904 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 09:03:36.638474 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 16 09:03:36.638618 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 16 09:03:36.654554 ignition[989]: INFO : Ignition 2.19.0
Jan 16 09:03:36.654554 ignition[989]: INFO : Stage: umount
Jan 16 09:03:36.654554 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 09:03:36.654554 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Jan 16 09:03:36.654554 ignition[989]: INFO : umount: umount passed
Jan 16 09:03:36.654554 ignition[989]: INFO : Ignition finished successfully
Jan 16 09:03:36.658988 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 16 09:03:36.665438 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 16 09:03:36.668586 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 16 09:03:36.669688 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 16 09:03:36.669864 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 16 09:03:36.671477 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 16 09:03:36.671582 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 16 09:03:36.672921 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 16 09:03:36.672999 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 16 09:03:36.673874 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 16 09:03:36.673936 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 16 09:03:36.674864 systemd[1]: Stopped target network.target - Network.
Jan 16 09:03:36.676166 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 16 09:03:36.676247 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 09:03:36.677749 systemd[1]: Stopped target paths.target - Path Units.
Jan 16 09:03:36.678887 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 16 09:03:36.679248 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 09:03:36.680231 systemd[1]: Stopped target slices.target - Slice Units.
Jan 16 09:03:36.681723 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 16 09:03:36.682953 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 16 09:03:36.683023 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 09:03:36.684165 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 16 09:03:36.684214 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 09:03:36.685692 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 16 09:03:36.685782 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 16 09:03:36.687176 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 16 09:03:36.687238 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 16 09:03:36.688278 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 16 09:03:36.688329 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 16 09:03:36.690152 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 16 09:03:36.691642 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 16 09:03:36.695282 systemd-networkd[746]: eth1: DHCPv6 lease lost
Jan 16 09:03:36.699202 systemd-networkd[746]: eth0: DHCPv6 lease lost
Jan 16 09:03:36.700450 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 16 09:03:36.700597 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 16 09:03:36.703902 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 16 09:03:36.704091 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 16 09:03:36.706141 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 16 09:03:36.706224 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 09:03:36.722154 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 16 09:03:36.722771 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 16 09:03:36.722863 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 09:03:36.723789 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 16 09:03:36.723883 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 16 09:03:36.725306 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 16 09:03:36.725498 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 16 09:03:36.726735 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 16 09:03:36.726801 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 09:03:36.728491 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 09:03:36.744459 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 16 09:03:36.744664 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 09:03:36.746895 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 16 09:03:36.747013 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 16 09:03:36.748168 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 16 09:03:36.748226 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 09:03:36.750160 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 16 09:03:36.750237 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 09:03:36.752752 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 16 09:03:36.752834 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 16 09:03:36.754491 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 09:03:36.754581 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 09:03:36.763636 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 16 09:03:36.764329 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 16 09:03:36.764417 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 09:03:36.765275 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 16 09:03:36.765329 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 09:03:36.766874 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 16 09:03:36.766929 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 09:03:36.769647 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 09:03:36.769715 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 09:03:36.773799 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 16 09:03:36.773918 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 16 09:03:36.775602 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 16 09:03:36.775709 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 16 09:03:36.778333 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 16 09:03:36.784396 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 16 09:03:36.794267 systemd[1]: Switching root.
Jan 16 09:03:36.860200 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jan 16 09:03:36.860327 systemd-journald[184]: Journal stopped
Jan 16 09:03:38.335397 kernel: SELinux: policy capability network_peer_controls=1
Jan 16 09:03:38.335500 kernel: SELinux: policy capability open_perms=1
Jan 16 09:03:38.335522 kernel: SELinux: policy capability extended_socket_class=1
Jan 16 09:03:38.335541 kernel: SELinux: policy capability always_check_network=0
Jan 16 09:03:38.335558 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 16 09:03:38.335577 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 16 09:03:38.335595 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 16 09:03:38.335621 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 16 09:03:38.335641 kernel: audit: type=1403 audit(1737018217.159:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 16 09:03:38.335664 systemd[1]: Successfully loaded SELinux policy in 46.799ms.
Jan 16 09:03:38.335796 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.512ms.
Jan 16 09:03:38.335819 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 09:03:38.335838 systemd[1]: Detected virtualization kvm.
Jan 16 09:03:38.335856 systemd[1]: Detected architecture x86-64.
Jan 16 09:03:38.335868 systemd[1]: Detected first boot.
Jan 16 09:03:38.335903 systemd[1]: Hostname set to <ci-4081.3.0-a-a78886c5b6>.
Jan 16 09:03:38.335921 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 09:03:38.335940 zram_generator::config[1036]: No configuration found.
Jan 16 09:03:38.335957 systemd[1]: Populated /etc with preset unit settings.
Jan 16 09:03:38.335973 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 16 09:03:38.335988 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 16 09:03:38.336006 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 16 09:03:38.336024 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 16 09:03:38.336039 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 16 09:03:38.336061 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 16 09:03:38.336077 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 16 09:03:38.336095 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 16 09:03:38.336111 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 16 09:03:38.336149 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 16 09:03:38.336170 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 16 09:03:38.336189 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 09:03:38.336209 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 09:03:38.336232 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 16 09:03:38.336249 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 16 09:03:38.336267 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 16 09:03:38.336283 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 09:03:38.336301 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 16 09:03:38.336318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 09:03:38.336335 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 16 09:03:38.336350 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 16 09:03:38.336378 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 16 09:03:38.336395 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 16 09:03:38.336417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 09:03:38.336435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 09:03:38.336454 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 09:03:38.336472 systemd[1]: Reached target swap.target - Swaps.
Jan 16 09:03:38.336491 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 16 09:03:38.336508 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 16 09:03:38.336529 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 09:03:38.336548 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 09:03:38.336565 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 09:03:38.336582 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 16 09:03:38.336602 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 16 09:03:38.336620 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 16 09:03:38.336638 systemd[1]: Mounting media.mount - External Media Directory...
Jan 16 09:03:38.336656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:03:38.336677 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 16 09:03:38.336700 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 16 09:03:38.336715 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 16 09:03:38.336729 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 16 09:03:38.336745 systemd[1]: Reached target machines.target - Containers.
Jan 16 09:03:38.336761 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 16 09:03:38.336777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 09:03:38.336792 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 09:03:38.336810 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 16 09:03:38.336856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 09:03:38.336873 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 09:03:38.336891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 09:03:38.336908 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 16 09:03:38.336925 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 09:03:38.336944 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 16 09:03:38.336962 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 16 09:03:38.336980 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 16 09:03:38.337000 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 16 09:03:38.337017 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 16 09:03:38.337033 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 09:03:38.337052 kernel: loop: module loaded
Jan 16 09:03:38.337085 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 09:03:38.337105 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 16 09:03:38.338243 kernel: ACPI: bus type drm_connector registered
Jan 16 09:03:38.338312 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 16 09:03:38.338334 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 09:03:38.338360 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 16 09:03:38.338375 systemd[1]: Stopped verity-setup.service.
Jan 16 09:03:38.338388 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:03:38.338401 kernel: fuse: init (API version 7.39)
Jan 16 09:03:38.338412 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 16 09:03:38.338425 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 16 09:03:38.338437 systemd[1]: Mounted media.mount - External Media Directory.
Jan 16 09:03:38.338450 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 16 09:03:38.338462 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 16 09:03:38.338476 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 16 09:03:38.338493 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 16 09:03:38.338513 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 09:03:38.338538 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 16 09:03:38.338551 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 16 09:03:38.338563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 09:03:38.338575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 09:03:38.338587 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 09:03:38.338598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 09:03:38.338610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 09:03:38.338630 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 09:03:38.338648 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 16 09:03:38.338664 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 16 09:03:38.338683 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 09:03:38.338704 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 09:03:38.338785 systemd-journald[1112]: Collecting audit messages is disabled.
Jan 16 09:03:38.338823 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 09:03:38.338844 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 09:03:38.338862 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 16 09:03:38.338880 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 16 09:03:38.338900 systemd-journald[1112]: Journal started
Jan 16 09:03:38.338938 systemd-journald[1112]: Runtime Journal (/run/log/journal/562d7b63dd2147758299ba91e519e64f) is 4.9M, max 39.3M, 34.4M free.
Jan 16 09:03:37.925640 systemd[1]: Queued start job for default target multi-user.target.
Jan 16 09:03:37.944447 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 16 09:03:37.945022 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 16 09:03:38.350286 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 16 09:03:38.360757 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 16 09:03:38.366245 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 16 09:03:38.366369 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 09:03:38.373521 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 16 09:03:38.386195 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 16 09:03:38.398907 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 16 09:03:38.398997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 09:03:38.410508 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 16 09:03:38.423247 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 09:03:38.429451 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 16 09:03:38.429548 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 09:03:38.438308 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 09:03:38.450014 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 16 09:03:38.469289 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 09:03:38.473332 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 09:03:38.482299 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 16 09:03:38.486120 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 16 09:03:38.487074 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 16 09:03:38.507120 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 16 09:03:38.539950 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 09:03:38.555052 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 16 09:03:38.565920 kernel: loop0: detected capacity change from 0 to 210664
Jan 16 09:03:38.564497 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 16 09:03:38.573425 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 16 09:03:38.581564 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 16 09:03:38.583757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 09:03:38.626982 systemd-journald[1112]: Time spent on flushing to /var/log/journal/562d7b63dd2147758299ba91e519e64f is 91.233ms for 998 entries.
Jan 16 09:03:38.626982 systemd-journald[1112]: System Journal (/var/log/journal/562d7b63dd2147758299ba91e519e64f) is 8.0M, max 195.6M, 187.6M free.
Jan 16 09:03:38.754050 systemd-journald[1112]: Received client request to flush runtime journal.
Jan 16 09:03:38.755120 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 16 09:03:38.755211 kernel: loop1: detected capacity change from 0 to 140768
Jan 16 09:03:38.675480 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 16 09:03:38.757483 kernel: loop2: detected capacity change from 0 to 8
Jan 16 09:03:38.676689 systemd-tmpfiles[1135]: ACLs are not supported, ignoring.
Jan 16 09:03:38.676703 systemd-tmpfiles[1135]: ACLs are not supported, ignoring.
Jan 16 09:03:38.683113 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 16 09:03:38.685040 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 16 09:03:38.698115 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 09:03:38.713586 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 16 09:03:38.761272 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 16 09:03:38.801270 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 16 09:03:38.812731 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 09:03:38.816515 kernel: loop3: detected capacity change from 0 to 142488
Jan 16 09:03:38.872550 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jan 16 09:03:38.872577 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Jan 16 09:03:38.878604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 09:03:38.888186 kernel: loop4: detected capacity change from 0 to 210664
Jan 16 09:03:38.915249 kernel: loop5: detected capacity change from 0 to 140768
Jan 16 09:03:38.955161 kernel: loop6: detected capacity change from 0 to 8
Jan 16 09:03:38.958168 kernel: loop7: detected capacity change from 0 to 142488
Jan 16 09:03:39.016026 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Jan 16 09:03:39.016626 (sd-merge)[1179]: Merged extensions into '/usr'.
Jan 16 09:03:39.031843 systemd[1]: Reloading requested from client PID 1134 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 16 09:03:39.031869 systemd[1]: Reloading...
Jan 16 09:03:39.292405 zram_generator::config[1206]: No configuration found.
Jan 16 09:03:39.297178 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 16 09:03:39.499335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 09:03:39.594930 systemd[1]: Reloading finished in 561 ms.
Jan 16 09:03:39.619879 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 16 09:03:39.625838 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 16 09:03:39.634431 systemd[1]: Starting ensure-sysext.service...
Jan 16 09:03:39.636382 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 09:03:39.647382 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
Jan 16 09:03:39.647399 systemd[1]: Reloading...
Jan 16 09:03:39.680568 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 16 09:03:39.681352 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 16 09:03:39.682976 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 16 09:03:39.683484 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jan 16 09:03:39.683720 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jan 16 09:03:39.692930 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 09:03:39.692945 systemd-tmpfiles[1250]: Skipping /boot
Jan 16 09:03:39.706601 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 09:03:39.706616 systemd-tmpfiles[1250]: Skipping /boot
Jan 16 09:03:39.765170 zram_generator::config[1276]: No configuration found.
Jan 16 09:03:39.915558 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 09:03:39.971601 systemd[1]: Reloading finished in 323 ms.
Jan 16 09:03:39.990300 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 16 09:03:39.995760 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 09:03:40.011965 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 16 09:03:40.017499 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 16 09:03:40.029371 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 16 09:03:40.042013 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 09:03:40.056071 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 09:03:40.064312 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 16 09:03:40.075804 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 16 09:03:40.078990 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:03:40.080255 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 09:03:40.090446 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 09:03:40.103471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 09:03:40.108013 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 09:03:40.110439 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 09:03:40.110654 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:03:40.119016 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:03:40.119341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 09:03:40.119539 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 09:03:40.119626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:03:40.122220 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 16 09:03:40.137004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:03:40.137538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 09:03:40.151270 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 09:03:40.152001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 09:03:40.152226 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:03:40.153048 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 16 09:03:40.155068 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 16 09:03:40.159773 systemd-udevd[1334]: Using default interface naming scheme 'v255'.
Jan 16 09:03:40.161648 systemd[1]: Finished ensure-sysext.service.
Jan 16 09:03:40.177323 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 16 09:03:40.181727 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 16 09:03:40.182724 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 16 09:03:40.183120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 09:03:40.184448 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 09:03:40.191647 augenrules[1353]: No rules
Jan 16 09:03:40.191961 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 09:03:40.192234 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 09:03:40.199576 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 09:03:40.200213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 09:03:40.202216 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 16 09:03:40.206718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 09:03:40.208318 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 09:03:40.219352 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 09:03:40.221519 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 09:03:40.222240 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 09:03:40.226370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 09:03:40.263197 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 16 09:03:40.284598 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 16 09:03:40.338335 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Jan 16 09:03:40.340222 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:03:40.340372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 09:03:40.345368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 09:03:40.356332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 09:03:40.367429 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 09:03:40.369376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 09:03:40.369432 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 16 09:03:40.369450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 16 09:03:40.393168 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1378)
Jan 16 09:03:40.405164 kernel: ISO 9660 Extensions: RRIP_1991A
Jan 16 09:03:40.408433 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Jan 16 09:03:40.433715 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 09:03:40.433911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 09:03:40.447262 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 16 09:03:40.453934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 09:03:40.454156 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 09:03:40.455065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 09:03:40.455277 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 09:03:40.456060 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 09:03:40.456109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 09:03:40.480342 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 16 09:03:40.481247 systemd[1]: Reached target time-set.target - System Time Set.
Jan 16 09:03:40.560183 systemd-networkd[1367]: lo: Link UP
Jan 16 09:03:40.561412 systemd-networkd[1367]: lo: Gained carrier
Jan 16 09:03:40.567927 systemd-networkd[1367]: Enumeration completed
Jan 16 09:03:40.568639 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 09:03:40.571224 systemd-networkd[1367]: eth0: Configuring with /run/systemd/network/10-ce:73:54:22:17:07.network.
Jan 16 09:03:40.572245 systemd-networkd[1367]: eth1: Configuring with /run/systemd/network/10-4e:bb:06:e9:d5:cd.network.
Jan 16 09:03:40.573220 systemd-resolved[1332]: Positive Trust Anchors:
Jan 16 09:03:40.573242 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 09:03:40.573293 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 09:03:40.573959 systemd-networkd[1367]: eth0: Link UP
Jan 16 09:03:40.573969 systemd-networkd[1367]: eth0: Gained carrier
Jan 16 09:03:40.578489 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 16 09:03:40.580853 systemd-networkd[1367]: eth1: Link UP
Jan 16 09:03:40.580866 systemd-networkd[1367]: eth1: Gained carrier
Jan 16 09:03:40.587877 systemd-resolved[1332]: Using system hostname 'ci-4081.3.0-a-a78886c5b6'.
Jan 16 09:03:40.589106 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
Jan 16 09:03:40.589400 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
Jan 16 09:03:40.592514 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 09:03:40.600389 systemd[1]: Reached target network.target - Network.
Jan 16 09:03:40.601314 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 09:03:40.629196 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 16 09:03:40.642638 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 16 09:03:40.649918 kernel: ACPI: button: Power Button [PWRF]
Jan 16 09:03:40.691179 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 16 09:03:40.701544 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 16 09:03:40.748170 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 16 09:03:40.764663 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 16 09:03:40.786162 kernel: mousedev: PS/2 mouse device common for all mice
Jan 16 09:03:40.791608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 09:03:40.800889 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 16 09:03:40.800982 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 16 09:03:40.805581 kernel: Console: switching to colour dummy device 80x25
Jan 16 09:03:40.806309 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 16 09:03:40.806384 kernel: [drm] features: -context_init
Jan 16 09:03:40.808404 kernel: [drm] number of scanouts: 1
Jan 16 09:03:40.808461 kernel: [drm] number of cap sets: 0
Jan 16 09:03:40.813209 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jan 16 09:03:40.818212 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 16 09:03:40.818405 kernel: Console: switching to colour frame buffer device 128x48
Jan 16 09:03:40.829623 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 16 09:03:40.870996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 09:03:40.871275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 09:03:40.956633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 09:03:40.963966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 09:03:40.965295 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 09:03:40.978545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 09:03:41.046208 kernel: EDAC MC: Ver: 3.0.0
Jan 16 09:03:41.073021 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 16 09:03:41.090909 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 16 09:03:41.111972 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 09:03:41.114525 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 16 09:03:41.150383 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 16 09:03:41.152575 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 09:03:41.152798 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 09:03:41.153504 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 16 09:03:41.155244 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 16 09:03:41.156622 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 16 09:03:41.156933 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 16 09:03:41.157326 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 16 09:03:41.157621 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 16 09:03:41.157660 systemd[1]: Reached target paths.target - Path Units.
Jan 16 09:03:41.157726 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 09:03:41.160267 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 16 09:03:41.162700 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 16 09:03:41.169328 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 16 09:03:41.178531 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 16 09:03:41.182420 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 16 09:03:41.184574 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 09:03:41.186647 systemd[1]: Reached target basic.target - Basic System.
Jan 16 09:03:41.189356 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 16 09:03:41.187662 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 16 09:03:41.187702 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 16 09:03:41.198641 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 16 09:03:41.210597 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 16 09:03:41.218369 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 16 09:03:41.223970 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 16 09:03:41.233582 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 16 09:03:41.235893 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 16 09:03:41.249639 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 16 09:03:41.259400 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 16 09:03:41.270622 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 16 09:03:41.278010 jq[1440]: false
Jan 16 09:03:41.282738 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 16 09:03:41.299522 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 16 09:03:41.304608 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 16 09:03:41.305478 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 16 09:03:41.312612 coreos-metadata[1438]: Jan 16 09:03:41.312 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Jan 16 09:03:41.320885 systemd[1]: Starting update-engine.service - Update Engine...
Jan 16 09:03:41.327452 coreos-metadata[1438]: Jan 16 09:03:41.327 INFO Fetch successful
Jan 16 09:03:41.335354 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 16 09:03:41.341563 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 16 09:03:41.347636 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 16 09:03:41.348478 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 16 09:03:41.369849 dbus-daemon[1439]: [system] SELinux support is enabled
Jan 16 09:03:41.376766 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 16 09:03:41.379920 update_engine[1449]: I20250116 09:03:41.379377 1449 main.cc:92] Flatcar Update Engine starting
Jan 16 09:03:41.388451 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 16 09:03:41.388518 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 16 09:03:41.393609 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 16 09:03:41.393710 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Jan 16 09:03:41.393741 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 16 09:03:41.428864 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 16 09:03:41.439019 update_engine[1449]: I20250116 09:03:41.437458 1449 update_check_scheduler.cc:74] Next update check in 7m42s Jan 16 09:03:41.439202 jq[1450]: true Jan 16 09:03:41.446197 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 09:03:41.446492 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 09:03:41.460593 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 09:03:41.477435 systemd[1]: Started update-engine.service - Update Engine. Jan 16 09:03:41.480650 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 09:03:41.486265 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 09:03:41.499115 tar[1453]: linux-amd64/helm Jan 16 09:03:41.499799 jq[1467]: true Jan 16 09:03:41.507717 extend-filesystems[1443]: Found loop4 Jan 16 09:03:41.507717 extend-filesystems[1443]: Found loop5 Jan 16 09:03:41.507717 extend-filesystems[1443]: Found loop6 Jan 16 09:03:41.507717 extend-filesystems[1443]: Found loop7 Jan 16 09:03:41.507717 extend-filesystems[1443]: Found vda Jan 16 09:03:41.507717 extend-filesystems[1443]: Found vda1 Jan 16 09:03:41.507717 extend-filesystems[1443]: Found vda2 Jan 16 09:03:41.507717 extend-filesystems[1443]: Found vda3 Jan 16 09:03:41.507717 extend-filesystems[1443]: Found usr Jan 16 09:03:41.507717 extend-filesystems[1443]: Found vda4 Jan 16 09:03:41.507717 extend-filesystems[1443]: Found vda6 Jan 16 09:03:41.507717 extend-filesystems[1443]: Found vda7 Jan 16 09:03:41.507717 extend-filesystems[1443]: Found vda9 Jan 16 09:03:41.610328 extend-filesystems[1443]: Checking size of /dev/vda9 Jan 16 09:03:41.595970 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 09:03:41.596805 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 09:03:41.636948 extend-filesystems[1443]: Resized partition /dev/vda9 Jan 16 09:03:41.656541 extend-filesystems[1486]: resize2fs 1.47.1 (20-May-2024) Jan 16 09:03:41.667304 systemd-logind[1448]: New seat seat0. Jan 16 09:03:41.671810 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Jan 16 09:03:41.671842 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 16 09:03:41.673797 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 16 09:03:41.674441 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 09:03:41.687049 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1373) Jan 16 09:03:41.912578 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 09:03:41.925643 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 09:03:41.942287 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Jan 16 09:03:41.946520 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 09:03:41.967704 systemd[1]: Starting sshkeys.service... Jan 16 09:03:41.991167 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 16 09:03:42.008282 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
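The kernel line above shows ext4 growing /dev/vda9 online from 553472 to 15121403 blocks (resize2fs 1.47.1, 4 KiB blocks), which extend-filesystems.service confirms a few entries later. The block counts convert to a root filesystem going from about 2.1 GiB to the droplet's full disk:

    BLOCK = 4096  # ext4 block size, per the "(4k) blocks" wording in the log
    before, after = 553472, 15121403
    print(f"before: {before * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {after * BLOCK / 2**30:.2f} GiB")   # ~57.68 GiB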
Jan 16 09:03:42.025409 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 09:03:42.038308 extend-filesystems[1486]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 16 09:03:42.038308 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 16 09:03:42.038308 extend-filesystems[1486]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 16 09:03:42.042993 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Jan 16 09:03:42.042993 extend-filesystems[1443]: Found vdb Jan 16 09:03:42.043940 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 09:03:42.044800 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 09:03:42.054859 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 09:03:42.079597 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 09:03:42.112935 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 09:03:42.113256 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 09:03:42.122719 coreos-metadata[1519]: Jan 16 09:03:42.122 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 16 09:03:42.129017 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 09:03:42.152550 coreos-metadata[1519]: Jan 16 09:03:42.135 INFO Fetch successful Jan 16 09:03:42.173098 unknown[1519]: wrote ssh authorized keys file for user: core Jan 16 09:03:42.214977 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 09:03:42.233833 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 09:03:42.246187 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Jan 16 09:03:42.248391 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 16 09:03:42.252619 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 09:03:42.257363 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 09:03:42.271208 systemd[1]: Finished sshkeys.service. Jan 16 09:03:42.320871 containerd[1462]: time="2025-01-16T09:03:42.320701426Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 16 09:03:42.336638 systemd-networkd[1367]: eth0: Gained IPv6LL Jan 16 09:03:42.341213 systemd-networkd[1367]: eth1: Gained IPv6LL Jan 16 09:03:42.342454 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. Jan 16 09:03:42.350342 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 09:03:42.353557 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 09:03:42.370500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:03:42.384619 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 09:03:42.440906 containerd[1462]: time="2025-01-16T09:03:42.440189103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:03:42.444972 containerd[1462]: time="2025-01-16T09:03:42.444362184Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:03:42.444972 containerd[1462]: time="2025-01-16T09:03:42.444423655Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 16 09:03:42.444972 containerd[1462]: time="2025-01-16T09:03:42.444450995Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 09:03:42.444972 containerd[1462]: time="2025-01-16T09:03:42.444670380Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 09:03:42.444972 containerd[1462]: time="2025-01-16T09:03:42.444694801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 16 09:03:42.444972 containerd[1462]: time="2025-01-16T09:03:42.444766863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:03:42.444972 containerd[1462]: time="2025-01-16T09:03:42.444783872Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:03:42.445311 containerd[1462]: time="2025-01-16T09:03:42.445058475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:03:42.445311 containerd[1462]: time="2025-01-16T09:03:42.445100025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 16 09:03:42.445311 containerd[1462]: time="2025-01-16T09:03:42.445118812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:03:42.445311 containerd[1462]: time="2025-01-16T09:03:42.445189072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 16 09:03:42.445429 containerd[1462]: time="2025-01-16T09:03:42.445321498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:03:42.445702 containerd[1462]: time="2025-01-16T09:03:42.445650112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 16 09:03:42.446518 containerd[1462]: time="2025-01-16T09:03:42.445939965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 09:03:42.446518 containerd[1462]: time="2025-01-16T09:03:42.446009249Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 09:03:42.446518 containerd[1462]: time="2025-01-16T09:03:42.446156856Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 16 09:03:42.446518 containerd[1462]: time="2025-01-16T09:03:42.446217550Z" level=info msg="metadata content store policy set" policy=shared Jan 16 09:03:42.455669 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 09:03:42.466587 containerd[1462]: time="2025-01-16T09:03:42.465731744Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 16 09:03:42.466587 containerd[1462]: time="2025-01-16T09:03:42.465842802Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 16 09:03:42.466587 containerd[1462]: time="2025-01-16T09:03:42.465871805Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 09:03:42.466587 containerd[1462]: time="2025-01-16T09:03:42.465901719Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 09:03:42.466587 containerd[1462]: time="2025-01-16T09:03:42.466029259Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 16 09:03:42.466587 containerd[1462]: time="2025-01-16T09:03:42.466332169Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 09:03:42.469677 containerd[1462]: time="2025-01-16T09:03:42.469292279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 16 09:03:42.469677 containerd[1462]: time="2025-01-16T09:03:42.469642576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 09:03:42.469677 containerd[1462]: time="2025-01-16T09:03:42.469677203Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 09:03:42.469855 containerd[1462]: time="2025-01-16T09:03:42.469707351Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 09:03:42.469855 containerd[1462]: time="2025-01-16T09:03:42.469786650Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 16 09:03:42.469855 containerd[1462]: time="2025-01-16T09:03:42.469817065Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 16 09:03:42.469855 containerd[1462]: time="2025-01-16T09:03:42.469844732Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 09:03:42.469961 containerd[1462]: time="2025-01-16T09:03:42.469878455Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 16 09:03:42.469961 containerd[1462]: time="2025-01-16T09:03:42.469909184Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 16 09:03:42.470049 containerd[1462]: time="2025-01-16T09:03:42.469951097Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 09:03:42.470049 containerd[1462]: time="2025-01-16T09:03:42.469980087Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 16 09:03:42.470049 containerd[1462]: time="2025-01-16T09:03:42.470006757Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 09:03:42.470049 containerd[1462]: time="2025-01-16T09:03:42.470041649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.470049 containerd[1462]: time="2025-01-16T09:03:42.470061518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.470049 containerd[1462]: time="2025-01-16T09:03:42.470078684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.470049 containerd[1462]: time="2025-01-16T09:03:42.470096895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.470378 containerd[1462]: time="2025-01-16T09:03:42.470113523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472275087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472348953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472382691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472417089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472465585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472524221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472557725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472583866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472611920Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472649276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472710179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472727956Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472906606Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 16 09:03:42.473031 containerd[1462]: time="2025-01-16T09:03:42.472944841Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 09:03:42.473926 containerd[1462]: time="2025-01-16T09:03:42.472967664Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 09:03:42.480248 containerd[1462]: time="2025-01-16T09:03:42.476014115Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 09:03:42.480248 containerd[1462]: time="2025-01-16T09:03:42.476064976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.480248 containerd[1462]: time="2025-01-16T09:03:42.476091938Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 09:03:42.480248 containerd[1462]: time="2025-01-16T09:03:42.476110532Z" level=info msg="NRI interface is disabled by configuration." Jan 16 09:03:42.480248 containerd[1462]: time="2025-01-16T09:03:42.476216851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.478069892Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.478260138Z" level=info msg="Connect containerd service" Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.478320423Z" level=info msg="using legacy CRI server" Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.478329955Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.478548759Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.479713981Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.480085503Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.480222115Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.480306964Z" level=info msg="Start subscribing containerd event" Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.480347189Z" level=info msg="Start recovering state" Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.480423331Z" level=info msg="Start event monitor" Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.480450182Z" level=info msg="Start snapshots syncer" Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.480465073Z" level=info msg="Start cni network conf syncer for default" Jan 16 09:03:42.480535 containerd[1462]: time="2025-01-16T09:03:42.480477469Z" level=info msg="Start streaming server" Jan 16 09:03:42.480718 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 09:03:42.488604 containerd[1462]: time="2025-01-16T09:03:42.488178227Z" level=info msg="containerd successfully booted in 0.170236s" Jan 16 09:03:42.600915 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 09:03:42.617537 systemd[1]: Started sshd@0-64.227.96.98:22-139.178.68.195:33828.service - OpenSSH per-connection server daemon (139.178.68.195:33828). Jan 16 09:03:42.803408 sshd[1554]: Accepted publickey for core from 139.178.68.195 port 33828 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:03:42.809387 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:03:42.832089 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 09:03:42.839783 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 09:03:42.848630 systemd-logind[1448]: New session 1 of user core. Jan 16 09:03:42.888425 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
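The long plugin-loading block above is containerd probing and skipping snapshotters one by one: aufs because modprobe fails, btrfs and zfs because their root paths sit on ext4, devmapper because nothing configured it. A rough sketch of the btrfs-style check (find which mount a path lives on, compare filesystem types) using /proc/mounts; containerd's real probe differs in detail, so treat this as the shape of the logic only:

    def fs_type(path):
        # Longest-prefix match of path against /proc/mounts mount points;
        # cruder than a real statfs() but enough to show the decision.
        best_mnt, best_type = "", "unknown"
        with open("/proc/mounts") as mounts:
            for entry in mounts:
                _, mnt, fstype, *_ = entry.split()
                if path.startswith(mnt) and len(mnt) > len(best_mnt):
                    best_mnt, best_type = mnt, fstype
        return best_type

    root = "/var/lib/containerd/io.containerd.snapshotter.v1.btrfs"
    found = fs_type(root)
    if found != "btrfs":
        print(f"skip plugin: {root} is on {found}, not btrfs")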
Jan 16 09:03:42.899760 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 09:03:42.924256 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 09:03:43.053370 tar[1453]: linux-amd64/LICENSE Jan 16 09:03:43.053370 tar[1453]: linux-amd64/README.md Jan 16 09:03:43.093485 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 09:03:43.134156 systemd[1558]: Queued start job for default target default.target. Jan 16 09:03:43.139921 systemd[1558]: Created slice app.slice - User Application Slice. Jan 16 09:03:43.139978 systemd[1558]: Reached target paths.target - Paths. Jan 16 09:03:43.140030 systemd[1558]: Reached target timers.target - Timers. Jan 16 09:03:43.144359 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 09:03:43.164413 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 09:03:43.166369 systemd[1558]: Reached target sockets.target - Sockets. Jan 16 09:03:43.166406 systemd[1558]: Reached target basic.target - Basic System. Jan 16 09:03:43.166479 systemd[1558]: Reached target default.target - Main User Target. Jan 16 09:03:43.166521 systemd[1558]: Startup finished in 224ms. Jan 16 09:03:43.166713 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 09:03:43.178472 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 09:03:43.270268 systemd[1]: Started sshd@1-64.227.96.98:22-139.178.68.195:33832.service - OpenSSH per-connection server daemon (139.178.68.195:33832). Jan 16 09:03:43.373729 sshd[1572]: Accepted publickey for core from 139.178.68.195 port 33832 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:03:43.376696 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:03:43.386920 systemd-logind[1448]: New session 2 of user core. Jan 16 09:03:43.394623 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 09:03:43.472446 sshd[1572]: pam_unix(sshd:session): session closed for user core Jan 16 09:03:43.486679 systemd[1]: sshd@1-64.227.96.98:22-139.178.68.195:33832.service: Deactivated successfully. Jan 16 09:03:43.490558 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 09:03:43.495667 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Jan 16 09:03:43.503281 systemd[1]: Started sshd@2-64.227.96.98:22-139.178.68.195:33834.service - OpenSSH per-connection server daemon (139.178.68.195:33834). Jan 16 09:03:43.513616 systemd-logind[1448]: Removed session 2. Jan 16 09:03:43.589459 sshd[1579]: Accepted publickey for core from 139.178.68.195 port 33834 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:03:43.594386 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:03:43.604152 systemd-logind[1448]: New session 3 of user core. Jan 16 09:03:43.609593 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 09:03:43.694534 sshd[1579]: pam_unix(sshd:session): session closed for user core Jan 16 09:03:43.700645 systemd[1]: sshd@2-64.227.96.98:22-139.178.68.195:33834.service: Deactivated successfully. Jan 16 09:03:43.700930 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 16 09:03:43.704493 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 09:03:43.709806 systemd-logind[1448]: Removed session 3. 
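The "Accepted publickey ... SHA256:fWXA..." lines identify the client key by an OpenSSH-style fingerprint: SHA-256 over the raw key blob, base64-encoded with the padding stripped. Recomputing one from an authorized_keys entry takes a few lines:

    import base64, hashlib

    def openssh_fingerprint(pubkey_line):
        # authorized_keys format: "<type> <base64 blob> [comment]"
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # usage (any valid key line; this host's actual key is not shown here):
    # openssh_fingerprint("ssh-ed25519 AAAAC3NzaC1... core@host")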
Jan 16 09:03:44.097465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:03:44.101583 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 09:03:44.104664 systemd[1]: Startup finished in 1.406s (kernel) + 6.402s (initrd) + 6.990s (userspace) = 14.799s. Jan 16 09:03:44.122385 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 09:03:45.216246 kubelet[1590]: E0116 09:03:45.216096 1590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 09:03:45.221791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 09:03:45.222046 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 09:03:45.222547 systemd[1]: kubelet.service: Consumed 1.798s CPU time. Jan 16 09:03:53.707417 systemd[1]: Started sshd@3-64.227.96.98:22-139.178.68.195:47518.service - OpenSSH per-connection server daemon (139.178.68.195:47518). Jan 16 09:03:53.771061 sshd[1604]: Accepted publickey for core from 139.178.68.195 port 47518 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:03:53.773271 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:03:53.780401 systemd-logind[1448]: New session 4 of user core. Jan 16 09:03:53.786503 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 09:03:53.849789 sshd[1604]: pam_unix(sshd:session): session closed for user core Jan 16 09:03:53.864382 systemd[1]: sshd@3-64.227.96.98:22-139.178.68.195:47518.service: Deactivated successfully. Jan 16 09:03:53.866451 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 09:03:53.867375 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 16 09:03:53.874771 systemd[1]: Started sshd@4-64.227.96.98:22-139.178.68.195:47534.service - OpenSSH per-connection server daemon (139.178.68.195:47534). Jan 16 09:03:53.876501 systemd-logind[1448]: Removed session 4. Jan 16 09:03:53.929597 sshd[1611]: Accepted publickey for core from 139.178.68.195 port 47534 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:03:53.931265 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:03:53.936803 systemd-logind[1448]: New session 5 of user core. Jan 16 09:03:53.948445 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 09:03:54.007400 sshd[1611]: pam_unix(sshd:session): session closed for user core Jan 16 09:03:54.019495 systemd[1]: sshd@4-64.227.96.98:22-139.178.68.195:47534.service: Deactivated successfully. Jan 16 09:03:54.021505 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 09:03:54.023306 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 16 09:03:54.028653 systemd[1]: Started sshd@5-64.227.96.98:22-139.178.68.195:47540.service - OpenSSH per-connection server daemon (139.178.68.195:47540). Jan 16 09:03:54.030581 systemd-logind[1448]: Removed session 5. 
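The kubelet crash above is the stock first-boot failure on a node that has not been bootstrapped: the unit starts before any /var/lib/kubelet/config.yaml exists, exits 1, and, as later entries show, systemd keeps rescheduling it until a real config appears. Purely to illustrate what the loader is looking for, a stub writer; the kind/apiVersion header is the standard KubeletConfiguration one, but on a real node this file comes from cluster bootstrap, not a hand-written stub:

    import os

    # Illustrative only: kubelet wants a KubeletConfiguration document at
    # this exact path; a bare header like this merely satisfies the loader.
    STUB = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
    )
    path = "/var/lib/kubelet/config.yaml"
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(STUB)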
Jan 16 09:03:54.087676 sshd[1618]: Accepted publickey for core from 139.178.68.195 port 47540 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:03:54.089903 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:03:54.096919 systemd-logind[1448]: New session 6 of user core. Jan 16 09:03:54.103518 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 09:03:54.169438 sshd[1618]: pam_unix(sshd:session): session closed for user core Jan 16 09:03:54.180348 systemd[1]: sshd@5-64.227.96.98:22-139.178.68.195:47540.service: Deactivated successfully. Jan 16 09:03:54.182379 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 09:03:54.184594 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 16 09:03:54.190756 systemd[1]: Started sshd@6-64.227.96.98:22-139.178.68.195:55022.service - OpenSSH per-connection server daemon (139.178.68.195:55022). Jan 16 09:03:54.192760 systemd-logind[1448]: Removed session 6. Jan 16 09:03:54.235713 sshd[1625]: Accepted publickey for core from 139.178.68.195 port 55022 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:03:54.238594 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:03:54.245767 systemd-logind[1448]: New session 7 of user core. Jan 16 09:03:54.258105 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 09:03:54.328694 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 09:03:54.329619 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:03:54.343031 sudo[1628]: pam_unix(sudo:session): session closed for user root Jan 16 09:03:54.347046 sshd[1625]: pam_unix(sshd:session): session closed for user core Jan 16 09:03:54.365248 systemd[1]: sshd@6-64.227.96.98:22-139.178.68.195:55022.service: Deactivated successfully. Jan 16 09:03:54.367739 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 09:03:54.370552 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jan 16 09:03:54.379602 systemd[1]: Started sshd@7-64.227.96.98:22-139.178.68.195:55036.service - OpenSSH per-connection server daemon (139.178.68.195:55036). Jan 16 09:03:54.381292 systemd-logind[1448]: Removed session 7. Jan 16 09:03:54.425824 sshd[1633]: Accepted publickey for core from 139.178.68.195 port 55036 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:03:54.427834 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:03:54.433914 systemd-logind[1448]: New session 8 of user core. Jan 16 09:03:54.440548 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 16 09:03:54.504354 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 09:03:54.504777 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:03:54.510304 sudo[1637]: pam_unix(sudo:session): session closed for user root Jan 16 09:03:54.518504 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 16 09:03:54.518909 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:03:54.542762 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
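The sudo entries above all share a fixed "user : KEY=value ; KEY=value ; ..." layout (PWD, USER, COMMAND). Splitting on the literal separators recovers the fields, even when COMMAND itself contains spaces:

    def parse_sudo(msg):
        user, _, rest = msg.partition(" : ")
        fields = dict(kv.split("=", 1) for kv in rest.split(" ; "))
        return user, fields

    print(parse_sudo(
        "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
    ))
    # -> ('core', {'PWD': '/home/core', 'USER': 'root',
    #              'COMMAND': '/usr/sbin/setenforce 1'})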
Jan 16 09:03:54.544900 auditctl[1640]: No rules Jan 16 09:03:54.545743 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 09:03:54.546283 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 16 09:03:54.549567 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 09:03:54.591822 augenrules[1658]: No rules Jan 16 09:03:54.593686 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 09:03:54.595284 sudo[1636]: pam_unix(sudo:session): session closed for user root Jan 16 09:03:54.599507 sshd[1633]: pam_unix(sshd:session): session closed for user core Jan 16 09:03:54.614721 systemd[1]: sshd@7-64.227.96.98:22-139.178.68.195:55036.service: Deactivated successfully. Jan 16 09:03:54.616645 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 09:03:54.617553 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Jan 16 09:03:54.622656 systemd[1]: Started sshd@8-64.227.96.98:22-139.178.68.195:55040.service - OpenSSH per-connection server daemon (139.178.68.195:55040). Jan 16 09:03:54.624770 systemd-logind[1448]: Removed session 8. Jan 16 09:03:54.682503 sshd[1666]: Accepted publickey for core from 139.178.68.195 port 55040 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:03:54.684491 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:03:54.690575 systemd-logind[1448]: New session 9 of user core. Jan 16 09:03:54.693429 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 16 09:03:54.752665 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 09:03:54.753005 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 09:03:55.257482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 09:03:55.264623 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 16 09:03:55.266467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:03:55.269619 (dockerd)[1687]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 09:03:55.452571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:03:55.457557 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 09:03:55.546033 kubelet[1696]: E0116 09:03:55.545813 1696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 09:03:55.553115 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 09:03:55.553938 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 09:03:55.884034 dockerd[1687]: time="2025-01-16T09:03:55.883948781Z" level=info msg="Starting up" Jan 16 09:03:56.077852 dockerd[1687]: time="2025-01-16T09:03:56.077791086Z" level=info msg="Loading containers: start." Jan 16 09:03:56.237180 kernel: Initializing XFRM netlink socket Jan 16 09:03:56.270976 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. 
Jan 16 09:03:56.344309 systemd-networkd[1367]: docker0: Link UP Jan 16 09:03:56.368646 dockerd[1687]: time="2025-01-16T09:03:56.368577812Z" level=info msg="Loading containers: done." Jan 16 09:03:56.392240 dockerd[1687]: time="2025-01-16T09:03:56.392076780Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 09:03:56.392493 dockerd[1687]: time="2025-01-16T09:03:56.392253200Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 16 09:03:56.392493 dockerd[1687]: time="2025-01-16T09:03:56.392387108Z" level=info msg="Daemon has completed initialization" Jan 16 09:03:56.442112 dockerd[1687]: time="2025-01-16T09:03:56.441251978Z" level=info msg="API listen on /run/docker.sock" Jan 16 09:03:56.441591 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 09:03:57.265176 systemd-timesyncd[1352]: Contacted time server 104.234.61.117:123 (2.flatcar.pool.ntp.org). Jan 16 09:03:57.265266 systemd-timesyncd[1352]: Initial clock synchronization to Thu 2025-01-16 09:03:57.264731 UTC. Jan 16 09:03:57.265875 systemd-resolved[1332]: Clock change detected. Flushing caches. Jan 16 09:03:58.183498 containerd[1462]: time="2025-01-16T09:03:58.183346015Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 16 09:03:58.754751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2875121174.mount: Deactivated successfully. Jan 16 09:04:00.417126 containerd[1462]: time="2025-01-16T09:04:00.416536440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:00.418279 containerd[1462]: time="2025-01-16T09:04:00.418232610Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 16 09:04:00.419171 containerd[1462]: time="2025-01-16T09:04:00.419089909Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:00.425050 containerd[1462]: time="2025-01-16T09:04:00.423339589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:00.425651 containerd[1462]: time="2025-01-16T09:04:00.425574189Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.242161287s" Jan 16 09:04:00.425821 containerd[1462]: time="2025-01-16T09:04:00.425797781Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 16 09:04:00.459260 containerd[1462]: time="2025-01-16T09:04:00.459219313Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 16 09:04:02.809432 containerd[1462]: time="2025-01-16T09:04:02.809356700Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:02.811460 containerd[1462]: time="2025-01-16T09:04:02.810889243Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 16 09:04:02.812608 containerd[1462]: time="2025-01-16T09:04:02.812550754Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:02.818543 containerd[1462]: time="2025-01-16T09:04:02.818440589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:02.820634 containerd[1462]: time="2025-01-16T09:04:02.820370698Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.36087757s" Jan 16 09:04:02.820634 containerd[1462]: time="2025-01-16T09:04:02.820441456Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 16 09:04:02.867076 containerd[1462]: time="2025-01-16T09:04:02.866873528Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 16 09:04:03.264797 systemd-resolved[1332]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Jan 16 09:04:04.339794 containerd[1462]: time="2025-01-16T09:04:04.339698660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:04.341387 containerd[1462]: time="2025-01-16T09:04:04.341322299Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 16 09:04:04.342859 containerd[1462]: time="2025-01-16T09:04:04.342776505Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:04.349247 containerd[1462]: time="2025-01-16T09:04:04.349117637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:04.351517 containerd[1462]: time="2025-01-16T09:04:04.351140682Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.483873551s" Jan 16 09:04:04.351517 containerd[1462]: time="2025-01-16T09:04:04.351247807Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 16 09:04:04.412523 containerd[1462]: time="2025-01-16T09:04:04.412461617Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 16 09:04:05.828223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131946899.mount: Deactivated successfully. Jan 16 09:04:06.230520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 09:04:06.241158 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:04:06.366126 systemd-resolved[1332]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Jan 16 09:04:06.469392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:04:06.488622 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 09:04:06.599267 kubelet[1943]: E0116 09:04:06.599060 1943 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 09:04:06.604919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 09:04:06.605169 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 16 09:04:06.888155 containerd[1462]: time="2025-01-16T09:04:06.887883778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:06.890841 containerd[1462]: time="2025-01-16T09:04:06.890405628Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 16 09:04:06.892759 containerd[1462]: time="2025-01-16T09:04:06.892325135Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:06.896494 containerd[1462]: time="2025-01-16T09:04:06.896371123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:06.897857 containerd[1462]: time="2025-01-16T09:04:06.897618876Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.485091349s" Jan 16 09:04:06.897857 containerd[1462]: time="2025-01-16T09:04:06.897679733Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 16 09:04:06.965620 containerd[1462]: time="2025-01-16T09:04:06.964541350Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 16 09:04:07.697362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927842924.mount: Deactivated successfully. 
Jan 16 09:04:09.162639 containerd[1462]: time="2025-01-16T09:04:09.162517432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:09.164762 containerd[1462]: time="2025-01-16T09:04:09.164672167Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 16 09:04:09.165837 containerd[1462]: time="2025-01-16T09:04:09.165684546Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:09.173229 containerd[1462]: time="2025-01-16T09:04:09.173140772Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.20727693s" Jan 16 09:04:09.173229 containerd[1462]: time="2025-01-16T09:04:09.173214027Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 16 09:04:09.173597 containerd[1462]: time="2025-01-16T09:04:09.173146798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:09.218062 containerd[1462]: time="2025-01-16T09:04:09.217960808Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 16 09:04:09.616076 systemd-resolved[1332]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 16 09:04:09.749896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273457526.mount: Deactivated successfully. 
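Each pull above logs a transferred byte count ("bytes read=N") and a wall-clock duration. Dividing the two gives a rough effective pull rate per image; rough because "bytes read" is the compressed transfer while the quoted size is unpacked, and the duration includes unpacking:

    pulls = {  # (bytes read, seconds), copied from the entries above
        "kube-apiserver:v1.30.9": (32677012, 2.242161287),
        "kube-controller-manager:v1.30.9": (29605745, 2.36087757),
        "kube-scheduler:v1.30.9": (17783064, 1.483873551),
        "kube-proxy:v1.30.9": (29058337, 2.485091349),
        "coredns:v1.11.1": (18185761, 2.20727693),
    }
    for name, (nbytes, secs) in pulls.items():
        print(f"{name}: {nbytes / secs / 2**20:5.1f} MiB/s")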
Jan 16 09:04:09.760102 containerd[1462]: time="2025-01-16T09:04:09.759412142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:09.761049 containerd[1462]: time="2025-01-16T09:04:09.760870981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 16 09:04:09.762228 containerd[1462]: time="2025-01-16T09:04:09.762126243Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:09.765179 containerd[1462]: time="2025-01-16T09:04:09.765120745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:09.767007 containerd[1462]: time="2025-01-16T09:04:09.766437134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 548.420553ms" Jan 16 09:04:09.767007 containerd[1462]: time="2025-01-16T09:04:09.766511695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 16 09:04:09.799494 containerd[1462]: time="2025-01-16T09:04:09.799400912Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 16 09:04:10.394662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717247768.mount: Deactivated successfully. Jan 16 09:04:13.483396 containerd[1462]: time="2025-01-16T09:04:13.483328639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:13.485213 containerd[1462]: time="2025-01-16T09:04:13.485141919Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 16 09:04:13.485423 containerd[1462]: time="2025-01-16T09:04:13.485387233Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:13.489262 containerd[1462]: time="2025-01-16T09:04:13.489210698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:13.490704 containerd[1462]: time="2025-01-16T09:04:13.490660327Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.691200636s" Jan 16 09:04:13.490704 containerd[1462]: time="2025-01-16T09:04:13.490704808Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 16 09:04:16.730728 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
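kubelet.service has now cycled through restart counters 1 to 3. Pairing each "Main process exited" stamp with the next "Scheduled restart job" stamp shows a steady gap of roughly ten seconds, which suggests a fixed RestartSec on the unit (the unit file is not shown here, so that is an inference; the second gap also absorbs the ~0.8 s clock step timesyncd applied around 09:03:57):

    from datetime import datetime

    FMT = "%H:%M:%S.%f"
    pairs = [  # (main process exited, next scheduled restart), from this log
        ("09:03:45.221791", "09:03:55.257482"),
        ("09:03:55.553115", "09:04:06.230520"),
        ("09:04:06.604919", "09:04:16.730728"),
    ]
    for died, resched in pairs:
        gap = datetime.strptime(resched, FMT) - datetime.strptime(died, FMT)
        print(f"exited {died} -> restart scheduled {resched} "
              f"(+{gap.total_seconds():.2f}s)")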
Jan 16 09:04:16.741208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:04:16.953312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:04:16.964664 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 09:04:17.029146 kubelet[2125]: E0116 09:04:17.028692 2125 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 09:04:17.032162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 09:04:17.032348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 09:04:17.836627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:04:17.848870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:04:17.889313 systemd[1]: Reloading requested from client PID 2139 ('systemctl') (unit session-9.scope)... Jan 16 09:04:17.889529 systemd[1]: Reloading... Jan 16 09:04:18.052273 zram_generator::config[2181]: No configuration found. Jan 16 09:04:18.184194 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:04:18.270159 systemd[1]: Reloading finished in 380 ms. Jan 16 09:04:18.334180 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 09:04:18.334547 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 09:04:18.335169 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:04:18.340588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:04:18.516290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:04:18.526712 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 09:04:18.589649 kubelet[2232]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 09:04:18.589649 kubelet[2232]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 09:04:18.589649 kubelet[2232]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
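From here on, kubelet's own messages (the kubelet[2232] entries below) carry klog's prefix: a severity letter, MMDD, time, PID, then the file:line of the emitting call. A small parser for that prefix, inferred from the lines in this log:

    import re

    # "I0116 09:04:19.137218 2232 server.go:1264] msg"; klog may pad the
    # PID with extra spaces, hence the ' +'.
    KLOG = re.compile(
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +"
        r"(?P<pid>\d+) (?P<loc>[\w.]+:\d+)\] (?P<msg>.*)"
    )

    m = KLOG.match('E0116 09:04:19.103326 2232 certificate_manager.go:562] x')
    print(m["sev"], m["loc"], m["msg"])   # E certificate_manager.go:562 x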
Jan 16 09:04:18.590124 kubelet[2232]: I0116 09:04:18.589696 2232 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 09:04:19.074001 kubelet[2232]: I0116 09:04:19.073950 2232 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 16 09:04:19.074775 kubelet[2232]: I0116 09:04:19.074300 2232 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 09:04:19.074775 kubelet[2232]: I0116 09:04:19.074613 2232 server.go:927] "Client rotation is on, will bootstrap in background" Jan 16 09:04:19.098389 kubelet[2232]: I0116 09:04:19.098050 2232 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 09:04:19.103389 kubelet[2232]: E0116 09:04:19.103326 2232 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://64.227.96.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:19.118095 kubelet[2232]: I0116 09:04:19.117992 2232 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 09:04:19.123298 kubelet[2232]: I0116 09:04:19.123186 2232 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 09:04:19.123516 kubelet[2232]: I0116 09:04:19.123276 2232 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-a78886c5b6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 16 09:04:19.124146 kubelet[2232]: I0116 09:04:19.124097 2232 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 09:04:19.124146 kubelet[2232]: I0116 09:04:19.124127 2232 container_manager_linux.go:301] "Creating device plugin manager" Jan 16 09:04:19.126368 kubelet[2232]: I0116 09:04:19.125991 2232 state_mem.go:36] "Initialized new in-memory 
state store" Jan 16 09:04:19.128887 kubelet[2232]: W0116 09:04:19.128803 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.96.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-a78886c5b6&limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:19.129161 kubelet[2232]: E0116 09:04:19.129090 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.227.96.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-a78886c5b6&limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:19.130360 kubelet[2232]: I0116 09:04:19.130296 2232 kubelet.go:400] "Attempting to sync node with API server" Jan 16 09:04:19.130360 kubelet[2232]: I0116 09:04:19.130372 2232 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 09:04:19.132394 kubelet[2232]: I0116 09:04:19.130412 2232 kubelet.go:312] "Adding apiserver pod source" Jan 16 09:04:19.132394 kubelet[2232]: I0116 09:04:19.130425 2232 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 09:04:19.134110 kubelet[2232]: W0116 09:04:19.133711 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.96.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:19.134110 kubelet[2232]: E0116 09:04:19.133763 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.227.96.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:19.134370 kubelet[2232]: I0116 09:04:19.134354 2232 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 09:04:19.137053 kubelet[2232]: I0116 09:04:19.136266 2232 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 09:04:19.137053 kubelet[2232]: W0116 09:04:19.136351 2232 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 16 09:04:19.137271 kubelet[2232]: I0116 09:04:19.137252 2232 server.go:1264] "Started kubelet" Jan 16 09:04:19.155085 kubelet[2232]: E0116 09:04:19.154769 2232 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.96.98:6443/api/v1/namespaces/default/events\": dial tcp 64.227.96.98:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-a78886c5b6.181b20e7503dc7e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-a78886c5b6,UID:ci-4081.3.0-a-a78886c5b6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-a78886c5b6,},FirstTimestamp:2025-01-16 09:04:19.137218533 +0000 UTC m=+0.605295121,LastTimestamp:2025-01-16 09:04:19.137218533 +0000 UTC m=+0.605295121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-a78886c5b6,}" Jan 16 09:04:19.155458 kubelet[2232]: I0116 09:04:19.155163 2232 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 09:04:19.156004 kubelet[2232]: I0116 09:04:19.155696 2232 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 09:04:19.156004 kubelet[2232]: I0116 09:04:19.155782 2232 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 09:04:19.160152 kubelet[2232]: I0116 09:04:19.159470 2232 server.go:455] "Adding debug handlers to kubelet server" Jan 16 09:04:19.165374 kubelet[2232]: I0116 09:04:19.165341 2232 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 09:04:19.169520 kubelet[2232]: I0116 09:04:19.169448 2232 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 16 09:04:19.171089 kubelet[2232]: I0116 09:04:19.169861 2232 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 16 09:04:19.171089 kubelet[2232]: I0116 09:04:19.170308 2232 reconciler.go:26] "Reconciler: start to sync state" Jan 16 09:04:19.172692 kubelet[2232]: W0116 09:04:19.171895 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.96.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:19.172815 kubelet[2232]: E0116 09:04:19.172711 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.227.96.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:19.173901 kubelet[2232]: E0116 09:04:19.173852 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.96.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-a78886c5b6?timeout=10s\": dial tcp 64.227.96.98:6443: connect: connection refused" interval="200ms" Jan 16 09:04:19.174568 kubelet[2232]: E0116 09:04:19.174535 2232 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 09:04:19.176507 kubelet[2232]: I0116 09:04:19.176477 2232 factory.go:221] Registration of the systemd container factory successfully Jan 16 09:04:19.177645 kubelet[2232]: I0116 09:04:19.176618 2232 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 09:04:19.179065 kubelet[2232]: I0116 09:04:19.178871 2232 factory.go:221] Registration of the containerd container factory successfully Jan 16 09:04:19.201502 kubelet[2232]: I0116 09:04:19.200866 2232 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 09:04:19.202735 kubelet[2232]: I0116 09:04:19.202287 2232 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 09:04:19.202735 kubelet[2232]: I0116 09:04:19.202329 2232 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 09:04:19.202735 kubelet[2232]: I0116 09:04:19.202363 2232 kubelet.go:2337] "Starting kubelet main sync loop" Jan 16 09:04:19.202735 kubelet[2232]: E0116 09:04:19.202422 2232 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 09:04:19.210964 kubelet[2232]: W0116 09:04:19.210746 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.96.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:19.210964 kubelet[2232]: E0116 09:04:19.210830 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.227.96.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:19.217623 kubelet[2232]: I0116 09:04:19.217582 2232 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 09:04:19.217623 kubelet[2232]: I0116 09:04:19.217606 2232 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 09:04:19.217623 kubelet[2232]: I0116 09:04:19.217634 2232 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:04:19.220235 kubelet[2232]: I0116 09:04:19.220168 2232 policy_none.go:49] "None policy: Start" Jan 16 09:04:19.221572 kubelet[2232]: I0116 09:04:19.221537 2232 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 09:04:19.221572 kubelet[2232]: I0116 09:04:19.221581 2232 state_mem.go:35] "Initializing new in-memory state store" Jan 16 09:04:19.230096 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 16 09:04:19.246077 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 09:04:19.250619 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
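
The mixed factory results above are the kubelet's embedded cAdvisor probing every runtime it knows about: on this containerd host the missing /var/run/crio/crio.sock is expected and harmless, since only one CRI socket needs to exist. A quick sanity check (sketch):

    # only the containerd socket should be present on this node
    ls -l /run/containerd/containerd.sock /var/run/crio/crio.sock 2>/dev/null
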
Jan 16 09:04:19.260599 kubelet[2232]: I0116 09:04:19.260523 2232 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 09:04:19.261079 kubelet[2232]: I0116 09:04:19.260892 2232 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 09:04:19.261079 kubelet[2232]: I0116 09:04:19.261064 2232 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 09:04:19.263682 kubelet[2232]: E0116 09:04:19.263646 2232 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-a78886c5b6\" not found" Jan 16 09:04:19.272100 kubelet[2232]: I0116 09:04:19.271535 2232 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.272100 kubelet[2232]: E0116 09:04:19.272054 2232 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.96.98:6443/api/v1/nodes\": dial tcp 64.227.96.98:6443: connect: connection refused" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.303688 kubelet[2232]: I0116 09:04:19.303557 2232 topology_manager.go:215] "Topology Admit Handler" podUID="d9694650b00aa22ece60e23ea6e9c863" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.305470 kubelet[2232]: I0116 09:04:19.304750 2232 topology_manager.go:215] "Topology Admit Handler" podUID="b374a73a67b1a20f5b78110d0eb68103" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.305626 kubelet[2232]: I0116 09:04:19.305604 2232 topology_manager.go:215] "Topology Admit Handler" podUID="568537c7936fb9ec330572993e451452" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.316374 systemd[1]: Created slice kubepods-burstable-podd9694650b00aa22ece60e23ea6e9c863.slice - libcontainer container kubepods-burstable-podd9694650b00aa22ece60e23ea6e9c863.slice. Jan 16 09:04:19.332420 systemd[1]: Created slice kubepods-burstable-podb374a73a67b1a20f5b78110d0eb68103.slice - libcontainer container kubepods-burstable-podb374a73a67b1a20f5b78110d0eb68103.slice. Jan 16 09:04:19.340846 systemd[1]: Created slice kubepods-burstable-pod568537c7936fb9ec330572993e451452.slice - libcontainer container kubepods-burstable-pod568537c7936fb9ec330572993e451452.slice. 
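
The three "Topology Admit Handler" entries above are the control-plane static pods read from the staticPodPath logged earlier, and each gets its own kubepods-burstable-pod<UID>.slice because the kubelet is using the systemd cgroup driver. On a kubeadm control-plane node the manifest directory typically looks like the listing below; etcd.yaml would also appear with locally stacked etcd, but this log admits no etcd pod, so an external etcd is a reasonable assumption here:

    ls /etc/kubernetes/manifests
    # typically: kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
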
Jan 16 09:04:19.371151 kubelet[2232]: I0116 09:04:19.370801 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9694650b00aa22ece60e23ea6e9c863-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-a78886c5b6\" (UID: \"d9694650b00aa22ece60e23ea6e9c863\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.375519 kubelet[2232]: E0116 09:04:19.375442 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.96.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-a78886c5b6?timeout=10s\": dial tcp 64.227.96.98:6443: connect: connection refused" interval="400ms" Jan 16 09:04:19.471337 kubelet[2232]: I0116 09:04:19.471145 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9694650b00aa22ece60e23ea6e9c863-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-a78886c5b6\" (UID: \"d9694650b00aa22ece60e23ea6e9c863\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.471337 kubelet[2232]: I0116 09:04:19.471205 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b374a73a67b1a20f5b78110d0eb68103-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-a78886c5b6\" (UID: \"b374a73a67b1a20f5b78110d0eb68103\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.471337 kubelet[2232]: I0116 09:04:19.471233 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9694650b00aa22ece60e23ea6e9c863-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-a78886c5b6\" (UID: \"d9694650b00aa22ece60e23ea6e9c863\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.471337 kubelet[2232]: I0116 09:04:19.471252 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b374a73a67b1a20f5b78110d0eb68103-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-a78886c5b6\" (UID: \"b374a73a67b1a20f5b78110d0eb68103\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.471337 kubelet[2232]: I0116 09:04:19.471268 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b374a73a67b1a20f5b78110d0eb68103-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-a78886c5b6\" (UID: \"b374a73a67b1a20f5b78110d0eb68103\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.471758 kubelet[2232]: I0116 09:04:19.471309 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b374a73a67b1a20f5b78110d0eb68103-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-a78886c5b6\" (UID: \"b374a73a67b1a20f5b78110d0eb68103\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.471758 kubelet[2232]: I0116 09:04:19.471338 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b374a73a67b1a20f5b78110d0eb68103-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-a78886c5b6\" (UID: \"b374a73a67b1a20f5b78110d0eb68103\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.471758 kubelet[2232]: I0116 09:04:19.471365 2232 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/568537c7936fb9ec330572993e451452-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-a78886c5b6\" (UID: \"568537c7936fb9ec330572993e451452\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.473997 kubelet[2232]: I0116 09:04:19.473963 2232 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.474452 kubelet[2232]: E0116 09:04:19.474378 2232 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.96.98:6443/api/v1/nodes\": dial tcp 64.227.96.98:6443: connect: connection refused" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.631177 kubelet[2232]: E0116 09:04:19.630717 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:19.632235 containerd[1462]: time="2025-01-16T09:04:19.632097111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-a78886c5b6,Uid:d9694650b00aa22ece60e23ea6e9c863,Namespace:kube-system,Attempt:0,}" Jan 16 09:04:19.638693 kubelet[2232]: E0116 09:04:19.638565 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:19.645054 kubelet[2232]: E0116 09:04:19.643819 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:19.649353 containerd[1462]: time="2025-01-16T09:04:19.648928253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-a78886c5b6,Uid:b374a73a67b1a20f5b78110d0eb68103,Namespace:kube-system,Attempt:0,}" Jan 16 09:04:19.649353 containerd[1462]: time="2025-01-16T09:04:19.649115071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-a78886c5b6,Uid:568537c7936fb9ec330572993e451452,Namespace:kube-system,Attempt:0,}" Jan 16 09:04:19.776572 kubelet[2232]: E0116 09:04:19.776514 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.96.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-a78886c5b6?timeout=10s\": dial tcp 64.227.96.98:6443: connect: connection refused" interval="800ms" Jan 16 09:04:19.876905 kubelet[2232]: I0116 09:04:19.876842 2232 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:19.877544 kubelet[2232]: E0116 09:04:19.877499 2232 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.96.98:6443/api/v1/nodes\": dial tcp 64.227.96.98:6443: connect: connection refused" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:20.087489 kubelet[2232]: W0116 09:04:20.087407 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://64.227.96.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:20.087489 kubelet[2232]: E0116 09:04:20.087459 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://64.227.96.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:20.129851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517783760.mount: Deactivated successfully. Jan 16 09:04:20.136171 containerd[1462]: time="2025-01-16T09:04:20.136108110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:04:20.137939 containerd[1462]: time="2025-01-16T09:04:20.137864557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 16 09:04:20.141952 containerd[1462]: time="2025-01-16T09:04:20.141719836Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:04:20.143439 containerd[1462]: time="2025-01-16T09:04:20.143153025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 09:04:20.143439 containerd[1462]: time="2025-01-16T09:04:20.143237665Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:04:20.146067 containerd[1462]: time="2025-01-16T09:04:20.145600064Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:04:20.149247 containerd[1462]: time="2025-01-16T09:04:20.148915463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 09:04:20.154404 containerd[1462]: time="2025-01-16T09:04:20.154334719Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 09:04:20.157054 containerd[1462]: time="2025-01-16T09:04:20.155478722Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 523.300404ms" Jan 16 09:04:20.159472 containerd[1462]: time="2025-01-16T09:04:20.159417807Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.363868ms" Jan 16 09:04:20.164280 containerd[1462]: time="2025-01-16T09:04:20.164222172Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" 
with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 515.022717ms" Jan 16 09:04:20.261752 kubelet[2232]: W0116 09:04:20.260558 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.96.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:20.261752 kubelet[2232]: E0116 09:04:20.260662 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://64.227.96.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:20.359573 containerd[1462]: time="2025-01-16T09:04:20.359285467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:04:20.359573 containerd[1462]: time="2025-01-16T09:04:20.359407978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:04:20.360188 containerd[1462]: time="2025-01-16T09:04:20.359435898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:20.362266 containerd[1462]: time="2025-01-16T09:04:20.361579120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:20.364651 containerd[1462]: time="2025-01-16T09:04:20.363639736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:04:20.365052 containerd[1462]: time="2025-01-16T09:04:20.364928175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:04:20.365271 containerd[1462]: time="2025-01-16T09:04:20.365181873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:20.365651 containerd[1462]: time="2025-01-16T09:04:20.365568722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:20.376740 containerd[1462]: time="2025-01-16T09:04:20.375275227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:04:20.376740 containerd[1462]: time="2025-01-16T09:04:20.375365713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:04:20.376740 containerd[1462]: time="2025-01-16T09:04:20.375393522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:20.376740 containerd[1462]: time="2025-01-16T09:04:20.375508180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:20.412316 systemd[1]: Started cri-containerd-99b9d35d5d7cea1c8f37c9b2dfe954a5b9df84cd14b3e8fb699ffeb331c8f215.scope - libcontainer container 99b9d35d5d7cea1c8f37c9b2dfe954a5b9df84cd14b3e8fb699ffeb331c8f215. Jan 16 09:04:20.424323 systemd[1]: Started cri-containerd-43cbfc29ead5ce8226554e08fe7e950ae80d73b3cb326ba290defb34e61f2a31.scope - libcontainer container 43cbfc29ead5ce8226554e08fe7e950ae80d73b3cb326ba290defb34e61f2a31. Jan 16 09:04:20.436682 systemd[1]: Started cri-containerd-79f8b0d52ec22f084cdddfadaac1a9abf555778948bca1507357b41bdf29bb6e.scope - libcontainer container 79f8b0d52ec22f084cdddfadaac1a9abf555778948bca1507357b41bdf29bb6e. Jan 16 09:04:20.514905 containerd[1462]: time="2025-01-16T09:04:20.514717946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-a78886c5b6,Uid:568537c7936fb9ec330572993e451452,Namespace:kube-system,Attempt:0,} returns sandbox id \"43cbfc29ead5ce8226554e08fe7e950ae80d73b3cb326ba290defb34e61f2a31\"" Jan 16 09:04:20.518012 kubelet[2232]: E0116 09:04:20.517044 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:20.528639 containerd[1462]: time="2025-01-16T09:04:20.528572369Z" level=info msg="CreateContainer within sandbox \"43cbfc29ead5ce8226554e08fe7e950ae80d73b3cb326ba290defb34e61f2a31\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 09:04:20.536928 containerd[1462]: time="2025-01-16T09:04:20.536878980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-a78886c5b6,Uid:b374a73a67b1a20f5b78110d0eb68103,Namespace:kube-system,Attempt:0,} returns sandbox id \"99b9d35d5d7cea1c8f37c9b2dfe954a5b9df84cd14b3e8fb699ffeb331c8f215\"" Jan 16 09:04:20.538436 kubelet[2232]: E0116 09:04:20.538383 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:20.542771 containerd[1462]: time="2025-01-16T09:04:20.542504173Z" level=info msg="CreateContainer within sandbox \"99b9d35d5d7cea1c8f37c9b2dfe954a5b9df84cd14b3e8fb699ffeb331c8f215\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 09:04:20.551790 containerd[1462]: time="2025-01-16T09:04:20.551725617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-a78886c5b6,Uid:d9694650b00aa22ece60e23ea6e9c863,Namespace:kube-system,Attempt:0,} returns sandbox id \"79f8b0d52ec22f084cdddfadaac1a9abf555778948bca1507357b41bdf29bb6e\"" Jan 16 09:04:20.554155 kubelet[2232]: E0116 09:04:20.553925 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:20.557317 containerd[1462]: time="2025-01-16T09:04:20.557224949Z" level=info msg="CreateContainer within sandbox \"79f8b0d52ec22f084cdddfadaac1a9abf555778948bca1507357b41bdf29bb6e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 09:04:20.565694 containerd[1462]: time="2025-01-16T09:04:20.565423872Z" level=info msg="CreateContainer within sandbox \"43cbfc29ead5ce8226554e08fe7e950ae80d73b3cb326ba290defb34e61f2a31\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"b67c1a6b19ec8afa945ec8818bd3b9459df79e0346f2575218f1e69044419b0e\"" Jan 16 09:04:20.566420 kubelet[2232]: W0116 09:04:20.566341 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.96.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:20.566420 kubelet[2232]: E0116 09:04:20.566388 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://64.227.96.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:20.572465 containerd[1462]: time="2025-01-16T09:04:20.572080001Z" level=info msg="StartContainer for \"b67c1a6b19ec8afa945ec8818bd3b9459df79e0346f2575218f1e69044419b0e\"" Jan 16 09:04:20.577587 kubelet[2232]: E0116 09:04:20.577528 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.96.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-a78886c5b6?timeout=10s\": dial tcp 64.227.96.98:6443: connect: connection refused" interval="1.6s" Jan 16 09:04:20.581238 containerd[1462]: time="2025-01-16T09:04:20.581192363Z" level=info msg="CreateContainer within sandbox \"99b9d35d5d7cea1c8f37c9b2dfe954a5b9df84cd14b3e8fb699ffeb331c8f215\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5aee876278b2a0679a78d61c587532329c781bef2302e0477e421ef86d72456f\"" Jan 16 09:04:20.582503 containerd[1462]: time="2025-01-16T09:04:20.582459170Z" level=info msg="StartContainer for \"5aee876278b2a0679a78d61c587532329c781bef2302e0477e421ef86d72456f\"" Jan 16 09:04:20.588332 containerd[1462]: time="2025-01-16T09:04:20.588173104Z" level=info msg="CreateContainer within sandbox \"79f8b0d52ec22f084cdddfadaac1a9abf555778948bca1507357b41bdf29bb6e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"47df1eaef187ea327b0b319210bf436d5b0e864c0edbc1ed1cdacc8ab4fd0f39\"" Jan 16 09:04:20.589654 containerd[1462]: time="2025-01-16T09:04:20.589508852Z" level=info msg="StartContainer for \"47df1eaef187ea327b0b319210bf436d5b0e864c0edbc1ed1cdacc8ab4fd0f39\"" Jan 16 09:04:20.619293 systemd[1]: Started cri-containerd-b67c1a6b19ec8afa945ec8818bd3b9459df79e0346f2575218f1e69044419b0e.scope - libcontainer container b67c1a6b19ec8afa945ec8818bd3b9459df79e0346f2575218f1e69044419b0e. Jan 16 09:04:20.648447 systemd[1]: Started cri-containerd-47df1eaef187ea327b0b319210bf436d5b0e864c0edbc1ed1cdacc8ab4fd0f39.scope - libcontainer container 47df1eaef187ea327b0b319210bf436d5b0e864c0edbc1ed1cdacc8ab4fd0f39. Jan 16 09:04:20.662165 systemd[1]: Started cri-containerd-5aee876278b2a0679a78d61c587532329c781bef2302e0477e421ef86d72456f.scope - libcontainer container 5aee876278b2a0679a78d61c587532329c781bef2302e0477e421ef86d72456f. 
Jan 16 09:04:20.690066 kubelet[2232]: W0116 09:04:20.689321 2232 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.96.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-a78886c5b6&limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:20.690066 kubelet[2232]: E0116 09:04:20.689424 2232 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://64.227.96.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-a78886c5b6&limit=500&resourceVersion=0": dial tcp 64.227.96.98:6443: connect: connection refused Jan 16 09:04:20.690066 kubelet[2232]: I0116 09:04:20.689944 2232 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:20.697413 kubelet[2232]: E0116 09:04:20.696816 2232 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://64.227.96.98:6443/api/v1/nodes\": dial tcp 64.227.96.98:6443: connect: connection refused" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:20.740052 containerd[1462]: time="2025-01-16T09:04:20.738439151Z" level=info msg="StartContainer for \"b67c1a6b19ec8afa945ec8818bd3b9459df79e0346f2575218f1e69044419b0e\" returns successfully" Jan 16 09:04:20.791075 containerd[1462]: time="2025-01-16T09:04:20.790345513Z" level=info msg="StartContainer for \"47df1eaef187ea327b0b319210bf436d5b0e864c0edbc1ed1cdacc8ab4fd0f39\" returns successfully" Jan 16 09:04:20.791075 containerd[1462]: time="2025-01-16T09:04:20.790449400Z" level=info msg="StartContainer for \"5aee876278b2a0679a78d61c587532329c781bef2302e0477e421ef86d72456f\" returns successfully" Jan 16 09:04:21.235076 kubelet[2232]: E0116 09:04:21.235007 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:21.241968 kubelet[2232]: E0116 09:04:21.241788 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:21.244777 kubelet[2232]: E0116 09:04:21.244619 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:22.248097 kubelet[2232]: E0116 09:04:22.247280 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:22.298942 kubelet[2232]: I0116 09:04:22.298866 2232 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:23.163799 kubelet[2232]: E0116 09:04:23.163755 2232 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-a78886c5b6\" not found" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:23.253734 kubelet[2232]: I0116 09:04:23.253687 2232 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:23.268901 kubelet[2232]: E0116 09:04:23.268842 2232 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-a78886c5b6\" not found" Jan 16 09:04:23.287995 kubelet[2232]: E0116 09:04:23.287874 2232 event.go:359] 
"Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-a78886c5b6.181b20e7503dc7e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-a78886c5b6,UID:ci-4081.3.0-a-a78886c5b6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-a78886c5b6,},FirstTimestamp:2025-01-16 09:04:19.137218533 +0000 UTC m=+0.605295121,LastTimestamp:2025-01-16 09:04:19.137218533 +0000 UTC m=+0.605295121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-a78886c5b6,}" Jan 16 09:04:24.136377 kubelet[2232]: I0116 09:04:24.136302 2232 apiserver.go:52] "Watching apiserver" Jan 16 09:04:24.170284 kubelet[2232]: I0116 09:04:24.170186 2232 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 16 09:04:25.473572 systemd[1]: Reloading requested from client PID 2503 ('systemctl') (unit session-9.scope)... Jan 16 09:04:25.473599 systemd[1]: Reloading... Jan 16 09:04:25.569055 kubelet[2232]: W0116 09:04:25.568246 2232 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 09:04:25.569055 kubelet[2232]: E0116 09:04:25.568697 2232 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:25.605051 zram_generator::config[2545]: No configuration found. Jan 16 09:04:25.739299 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 09:04:25.836232 systemd[1]: Reloading finished in 362 ms. Jan 16 09:04:25.890362 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:04:25.904808 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 09:04:25.905106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:04:25.905185 systemd[1]: kubelet.service: Consumed 1.142s CPU time, 115.0M memory peak, 0B memory swap peak. Jan 16 09:04:25.918474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 09:04:26.058713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 09:04:26.075144 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 09:04:26.164047 kubelet[2593]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 09:04:26.164047 kubelet[2593]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 16 09:04:26.164047 kubelet[2593]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 16 09:04:26.164047 kubelet[2593]: I0116 09:04:26.163450 2593 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 09:04:26.171481 kubelet[2593]: I0116 09:04:26.171405 2593 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 16 09:04:26.171481 kubelet[2593]: I0116 09:04:26.171451 2593 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 09:04:26.171800 kubelet[2593]: I0116 09:04:26.171746 2593 server.go:927] "Client rotation is on, will bootstrap in background" Jan 16 09:04:26.173835 kubelet[2593]: I0116 09:04:26.173788 2593 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 16 09:04:26.176408 kubelet[2593]: I0116 09:04:26.175778 2593 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 09:04:26.187309 kubelet[2593]: I0116 09:04:26.187238 2593 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 09:04:26.188054 kubelet[2593]: I0116 09:04:26.187965 2593 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 09:04:26.188487 kubelet[2593]: I0116 09:04:26.188188 2593 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-a78886c5b6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 16 09:04:26.188704 kubelet[2593]: I0116 09:04:26.188688 2593 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 09:04:26.188776 kubelet[2593]: I0116 09:04:26.188767 2593 container_manager_linux.go:301] "Creating device plugin manager" Jan 16 09:04:26.188907 kubelet[2593]: I0116 09:04:26.188895 2593 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:04:26.189183 kubelet[2593]: I0116 09:04:26.189167 2593 kubelet.go:400] "Attempting to sync node with API server" Jan 16 09:04:26.189312 kubelet[2593]: I0116 09:04:26.189286 2593 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Jan 16 09:04:26.189410 kubelet[2593]: I0116 09:04:26.189399 2593 kubelet.go:312] "Adding apiserver pod source" Jan 16 09:04:26.189491 kubelet[2593]: I0116 09:04:26.189481 2593 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 09:04:26.194044 kubelet[2593]: I0116 09:04:26.193240 2593 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 09:04:26.194044 kubelet[2593]: I0116 09:04:26.193527 2593 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 09:04:26.195035 kubelet[2593]: I0116 09:04:26.194979 2593 server.go:1264] "Started kubelet" Jan 16 09:04:26.200254 kubelet[2593]: I0116 09:04:26.200203 2593 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 09:04:26.204703 kubelet[2593]: I0116 09:04:26.204654 2593 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 09:04:26.209067 kubelet[2593]: I0116 09:04:26.209008 2593 server.go:455] "Adding debug handlers to kubelet server" Jan 16 09:04:26.218520 kubelet[2593]: I0116 09:04:26.218427 2593 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 09:04:26.219135 kubelet[2593]: I0116 09:04:26.219106 2593 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 09:04:26.224666 kubelet[2593]: I0116 09:04:26.224633 2593 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 16 09:04:26.231422 kubelet[2593]: I0116 09:04:26.231383 2593 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 16 09:04:26.232040 kubelet[2593]: I0116 09:04:26.231923 2593 reconciler.go:26] "Reconciler: start to sync state" Jan 16 09:04:26.236864 kubelet[2593]: I0116 09:04:26.236187 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 09:04:26.239475 kubelet[2593]: I0116 09:04:26.239432 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 09:04:26.239475 kubelet[2593]: I0116 09:04:26.239480 2593 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 16 09:04:26.239601 kubelet[2593]: I0116 09:04:26.239508 2593 kubelet.go:2337] "Starting kubelet main sync loop" Jan 16 09:04:26.239601 kubelet[2593]: E0116 09:04:26.239565 2593 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 09:04:26.254497 kubelet[2593]: I0116 09:04:26.254305 2593 factory.go:221] Registration of the systemd container factory successfully Jan 16 09:04:26.254497 kubelet[2593]: I0116 09:04:26.254422 2593 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 09:04:26.257297 kubelet[2593]: E0116 09:04:26.257055 2593 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 09:04:26.262368 kubelet[2593]: I0116 09:04:26.262325 2593 factory.go:221] Registration of the containerd container factory successfully Jan 16 09:04:26.325088 kubelet[2593]: I0116 09:04:26.324784 2593 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 16 09:04:26.325088 kubelet[2593]: I0116 09:04:26.324805 2593 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 16 09:04:26.325088 kubelet[2593]: I0116 09:04:26.324838 2593 state_mem.go:36] "Initialized new in-memory state store" Jan 16 09:04:26.327433 kubelet[2593]: I0116 09:04:26.326494 2593 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 09:04:26.327433 kubelet[2593]: I0116 09:04:26.326525 2593 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 09:04:26.327433 kubelet[2593]: I0116 09:04:26.326549 2593 policy_none.go:49] "None policy: Start" Jan 16 09:04:26.327792 kubelet[2593]: I0116 09:04:26.327742 2593 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.329740 kubelet[2593]: I0116 09:04:26.329704 2593 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 16 09:04:26.329740 kubelet[2593]: I0116 09:04:26.329739 2593 state_mem.go:35] "Initializing new in-memory state store" Jan 16 09:04:26.329959 kubelet[2593]: I0116 09:04:26.329944 2593 state_mem.go:75] "Updated machine memory state" Jan 16 09:04:26.343702 kubelet[2593]: E0116 09:04:26.342281 2593 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 16 09:04:26.347693 kubelet[2593]: I0116 09:04:26.347643 2593 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.347872 kubelet[2593]: I0116 09:04:26.347780 2593 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.350789 kubelet[2593]: I0116 09:04:26.350734 2593 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 09:04:26.351360 kubelet[2593]: I0116 09:04:26.351132 2593 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 09:04:26.351360 kubelet[2593]: I0116 09:04:26.351255 2593 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 09:04:26.498428 sudo[2626]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 16 09:04:26.499697 sudo[2626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 16 09:04:26.543174 kubelet[2593]: I0116 09:04:26.542848 2593 topology_manager.go:215] "Topology Admit Handler" podUID="d9694650b00aa22ece60e23ea6e9c863" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.543367 kubelet[2593]: I0116 09:04:26.543339 2593 topology_manager.go:215] "Topology Admit Handler" podUID="b374a73a67b1a20f5b78110d0eb68103" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.544034 kubelet[2593]: I0116 09:04:26.543635 2593 topology_manager.go:215] "Topology Admit Handler" podUID="568537c7936fb9ec330572993e451452" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.551741 kubelet[2593]: W0116 09:04:26.550797 2593 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Jan 16 09:04:26.555591 kubelet[2593]: W0116 09:04:26.555555 2593 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 09:04:26.558822 kubelet[2593]: W0116 09:04:26.558760 2593 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 09:04:26.559260 kubelet[2593]: E0116 09:04:26.559041 2593 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-a78886c5b6\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.633991 kubelet[2593]: I0116 09:04:26.633846 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/568537c7936fb9ec330572993e451452-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-a78886c5b6\" (UID: \"568537c7936fb9ec330572993e451452\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.633991 kubelet[2593]: I0116 09:04:26.633910 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9694650b00aa22ece60e23ea6e9c863-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-a78886c5b6\" (UID: \"d9694650b00aa22ece60e23ea6e9c863\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.633991 kubelet[2593]: I0116 09:04:26.633932 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9694650b00aa22ece60e23ea6e9c863-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-a78886c5b6\" (UID: \"d9694650b00aa22ece60e23ea6e9c863\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.633991 kubelet[2593]: I0116 09:04:26.633955 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9694650b00aa22ece60e23ea6e9c863-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-a78886c5b6\" (UID: \"d9694650b00aa22ece60e23ea6e9c863\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.633991 kubelet[2593]: I0116 09:04:26.633989 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b374a73a67b1a20f5b78110d0eb68103-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-a78886c5b6\" (UID: \"b374a73a67b1a20f5b78110d0eb68103\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.635309 kubelet[2593]: I0116 09:04:26.634034 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b374a73a67b1a20f5b78110d0eb68103-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-a78886c5b6\" (UID: \"b374a73a67b1a20f5b78110d0eb68103\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.635309 kubelet[2593]: I0116 09:04:26.634061 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b374a73a67b1a20f5b78110d0eb68103-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-a78886c5b6\" (UID: \"b374a73a67b1a20f5b78110d0eb68103\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.635309 kubelet[2593]: I0116 09:04:26.634087 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b374a73a67b1a20f5b78110d0eb68103-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-a78886c5b6\" (UID: \"b374a73a67b1a20f5b78110d0eb68103\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.635309 kubelet[2593]: I0116 09:04:26.634104 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b374a73a67b1a20f5b78110d0eb68103-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-a78886c5b6\" (UID: \"b374a73a67b1a20f5b78110d0eb68103\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:26.855644 kubelet[2593]: E0116 09:04:26.852237 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:26.859042 kubelet[2593]: E0116 09:04:26.857669 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:26.861287 kubelet[2593]: E0116 09:04:26.861158 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:26.973887 update_engine[1449]: I20250116 09:04:26.972066 1449 update_attempter.cc:509] Updating boot flags... 
Jan 16 09:04:27.059054 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2636) Jan 16 09:04:27.199500 kubelet[2593]: I0116 09:04:27.199462 2593 apiserver.go:52] "Watching apiserver" Jan 16 09:04:27.241999 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2636) Jan 16 09:04:27.242195 kubelet[2593]: I0116 09:04:27.240126 2593 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 16 09:04:27.297718 kubelet[2593]: E0116 09:04:27.297653 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:27.301094 kubelet[2593]: E0116 09:04:27.299139 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:27.307667 kubelet[2593]: W0116 09:04:27.307631 2593 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 16 09:04:27.308855 kubelet[2593]: E0116 09:04:27.308787 2593 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-a78886c5b6\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-a78886c5b6" Jan 16 09:04:27.309430 kubelet[2593]: E0116 09:04:27.309401 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:27.393889 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2636) Jan 16 09:04:27.393995 kubelet[2593]: I0116 09:04:27.392592 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-a78886c5b6" podStartSLOduration=1.392572645 podStartE2EDuration="1.392572645s" podCreationTimestamp="2025-01-16 09:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:04:27.357538075 +0000 UTC m=+1.274152442" watchObservedRunningTime="2025-01-16 09:04:27.392572645 +0000 UTC m=+1.309186988" Jan 16 09:04:27.393995 kubelet[2593]: I0116 09:04:27.392730 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-a78886c5b6" podStartSLOduration=2.392725691 podStartE2EDuration="2.392725691s" podCreationTimestamp="2025-01-16 09:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:04:27.392392093 +0000 UTC m=+1.309006458" watchObservedRunningTime="2025-01-16 09:04:27.392725691 +0000 UTC m=+1.309340052" Jan 16 09:04:27.433591 kubelet[2593]: I0116 09:04:27.433248 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-a78886c5b6" podStartSLOduration=1.433223214 podStartE2EDuration="1.433223214s" podCreationTimestamp="2025-01-16 09:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:04:27.422997274 +0000 UTC m=+1.339611646" watchObservedRunningTime="2025-01-16 
09:04:27.433223214 +0000 UTC m=+1.349837588" Jan 16 09:04:27.578288 sudo[2626]: pam_unix(sudo:session): session closed for user root Jan 16 09:04:28.303355 kubelet[2593]: E0116 09:04:28.303297 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:29.305295 kubelet[2593]: E0116 09:04:29.305239 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:29.758840 sudo[1669]: pam_unix(sudo:session): session closed for user root Jan 16 09:04:29.763251 sshd[1666]: pam_unix(sshd:session): session closed for user core Jan 16 09:04:29.769840 systemd[1]: sshd@8-64.227.96.98:22-139.178.68.195:55040.service: Deactivated successfully. Jan 16 09:04:29.773376 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 09:04:29.773612 systemd[1]: session-9.scope: Consumed 7.837s CPU time, 189.0M memory peak, 0B memory swap peak. Jan 16 09:04:29.775065 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Jan 16 09:04:29.777698 systemd-logind[1448]: Removed session 9. Jan 16 09:04:32.404760 kubelet[2593]: E0116 09:04:32.404679 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:33.314333 kubelet[2593]: E0116 09:04:33.314276 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:33.811755 kubelet[2593]: E0116 09:04:33.811687 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:34.318256 kubelet[2593]: E0116 09:04:34.318180 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:35.629125 kubelet[2593]: E0116 09:04:35.628536 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:36.323764 kubelet[2593]: E0116 09:04:36.323698 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:37.328935 kubelet[2593]: E0116 09:04:37.328841 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:40.064684 kubelet[2593]: I0116 09:04:40.064640 2593 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 09:04:40.065631 kubelet[2593]: I0116 09:04:40.065321 2593 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 09:04:40.065979 containerd[1462]: time="2025-01-16T09:04:40.065140717Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 16 09:04:40.563817 kubelet[2593]: I0116 09:04:40.562342 2593 topology_manager.go:215] "Topology Admit Handler" podUID="2fa3af34-c170-45dc-9f6f-3df6affa2a21" podNamespace="kube-system" podName="kube-proxy-wnxs2" Jan 16 09:04:40.576265 kubelet[2593]: I0116 09:04:40.576207 2593 topology_manager.go:215] "Topology Admit Handler" podUID="b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" podNamespace="kube-system" podName="cilium-jdr9g" Jan 16 09:04:40.580598 systemd[1]: Created slice kubepods-besteffort-pod2fa3af34_c170_45dc_9f6f_3df6affa2a21.slice - libcontainer container kubepods-besteffort-pod2fa3af34_c170_45dc_9f6f_3df6affa2a21.slice. Jan 16 09:04:40.608169 systemd[1]: Created slice kubepods-burstable-podb3bab0bc_e94d_458f_a9a5_179d6a8b28d2.slice - libcontainer container kubepods-burstable-podb3bab0bc_e94d_458f_a9a5_179d6a8b28d2.slice. Jan 16 09:04:40.633740 kubelet[2593]: I0116 09:04:40.633680 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-run\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.633740 kubelet[2593]: I0116 09:04:40.633746 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-bpf-maps\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.633993 kubelet[2593]: I0116 09:04:40.633773 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-hostproc\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.633993 kubelet[2593]: I0116 09:04:40.633803 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-hubble-tls\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.633993 kubelet[2593]: I0116 09:04:40.633827 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-etc-cni-netd\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.633993 kubelet[2593]: I0116 09:04:40.633850 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-host-proc-sys-net\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.633993 kubelet[2593]: I0116 09:04:40.633873 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2fa3af34-c170-45dc-9f6f-3df6affa2a21-kube-proxy\") pod \"kube-proxy-wnxs2\" (UID: \"2fa3af34-c170-45dc-9f6f-3df6affa2a21\") " pod="kube-system/kube-proxy-wnxs2" Jan 16 09:04:40.633993 kubelet[2593]: I0116 09:04:40.633896 2593 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-cgroup\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.634304 kubelet[2593]: I0116 09:04:40.633917 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-lib-modules\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.634304 kubelet[2593]: I0116 09:04:40.633940 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-config-path\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.634304 kubelet[2593]: I0116 09:04:40.633966 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5hck\" (UniqueName: \"kubernetes.io/projected/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-kube-api-access-l5hck\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.634304 kubelet[2593]: I0116 09:04:40.633995 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-host-proc-sys-kernel\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.634304 kubelet[2593]: I0116 09:04:40.634063 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cni-path\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.634304 kubelet[2593]: I0116 09:04:40.634089 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-xtables-lock\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.635562 kubelet[2593]: I0116 09:04:40.634117 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqlrn\" (UniqueName: \"kubernetes.io/projected/2fa3af34-c170-45dc-9f6f-3df6affa2a21-kube-api-access-qqlrn\") pod \"kube-proxy-wnxs2\" (UID: \"2fa3af34-c170-45dc-9f6f-3df6affa2a21\") " pod="kube-system/kube-proxy-wnxs2" Jan 16 09:04:40.635562 kubelet[2593]: I0116 09:04:40.634153 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fa3af34-c170-45dc-9f6f-3df6affa2a21-xtables-lock\") pod \"kube-proxy-wnxs2\" (UID: \"2fa3af34-c170-45dc-9f6f-3df6affa2a21\") " pod="kube-system/kube-proxy-wnxs2" Jan 16 09:04:40.635562 kubelet[2593]: I0116 09:04:40.634180 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/2fa3af34-c170-45dc-9f6f-3df6affa2a21-lib-modules\") pod \"kube-proxy-wnxs2\" (UID: \"2fa3af34-c170-45dc-9f6f-3df6affa2a21\") " pod="kube-system/kube-proxy-wnxs2" Jan 16 09:04:40.635562 kubelet[2593]: I0116 09:04:40.634212 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-clustermesh-secrets\") pod \"cilium-jdr9g\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " pod="kube-system/cilium-jdr9g" Jan 16 09:04:40.763754 kubelet[2593]: E0116 09:04:40.763690 2593 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 16 09:04:40.763754 kubelet[2593]: E0116 09:04:40.763751 2593 projected.go:200] Error preparing data for projected volume kube-api-access-l5hck for pod kube-system/cilium-jdr9g: configmap "kube-root-ca.crt" not found Jan 16 09:04:40.763947 kubelet[2593]: E0116 09:04:40.763827 2593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-kube-api-access-l5hck podName:b3bab0bc-e94d-458f-a9a5-179d6a8b28d2 nodeName:}" failed. No retries permitted until 2025-01-16 09:04:41.26380489 +0000 UTC m=+15.180419240 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l5hck" (UniqueName: "kubernetes.io/projected/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-kube-api-access-l5hck") pod "cilium-jdr9g" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2") : configmap "kube-root-ca.crt" not found Jan 16 09:04:40.765705 kubelet[2593]: E0116 09:04:40.765370 2593 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 16 09:04:40.765705 kubelet[2593]: E0116 09:04:40.765434 2593 projected.go:200] Error preparing data for projected volume kube-api-access-qqlrn for pod kube-system/kube-proxy-wnxs2: configmap "kube-root-ca.crt" not found Jan 16 09:04:40.765705 kubelet[2593]: E0116 09:04:40.765496 2593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2fa3af34-c170-45dc-9f6f-3df6affa2a21-kube-api-access-qqlrn podName:2fa3af34-c170-45dc-9f6f-3df6affa2a21 nodeName:}" failed. No retries permitted until 2025-01-16 09:04:41.265479127 +0000 UTC m=+15.182093466 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qqlrn" (UniqueName: "kubernetes.io/projected/2fa3af34-c170-45dc-9f6f-3df6affa2a21-kube-api-access-qqlrn") pod "kube-proxy-wnxs2" (UID: "2fa3af34-c170-45dc-9f6f-3df6affa2a21") : configmap "kube-root-ca.crt" not found Jan 16 09:04:41.230824 kubelet[2593]: I0116 09:04:41.230183 2593 topology_manager.go:215] "Topology Admit Handler" podUID="c0305cd0-5902-4546-ae0e-abe114d1d23e" podNamespace="kube-system" podName="cilium-operator-599987898-42gqh" Jan 16 09:04:41.242905 kubelet[2593]: I0116 09:04:41.240167 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0305cd0-5902-4546-ae0e-abe114d1d23e-cilium-config-path\") pod \"cilium-operator-599987898-42gqh\" (UID: \"c0305cd0-5902-4546-ae0e-abe114d1d23e\") " pod="kube-system/cilium-operator-599987898-42gqh" Jan 16 09:04:41.242905 kubelet[2593]: I0116 09:04:41.240255 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgsx\" (UniqueName: \"kubernetes.io/projected/c0305cd0-5902-4546-ae0e-abe114d1d23e-kube-api-access-mhgsx\") pod \"cilium-operator-599987898-42gqh\" (UID: \"c0305cd0-5902-4546-ae0e-abe114d1d23e\") " pod="kube-system/cilium-operator-599987898-42gqh" Jan 16 09:04:41.244776 systemd[1]: Created slice kubepods-besteffort-podc0305cd0_5902_4546_ae0e_abe114d1d23e.slice - libcontainer container kubepods-besteffort-podc0305cd0_5902_4546_ae0e_abe114d1d23e.slice. Jan 16 09:04:41.497768 kubelet[2593]: E0116 09:04:41.497515 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:41.499678 containerd[1462]: time="2025-01-16T09:04:41.499068089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wnxs2,Uid:2fa3af34-c170-45dc-9f6f-3df6affa2a21,Namespace:kube-system,Attempt:0,}" Jan 16 09:04:41.513320 kubelet[2593]: E0116 09:04:41.513085 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:41.514686 containerd[1462]: time="2025-01-16T09:04:41.514001241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdr9g,Uid:b3bab0bc-e94d-458f-a9a5-179d6a8b28d2,Namespace:kube-system,Attempt:0,}" Jan 16 09:04:41.557904 kubelet[2593]: E0116 09:04:41.557858 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:41.566042 containerd[1462]: time="2025-01-16T09:04:41.565622701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-42gqh,Uid:c0305cd0-5902-4546-ae0e-abe114d1d23e,Namespace:kube-system,Attempt:0,}" Jan 16 09:04:41.579107 containerd[1462]: time="2025-01-16T09:04:41.578304790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:04:41.579107 containerd[1462]: time="2025-01-16T09:04:41.578464310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:04:41.579107 containerd[1462]: time="2025-01-16T09:04:41.578539348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:41.580483 containerd[1462]: time="2025-01-16T09:04:41.580247676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:04:41.580483 containerd[1462]: time="2025-01-16T09:04:41.580335469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:04:41.580483 containerd[1462]: time="2025-01-16T09:04:41.580351804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:41.580483 containerd[1462]: time="2025-01-16T09:04:41.580438266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:41.582964 containerd[1462]: time="2025-01-16T09:04:41.582817982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:41.617350 systemd[1]: Started cri-containerd-5cfe81d651252c68fcba017a0feefefc4c3918ea95c17ec21b3b8a44308be93b.scope - libcontainer container 5cfe81d651252c68fcba017a0feefefc4c3918ea95c17ec21b3b8a44308be93b. Jan 16 09:04:41.633385 systemd[1]: Started cri-containerd-1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe.scope - libcontainer container 1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe. Jan 16 09:04:41.663071 containerd[1462]: time="2025-01-16T09:04:41.661188113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:04:41.663071 containerd[1462]: time="2025-01-16T09:04:41.661400444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:04:41.663071 containerd[1462]: time="2025-01-16T09:04:41.661481047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:41.663071 containerd[1462]: time="2025-01-16T09:04:41.662210182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:04:41.706531 systemd[1]: Started cri-containerd-4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4.scope - libcontainer container 4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4. 
Jan 16 09:04:41.714986 containerd[1462]: time="2025-01-16T09:04:41.714921792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wnxs2,Uid:2fa3af34-c170-45dc-9f6f-3df6affa2a21,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cfe81d651252c68fcba017a0feefefc4c3918ea95c17ec21b3b8a44308be93b\"" Jan 16 09:04:41.720621 kubelet[2593]: E0116 09:04:41.720585 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:41.723061 containerd[1462]: time="2025-01-16T09:04:41.722939458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jdr9g,Uid:b3bab0bc-e94d-458f-a9a5-179d6a8b28d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\"" Jan 16 09:04:41.726535 kubelet[2593]: E0116 09:04:41.725925 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:41.731451 containerd[1462]: time="2025-01-16T09:04:41.731321269Z" level=info msg="CreateContainer within sandbox \"5cfe81d651252c68fcba017a0feefefc4c3918ea95c17ec21b3b8a44308be93b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 09:04:41.750134 containerd[1462]: time="2025-01-16T09:04:41.744603435Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 16 09:04:41.787168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1527938828.mount: Deactivated successfully. Jan 16 09:04:41.799480 containerd[1462]: time="2025-01-16T09:04:41.799405288Z" level=info msg="CreateContainer within sandbox \"5cfe81d651252c68fcba017a0feefefc4c3918ea95c17ec21b3b8a44308be93b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7843cdbec3a3654cb05fb054cbcc8c18641a15a8407dbc62a5c962cdfb9e3015\"" Jan 16 09:04:41.804058 containerd[1462]: time="2025-01-16T09:04:41.802177138Z" level=info msg="StartContainer for \"7843cdbec3a3654cb05fb054cbcc8c18641a15a8407dbc62a5c962cdfb9e3015\"" Jan 16 09:04:41.841877 containerd[1462]: time="2025-01-16T09:04:41.841832603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-42gqh,Uid:c0305cd0-5902-4546-ae0e-abe114d1d23e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4\"" Jan 16 09:04:41.844759 kubelet[2593]: E0116 09:04:41.844713 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:41.868351 systemd[1]: Started cri-containerd-7843cdbec3a3654cb05fb054cbcc8c18641a15a8407dbc62a5c962cdfb9e3015.scope - libcontainer container 7843cdbec3a3654cb05fb054cbcc8c18641a15a8407dbc62a5c962cdfb9e3015. 
Jan 16 09:04:41.910000 containerd[1462]: time="2025-01-16T09:04:41.909905283Z" level=info msg="StartContainer for \"7843cdbec3a3654cb05fb054cbcc8c18641a15a8407dbc62a5c962cdfb9e3015\" returns successfully" Jan 16 09:04:42.348355 kubelet[2593]: E0116 09:04:42.348305 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:42.365063 kubelet[2593]: I0116 09:04:42.363931 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wnxs2" podStartSLOduration=2.363912127 podStartE2EDuration="2.363912127s" podCreationTimestamp="2025-01-16 09:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:04:42.363757414 +0000 UTC m=+16.280371771" watchObservedRunningTime="2025-01-16 09:04:42.363912127 +0000 UTC m=+16.280526483" Jan 16 09:04:43.874507 systemd[1]: Started sshd@9-64.227.96.98:22-218.92.0.134:23224.service - OpenSSH per-connection server daemon (218.92.0.134:23224). Jan 16 09:04:45.114836 sshd[2964]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.134 user=root Jan 16 09:04:46.763949 sshd[2962]: PAM: Permission denied for root from 218.92.0.134 Jan 16 09:04:47.092447 sshd[2965]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.134 user=root Jan 16 09:04:49.348945 sshd[2962]: PAM: Permission denied for root from 218.92.0.134 Jan 16 09:04:49.671337 sshd[2970]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.134 user=root Jan 16 09:04:51.340744 sshd[2962]: PAM: Permission denied for root from 218.92.0.134 Jan 16 09:04:51.522341 sshd[2962]: Received disconnect from 218.92.0.134 port 23224:11: [preauth] Jan 16 09:04:51.522785 sshd[2962]: Disconnected from authenticating user root 218.92.0.134 port 23224 [preauth] Jan 16 09:04:51.528516 systemd[1]: sshd@9-64.227.96.98:22-218.92.0.134:23224.service: Deactivated successfully. Jan 16 09:04:51.894629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3566359982.mount: Deactivated successfully. 
Jan 16 09:04:55.191943 containerd[1462]: time="2025-01-16T09:04:55.191821759Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:55.193140 containerd[1462]: time="2025-01-16T09:04:55.192981017Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735327" Jan 16 09:04:55.195391 containerd[1462]: time="2025-01-16T09:04:55.195306673Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:04:55.200096 containerd[1462]: time="2025-01-16T09:04:55.199832150Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.455157846s" Jan 16 09:04:55.200096 containerd[1462]: time="2025-01-16T09:04:55.200043083Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 16 09:04:55.201683 containerd[1462]: time="2025-01-16T09:04:55.201649863Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 16 09:04:55.206588 containerd[1462]: time="2025-01-16T09:04:55.206525910Z" level=info msg="CreateContainer within sandbox \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 16 09:04:55.284429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3284130366.mount: Deactivated successfully. Jan 16 09:04:55.345158 containerd[1462]: time="2025-01-16T09:04:55.344847960Z" level=info msg="CreateContainer within sandbox \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d\"" Jan 16 09:04:55.347064 containerd[1462]: time="2025-01-16T09:04:55.346725423Z" level=info msg="StartContainer for \"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d\"" Jan 16 09:04:55.462798 systemd[1]: Started cri-containerd-06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d.scope - libcontainer container 06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d. Jan 16 09:04:55.503696 containerd[1462]: time="2025-01-16T09:04:55.503643854Z" level=info msg="StartContainer for \"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d\" returns successfully" Jan 16 09:04:55.520410 systemd[1]: cri-containerd-06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d.scope: Deactivated successfully. 
Jan 16 09:04:55.773146 containerd[1462]: time="2025-01-16T09:04:55.754664517Z" level=info msg="shim disconnected" id=06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d namespace=k8s.io Jan 16 09:04:55.773664 containerd[1462]: time="2025-01-16T09:04:55.773435582Z" level=warning msg="cleaning up after shim disconnected" id=06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d namespace=k8s.io Jan 16 09:04:55.773664 containerd[1462]: time="2025-01-16T09:04:55.773464778Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:04:56.277574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d-rootfs.mount: Deactivated successfully. Jan 16 09:04:56.397006 kubelet[2593]: E0116 09:04:56.396492 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:56.403675 containerd[1462]: time="2025-01-16T09:04:56.403523984Z" level=info msg="CreateContainer within sandbox \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 16 09:04:56.440802 containerd[1462]: time="2025-01-16T09:04:56.438424567Z" level=info msg="CreateContainer within sandbox \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b\"" Jan 16 09:04:56.440802 containerd[1462]: time="2025-01-16T09:04:56.440277089Z" level=info msg="StartContainer for \"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b\"" Jan 16 09:04:56.490331 systemd[1]: Started cri-containerd-3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b.scope - libcontainer container 3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b. Jan 16 09:04:56.531108 containerd[1462]: time="2025-01-16T09:04:56.529832264Z" level=info msg="StartContainer for \"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b\" returns successfully" Jan 16 09:04:56.545262 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 09:04:56.545631 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 09:04:56.545746 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 16 09:04:56.554322 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 09:04:56.557977 systemd[1]: cri-containerd-3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b.scope: Deactivated successfully. Jan 16 09:04:56.587203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 16 09:04:56.607674 containerd[1462]: time="2025-01-16T09:04:56.607205090Z" level=info msg="shim disconnected" id=3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b namespace=k8s.io Jan 16 09:04:56.607674 containerd[1462]: time="2025-01-16T09:04:56.607324633Z" level=warning msg="cleaning up after shim disconnected" id=3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b namespace=k8s.io Jan 16 09:04:56.607674 containerd[1462]: time="2025-01-16T09:04:56.607340500Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:04:57.277930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b-rootfs.mount: Deactivated successfully. Jan 16 09:04:57.402782 kubelet[2593]: E0116 09:04:57.402384 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:57.408740 containerd[1462]: time="2025-01-16T09:04:57.408144746Z" level=info msg="CreateContainer within sandbox \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 16 09:04:57.482927 containerd[1462]: time="2025-01-16T09:04:57.482682584Z" level=info msg="CreateContainer within sandbox \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892\"" Jan 16 09:04:57.483596 containerd[1462]: time="2025-01-16T09:04:57.483514592Z" level=info msg="StartContainer for \"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892\"" Jan 16 09:04:57.546575 systemd[1]: Started cri-containerd-84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892.scope - libcontainer container 84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892. Jan 16 09:04:57.593915 containerd[1462]: time="2025-01-16T09:04:57.593819675Z" level=info msg="StartContainer for \"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892\" returns successfully" Jan 16 09:04:57.598092 systemd[1]: cri-containerd-84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892.scope: Deactivated successfully. Jan 16 09:04:57.638507 containerd[1462]: time="2025-01-16T09:04:57.638423558Z" level=info msg="shim disconnected" id=84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892 namespace=k8s.io Jan 16 09:04:57.638507 containerd[1462]: time="2025-01-16T09:04:57.638484921Z" level=warning msg="cleaning up after shim disconnected" id=84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892 namespace=k8s.io Jan 16 09:04:57.638507 containerd[1462]: time="2025-01-16T09:04:57.638494531Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:04:58.278952 systemd[1]: run-containerd-runc-k8s.io-84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892-runc.bBHx69.mount: Deactivated successfully. Jan 16 09:04:58.279111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892-rootfs.mount: Deactivated successfully. 
Jan 16 09:04:58.408183 kubelet[2593]: E0116 09:04:58.407994 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:58.417041 containerd[1462]: time="2025-01-16T09:04:58.415912515Z" level=info msg="CreateContainer within sandbox \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 16 09:04:58.442574 containerd[1462]: time="2025-01-16T09:04:58.442503853Z" level=info msg="CreateContainer within sandbox \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793\"" Jan 16 09:04:58.444270 containerd[1462]: time="2025-01-16T09:04:58.444224947Z" level=info msg="StartContainer for \"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793\"" Jan 16 09:04:58.495288 systemd[1]: Started cri-containerd-dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793.scope - libcontainer container dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793. Jan 16 09:04:58.533890 systemd[1]: cri-containerd-dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793.scope: Deactivated successfully. Jan 16 09:04:58.538441 containerd[1462]: time="2025-01-16T09:04:58.537971108Z" level=info msg="StartContainer for \"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793\" returns successfully" Jan 16 09:04:58.575404 containerd[1462]: time="2025-01-16T09:04:58.575326385Z" level=info msg="shim disconnected" id=dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793 namespace=k8s.io Jan 16 09:04:58.576112 containerd[1462]: time="2025-01-16T09:04:58.575841534Z" level=warning msg="cleaning up after shim disconnected" id=dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793 namespace=k8s.io Jan 16 09:04:58.576112 containerd[1462]: time="2025-01-16T09:04:58.575876730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:04:59.280004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793-rootfs.mount: Deactivated successfully. 
Jan 16 09:04:59.413294 kubelet[2593]: E0116 09:04:59.413242 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:04:59.420053 containerd[1462]: time="2025-01-16T09:04:59.418417549Z" level=info msg="CreateContainer within sandbox \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 16 09:04:59.461463 containerd[1462]: time="2025-01-16T09:04:59.461387820Z" level=info msg="CreateContainer within sandbox \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\"" Jan 16 09:04:59.463564 containerd[1462]: time="2025-01-16T09:04:59.462422598Z" level=info msg="StartContainer for \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\"" Jan 16 09:04:59.516286 systemd[1]: Started cri-containerd-ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4.scope - libcontainer container ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4. Jan 16 09:04:59.556853 containerd[1462]: time="2025-01-16T09:04:59.556737847Z" level=info msg="StartContainer for \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\" returns successfully" Jan 16 09:04:59.786595 kubelet[2593]: I0116 09:04:59.786562 2593 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 16 09:04:59.818670 kubelet[2593]: I0116 09:04:59.818290 2593 topology_manager.go:215] "Topology Admit Handler" podUID="30c8967c-03eb-4fef-a9c9-48ffb0edd1c7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vmn6h" Jan 16 09:04:59.829458 kubelet[2593]: I0116 09:04:59.827443 2593 topology_manager.go:215] "Topology Admit Handler" podUID="b8f9d583-3274-4e29-9d86-98010948ba4f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4qhsw" Jan 16 09:04:59.839288 systemd[1]: Created slice kubepods-burstable-pod30c8967c_03eb_4fef_a9c9_48ffb0edd1c7.slice - libcontainer container kubepods-burstable-pod30c8967c_03eb_4fef_a9c9_48ffb0edd1c7.slice. Jan 16 09:04:59.848041 systemd[1]: Created slice kubepods-burstable-podb8f9d583_3274_4e29_9d86_98010948ba4f.slice - libcontainer container kubepods-burstable-podb8f9d583_3274_4e29_9d86_98010948ba4f.slice. 
Jan 16 09:04:59.887881 kubelet[2593]: I0116 09:04:59.887638 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45zw7\" (UniqueName: \"kubernetes.io/projected/b8f9d583-3274-4e29-9d86-98010948ba4f-kube-api-access-45zw7\") pod \"coredns-7db6d8ff4d-4qhsw\" (UID: \"b8f9d583-3274-4e29-9d86-98010948ba4f\") " pod="kube-system/coredns-7db6d8ff4d-4qhsw" Jan 16 09:04:59.887881 kubelet[2593]: I0116 09:04:59.887698 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2s2j\" (UniqueName: \"kubernetes.io/projected/30c8967c-03eb-4fef-a9c9-48ffb0edd1c7-kube-api-access-f2s2j\") pod \"coredns-7db6d8ff4d-vmn6h\" (UID: \"30c8967c-03eb-4fef-a9c9-48ffb0edd1c7\") " pod="kube-system/coredns-7db6d8ff4d-vmn6h" Jan 16 09:04:59.887881 kubelet[2593]: I0116 09:04:59.887731 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30c8967c-03eb-4fef-a9c9-48ffb0edd1c7-config-volume\") pod \"coredns-7db6d8ff4d-vmn6h\" (UID: \"30c8967c-03eb-4fef-a9c9-48ffb0edd1c7\") " pod="kube-system/coredns-7db6d8ff4d-vmn6h" Jan 16 09:04:59.887881 kubelet[2593]: I0116 09:04:59.887758 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8f9d583-3274-4e29-9d86-98010948ba4f-config-volume\") pod \"coredns-7db6d8ff4d-4qhsw\" (UID: \"b8f9d583-3274-4e29-9d86-98010948ba4f\") " pod="kube-system/coredns-7db6d8ff4d-4qhsw" Jan 16 09:05:00.153703 kubelet[2593]: E0116 09:05:00.153514 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:00.154759 kubelet[2593]: E0116 09:05:00.154511 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:00.162570 containerd[1462]: time="2025-01-16T09:05:00.161502209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4qhsw,Uid:b8f9d583-3274-4e29-9d86-98010948ba4f,Namespace:kube-system,Attempt:0,}" Jan 16 09:05:00.163625 containerd[1462]: time="2025-01-16T09:05:00.162959935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vmn6h,Uid:30c8967c-03eb-4fef-a9c9-48ffb0edd1c7,Namespace:kube-system,Attempt:0,}" Jan 16 09:05:00.303177 systemd[1]: run-containerd-runc-k8s.io-ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4-runc.CyC8xs.mount: Deactivated successfully. 
Jan 16 09:05:00.422784 kubelet[2593]: E0116 09:05:00.420876 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:00.444552 kubelet[2593]: I0116 09:05:00.444477 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jdr9g" podStartSLOduration=6.970943438 podStartE2EDuration="20.444455639s" podCreationTimestamp="2025-01-16 09:04:40 +0000 UTC" firstStartedPulling="2025-01-16 09:04:41.727806124 +0000 UTC m=+15.644420467" lastFinishedPulling="2025-01-16 09:04:55.201318318 +0000 UTC m=+29.117932668" observedRunningTime="2025-01-16 09:05:00.443812573 +0000 UTC m=+34.360426922" watchObservedRunningTime="2025-01-16 09:05:00.444455639 +0000 UTC m=+34.361070002" Jan 16 09:05:01.423426 kubelet[2593]: E0116 09:05:01.423320 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:02.426574 kubelet[2593]: E0116 09:05:02.426418 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:06.505155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569507639.mount: Deactivated successfully. Jan 16 09:05:07.302674 containerd[1462]: time="2025-01-16T09:05:07.301475418Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:07.304310 containerd[1462]: time="2025-01-16T09:05:07.304236553Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907169" Jan 16 09:05:07.305468 containerd[1462]: time="2025-01-16T09:05:07.305429705Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 09:05:07.307617 containerd[1462]: time="2025-01-16T09:05:07.307564403Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 12.105876341s" Jan 16 09:05:07.307617 containerd[1462]: time="2025-01-16T09:05:07.307619275Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 16 09:05:07.311413 containerd[1462]: time="2025-01-16T09:05:07.311060461Z" level=info msg="CreateContainer within sandbox \"4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 16 09:05:07.337368 containerd[1462]: time="2025-01-16T09:05:07.337303198Z" level=info msg="CreateContainer within sandbox \"4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\"" Jan 16 09:05:07.339173 containerd[1462]: time="2025-01-16T09:05:07.339126483Z" level=info msg="StartContainer for \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\"" Jan 16 09:05:07.383480 systemd[1]: Started cri-containerd-6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111.scope - libcontainer container 6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111. Jan 16 09:05:07.433563 containerd[1462]: time="2025-01-16T09:05:07.433489933Z" level=info msg="StartContainer for \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\" returns successfully" Jan 16 09:05:07.468835 kubelet[2593]: E0116 09:05:07.468783 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:07.494337 kubelet[2593]: I0116 09:05:07.494251 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-42gqh" podStartSLOduration=1.032506564 podStartE2EDuration="26.494224223s" podCreationTimestamp="2025-01-16 09:04:41 +0000 UTC" firstStartedPulling="2025-01-16 09:04:41.847103138 +0000 UTC m=+15.763717471" lastFinishedPulling="2025-01-16 09:05:07.308820775 +0000 UTC m=+41.225435130" observedRunningTime="2025-01-16 09:05:07.493385016 +0000 UTC m=+41.409999384" watchObservedRunningTime="2025-01-16 09:05:07.494224223 +0000 UTC m=+41.410838621" Jan 16 09:05:08.470524 kubelet[2593]: E0116 09:05:08.469690 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:10.903261 systemd-networkd[1367]: cilium_host: Link UP Jan 16 09:05:10.904102 systemd-networkd[1367]: cilium_net: Link UP Jan 16 09:05:10.904556 systemd-networkd[1367]: cilium_net: Gained carrier Jan 16 09:05:10.904731 systemd-networkd[1367]: cilium_host: Gained carrier Jan 16 09:05:11.081941 systemd-networkd[1367]: cilium_vxlan: Link UP Jan 16 09:05:11.081949 systemd-networkd[1367]: cilium_vxlan: Gained carrier Jan 16 09:05:11.556064 kernel: NET: Registered PF_ALG protocol family Jan 16 09:05:11.770558 systemd-networkd[1367]: cilium_host: Gained IPv6LL Jan 16 09:05:11.898406 systemd-networkd[1367]: cilium_net: Gained IPv6LL Jan 16 09:05:12.154995 systemd-networkd[1367]: cilium_vxlan: Gained IPv6LL Jan 16 09:05:12.530602 systemd-networkd[1367]: lxc_health: Link UP Jan 16 09:05:12.535611 systemd-networkd[1367]: lxc_health: Gained carrier Jan 16 09:05:12.787533 systemd-networkd[1367]: lxc32ad51ef63ab: Link UP Jan 16 09:05:12.793578 kernel: eth0: renamed from tmp2bf17 Jan 16 09:05:12.801210 systemd-networkd[1367]: lxc32ad51ef63ab: Gained carrier Jan 16 09:05:12.827085 kernel: eth0: renamed from tmpf0435 Jan 16 09:05:12.827800 systemd-networkd[1367]: lxc8319768d6ca0: Link UP Jan 16 09:05:12.839678 systemd-networkd[1367]: lxc8319768d6ca0: Gained carrier Jan 16 09:05:13.520060 kubelet[2593]: E0116 09:05:13.518458 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:14.010477 systemd-networkd[1367]: lxc_health: Gained IPv6LL Jan 16 09:05:14.459875 systemd-networkd[1367]: lxc32ad51ef63ab: Gained IPv6LL 
Jan 16 09:05:14.485947 kubelet[2593]: E0116 09:05:14.485649 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:14.906636 systemd-networkd[1367]: lxc8319768d6ca0: Gained IPv6LL Jan 16 09:05:15.488377 kubelet[2593]: E0116 09:05:15.488328 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:19.189210 containerd[1462]: time="2025-01-16T09:05:19.188672918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:05:19.189210 containerd[1462]: time="2025-01-16T09:05:19.188819041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:05:19.189210 containerd[1462]: time="2025-01-16T09:05:19.188873710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:19.191822 containerd[1462]: time="2025-01-16T09:05:19.189155868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:19.242702 systemd[1]: run-containerd-runc-k8s.io-f04355b05f4f8edc629f51ee155b40577f92701f710c0ebc563074cc414c9c9c-runc.UbppVZ.mount: Deactivated successfully. Jan 16 09:05:19.258990 systemd[1]: Started cri-containerd-f04355b05f4f8edc629f51ee155b40577f92701f710c0ebc563074cc414c9c9c.scope - libcontainer container f04355b05f4f8edc629f51ee155b40577f92701f710c0ebc563074cc414c9c9c. Jan 16 09:05:19.284545 containerd[1462]: time="2025-01-16T09:05:19.283439722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 09:05:19.284545 containerd[1462]: time="2025-01-16T09:05:19.284379290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 09:05:19.284545 containerd[1462]: time="2025-01-16T09:05:19.284401509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:19.285555 containerd[1462]: time="2025-01-16T09:05:19.284956430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 09:05:19.345411 systemd[1]: Started cri-containerd-2bf177c84867dec03011ac18940e08b7ec3ee9fbaa934e9be81b52425b844251.scope - libcontainer container 2bf177c84867dec03011ac18940e08b7ec3ee9fbaa934e9be81b52425b844251. 
Jan 16 09:05:19.451577 containerd[1462]: time="2025-01-16T09:05:19.451351120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4qhsw,Uid:b8f9d583-3274-4e29-9d86-98010948ba4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f04355b05f4f8edc629f51ee155b40577f92701f710c0ebc563074cc414c9c9c\"" Jan 16 09:05:19.454232 kubelet[2593]: E0116 09:05:19.454190 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:19.461584 containerd[1462]: time="2025-01-16T09:05:19.461309987Z" level=info msg="CreateContainer within sandbox \"f04355b05f4f8edc629f51ee155b40577f92701f710c0ebc563074cc414c9c9c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 09:05:19.509595 containerd[1462]: time="2025-01-16T09:05:19.509146103Z" level=info msg="CreateContainer within sandbox \"f04355b05f4f8edc629f51ee155b40577f92701f710c0ebc563074cc414c9c9c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29b18d24f75d34e68c3a37db2a191e4b0fb5fe0d37c28dcc6e2deb2f3548bb0c\"" Jan 16 09:05:19.511589 containerd[1462]: time="2025-01-16T09:05:19.510735257Z" level=info msg="StartContainer for \"29b18d24f75d34e68c3a37db2a191e4b0fb5fe0d37c28dcc6e2deb2f3548bb0c\"" Jan 16 09:05:19.529407 containerd[1462]: time="2025-01-16T09:05:19.529343009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vmn6h,Uid:30c8967c-03eb-4fef-a9c9-48ffb0edd1c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bf177c84867dec03011ac18940e08b7ec3ee9fbaa934e9be81b52425b844251\"" Jan 16 09:05:19.531914 kubelet[2593]: E0116 09:05:19.531690 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:19.539227 containerd[1462]: time="2025-01-16T09:05:19.538935550Z" level=info msg="CreateContainer within sandbox \"2bf177c84867dec03011ac18940e08b7ec3ee9fbaa934e9be81b52425b844251\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 09:05:19.576006 containerd[1462]: time="2025-01-16T09:05:19.575905648Z" level=info msg="CreateContainer within sandbox \"2bf177c84867dec03011ac18940e08b7ec3ee9fbaa934e9be81b52425b844251\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd9b682d600a66a8b1189c9406437dd98c6ce1c96f416b737af0679e6d70732c\"" Jan 16 09:05:19.582464 containerd[1462]: time="2025-01-16T09:05:19.581530266Z" level=info msg="StartContainer for \"fd9b682d600a66a8b1189c9406437dd98c6ce1c96f416b737af0679e6d70732c\"" Jan 16 09:05:19.591441 systemd[1]: Started cri-containerd-29b18d24f75d34e68c3a37db2a191e4b0fb5fe0d37c28dcc6e2deb2f3548bb0c.scope - libcontainer container 29b18d24f75d34e68c3a37db2a191e4b0fb5fe0d37c28dcc6e2deb2f3548bb0c. Jan 16 09:05:19.652642 systemd[1]: Started cri-containerd-fd9b682d600a66a8b1189c9406437dd98c6ce1c96f416b737af0679e6d70732c.scope - libcontainer container fd9b682d600a66a8b1189c9406437dd98c6ce1c96f416b737af0679e6d70732c. 
Jan 16 09:05:19.693556 containerd[1462]: time="2025-01-16T09:05:19.693412710Z" level=info msg="StartContainer for \"29b18d24f75d34e68c3a37db2a191e4b0fb5fe0d37c28dcc6e2deb2f3548bb0c\" returns successfully" Jan 16 09:05:19.716564 containerd[1462]: time="2025-01-16T09:05:19.716393414Z" level=info msg="StartContainer for \"fd9b682d600a66a8b1189c9406437dd98c6ce1c96f416b737af0679e6d70732c\" returns successfully" Jan 16 09:05:20.258592 systemd[1]: Started sshd@10-64.227.96.98:22-139.178.68.195:42474.service - OpenSSH per-connection server daemon (139.178.68.195:42474). Jan 16 09:05:20.332485 sshd[3981]: Accepted publickey for core from 139.178.68.195 port 42474 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:20.335387 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:20.343957 systemd-logind[1448]: New session 10 of user core. Jan 16 09:05:20.352369 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 16 09:05:20.516147 kubelet[2593]: E0116 09:05:20.515238 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:20.522442 kubelet[2593]: E0116 09:05:20.522244 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:20.573240 kubelet[2593]: I0116 09:05:20.570809 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vmn6h" podStartSLOduration=39.570786463 podStartE2EDuration="39.570786463s" podCreationTimestamp="2025-01-16 09:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:05:20.548055665 +0000 UTC m=+54.464670017" watchObservedRunningTime="2025-01-16 09:05:20.570786463 +0000 UTC m=+54.487400831" Jan 16 09:05:21.036801 sshd[3981]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:21.041617 systemd[1]: sshd@10-64.227.96.98:22-139.178.68.195:42474.service: Deactivated successfully. Jan 16 09:05:21.044848 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 09:05:21.047385 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Jan 16 09:05:21.048926 systemd-logind[1448]: Removed session 10. 
Jan 16 09:05:21.525221 kubelet[2593]: E0116 09:05:21.525161 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:21.527151 kubelet[2593]: E0116 09:05:21.526160 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:22.528194 kubelet[2593]: E0116 09:05:22.528130 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:22.529548 kubelet[2593]: E0116 09:05:22.529300 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:26.059633 systemd[1]: Started sshd@11-64.227.96.98:22-139.178.68.195:39630.service - OpenSSH per-connection server daemon (139.178.68.195:39630). Jan 16 09:05:26.134214 sshd[4003]: Accepted publickey for core from 139.178.68.195 port 39630 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:26.136277 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:26.142827 systemd-logind[1448]: New session 11 of user core. Jan 16 09:05:26.149390 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 16 09:05:26.374779 sshd[4003]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:26.380490 systemd[1]: sshd@11-64.227.96.98:22-139.178.68.195:39630.service: Deactivated successfully. Jan 16 09:05:26.385527 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 09:05:26.388429 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Jan 16 09:05:26.392105 systemd-logind[1448]: Removed session 11. Jan 16 09:05:31.393585 systemd[1]: Started sshd@12-64.227.96.98:22-139.178.68.195:39646.service - OpenSSH per-connection server daemon (139.178.68.195:39646). Jan 16 09:05:31.440420 sshd[4020]: Accepted publickey for core from 139.178.68.195 port 39646 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:31.442685 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:31.449812 systemd-logind[1448]: New session 12 of user core. Jan 16 09:05:31.453506 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 16 09:05:31.617350 sshd[4020]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:31.622482 systemd[1]: sshd@12-64.227.96.98:22-139.178.68.195:39646.service: Deactivated successfully. Jan 16 09:05:31.622757 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Jan 16 09:05:31.625866 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 09:05:31.628986 systemd-logind[1448]: Removed session 12. Jan 16 09:05:36.638572 systemd[1]: Started sshd@13-64.227.96.98:22-139.178.68.195:38292.service - OpenSSH per-connection server daemon (139.178.68.195:38292). 
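[Editor's note] Each "Accepted publickey ... SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0" line identifies the core user's client key by the SHA256 fingerprint of its public key, the same digest ssh-keygen -lf prints. A small sketch of how such a fingerprint is derived with golang.org/x/crypto/ssh; since the original public key does not appear in the log, this generates a throwaway key rather than reproducing the fingerprint above:

package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Throwaway key for illustration; in practice you would parse an
	// authorized_keys line with ssh.ParseAuthorizedKey instead.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		panic(err)
	}
	// FingerprintSHA256 yields the "SHA256:..." form sshd logs on acceptance.
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}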
Jan 16 09:05:36.700872 sshd[4034]: Accepted publickey for core from 139.178.68.195 port 38292 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:36.703094 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:36.709119 systemd-logind[1448]: New session 13 of user core. Jan 16 09:05:36.715612 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 16 09:05:36.878963 sshd[4034]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:36.883613 systemd[1]: sshd@13-64.227.96.98:22-139.178.68.195:38292.service: Deactivated successfully. Jan 16 09:05:36.886873 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 09:05:36.889882 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Jan 16 09:05:36.891846 systemd-logind[1448]: Removed session 13. Jan 16 09:05:40.243118 kubelet[2593]: E0116 09:05:40.242993 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:41.900643 systemd[1]: Started sshd@14-64.227.96.98:22-139.178.68.195:38308.service - OpenSSH per-connection server daemon (139.178.68.195:38308). Jan 16 09:05:41.952918 sshd[4048]: Accepted publickey for core from 139.178.68.195 port 38308 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:41.955758 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:41.962988 systemd-logind[1448]: New session 14 of user core. Jan 16 09:05:41.969474 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 16 09:05:42.130365 sshd[4048]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:42.135218 systemd[1]: sshd@14-64.227.96.98:22-139.178.68.195:38308.service: Deactivated successfully. Jan 16 09:05:42.137773 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 09:05:42.143897 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Jan 16 09:05:42.145141 systemd-logind[1448]: Removed session 14. Jan 16 09:05:47.149415 systemd[1]: Started sshd@15-64.227.96.98:22-139.178.68.195:38126.service - OpenSSH per-connection server daemon (139.178.68.195:38126). Jan 16 09:05:47.213639 sshd[4063]: Accepted publickey for core from 139.178.68.195 port 38126 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:47.216135 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:47.222218 systemd-logind[1448]: New session 15 of user core. Jan 16 09:05:47.226487 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 16 09:05:47.388858 sshd[4063]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:47.397650 systemd[1]: sshd@15-64.227.96.98:22-139.178.68.195:38126.service: Deactivated successfully. Jan 16 09:05:47.400707 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 09:05:47.404649 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Jan 16 09:05:47.412581 systemd[1]: Started sshd@16-64.227.96.98:22-139.178.68.195:38138.service - OpenSSH per-connection server daemon (139.178.68.195:38138). Jan 16 09:05:47.415230 systemd-logind[1448]: Removed session 15. 
Jan 16 09:05:47.459527 sshd[4076]: Accepted publickey for core from 139.178.68.195 port 38138 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:47.461535 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:47.468483 systemd-logind[1448]: New session 16 of user core. Jan 16 09:05:47.475392 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 16 09:05:47.671419 sshd[4076]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:47.683824 systemd[1]: sshd@16-64.227.96.98:22-139.178.68.195:38138.service: Deactivated successfully. Jan 16 09:05:47.689903 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 09:05:47.695464 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Jan 16 09:05:47.704320 systemd[1]: Started sshd@17-64.227.96.98:22-139.178.68.195:38146.service - OpenSSH per-connection server daemon (139.178.68.195:38146). Jan 16 09:05:47.707733 systemd-logind[1448]: Removed session 16. Jan 16 09:05:47.765730 sshd[4087]: Accepted publickey for core from 139.178.68.195 port 38146 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:47.768690 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:47.779228 systemd-logind[1448]: New session 17 of user core. Jan 16 09:05:47.783680 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 16 09:05:47.932384 sshd[4087]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:47.939206 systemd[1]: sshd@17-64.227.96.98:22-139.178.68.195:38146.service: Deactivated successfully. Jan 16 09:05:47.943489 systemd[1]: session-17.scope: Deactivated successfully. Jan 16 09:05:47.944952 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Jan 16 09:05:47.946193 systemd-logind[1448]: Removed session 17. Jan 16 09:05:51.241401 kubelet[2593]: E0116 09:05:51.241308 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:52.950669 systemd[1]: Started sshd@18-64.227.96.98:22-139.178.68.195:38160.service - OpenSSH per-connection server daemon (139.178.68.195:38160). Jan 16 09:05:52.996665 sshd[4103]: Accepted publickey for core from 139.178.68.195 port 38160 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:52.999074 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:53.006605 systemd-logind[1448]: New session 18 of user core. Jan 16 09:05:53.011332 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 16 09:05:53.156596 sshd[4103]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:53.160473 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Jan 16 09:05:53.161173 systemd[1]: sshd@18-64.227.96.98:22-139.178.68.195:38160.service: Deactivated successfully. Jan 16 09:05:53.163761 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 09:05:53.166410 systemd-logind[1448]: Removed session 18. Jan 16 09:05:58.179495 systemd[1]: Started sshd@19-64.227.96.98:22-139.178.68.195:33816.service - OpenSSH per-connection server daemon (139.178.68.195:33816). 
Jan 16 09:05:58.227211 sshd[4116]: Accepted publickey for core from 139.178.68.195 port 33816 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:05:58.228961 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:05:58.234970 systemd-logind[1448]: New session 19 of user core. Jan 16 09:05:58.241172 kubelet[2593]: E0116 09:05:58.240708 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:05:58.242392 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 16 09:05:58.385292 sshd[4116]: pam_unix(sshd:session): session closed for user core Jan 16 09:05:58.390869 systemd[1]: sshd@19-64.227.96.98:22-139.178.68.195:33816.service: Deactivated successfully. Jan 16 09:05:58.393524 systemd[1]: session-19.scope: Deactivated successfully. Jan 16 09:05:58.394559 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Jan 16 09:05:58.395809 systemd-logind[1448]: Removed session 19. Jan 16 09:06:03.412424 systemd[1]: Started sshd@20-64.227.96.98:22-139.178.68.195:33832.service - OpenSSH per-connection server daemon (139.178.68.195:33832). Jan 16 09:06:03.476513 sshd[4129]: Accepted publickey for core from 139.178.68.195 port 33832 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:03.479313 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:03.486430 systemd-logind[1448]: New session 20 of user core. Jan 16 09:06:03.493541 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 16 09:06:03.645543 sshd[4129]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:03.650900 systemd[1]: sshd@20-64.227.96.98:22-139.178.68.195:33832.service: Deactivated successfully. Jan 16 09:06:03.654518 systemd[1]: session-20.scope: Deactivated successfully. Jan 16 09:06:03.657193 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Jan 16 09:06:03.658992 systemd-logind[1448]: Removed session 20. Jan 16 09:06:08.669598 systemd[1]: Started sshd@21-64.227.96.98:22-139.178.68.195:33578.service - OpenSSH per-connection server daemon (139.178.68.195:33578). Jan 16 09:06:08.723879 sshd[4142]: Accepted publickey for core from 139.178.68.195 port 33578 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:08.725881 sshd[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:08.733671 systemd-logind[1448]: New session 21 of user core. Jan 16 09:06:08.736462 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 16 09:06:08.884385 sshd[4142]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:08.896412 systemd[1]: sshd@21-64.227.96.98:22-139.178.68.195:33578.service: Deactivated successfully. Jan 16 09:06:08.899724 systemd[1]: session-21.scope: Deactivated successfully. Jan 16 09:06:08.902158 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Jan 16 09:06:08.908605 systemd[1]: Started sshd@22-64.227.96.98:22-139.178.68.195:33588.service - OpenSSH per-connection server daemon (139.178.68.195:33588). Jan 16 09:06:08.911963 systemd-logind[1448]: Removed session 21. 
Jan 16 09:06:08.955898 sshd[4155]: Accepted publickey for core from 139.178.68.195 port 33588 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:08.957923 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:08.963810 systemd-logind[1448]: New session 22 of user core. Jan 16 09:06:08.969515 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 16 09:06:09.385044 sshd[4155]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:09.400306 systemd[1]: sshd@22-64.227.96.98:22-139.178.68.195:33588.service: Deactivated successfully. Jan 16 09:06:09.403363 systemd[1]: session-22.scope: Deactivated successfully. Jan 16 09:06:09.405773 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Jan 16 09:06:09.413858 systemd[1]: Started sshd@23-64.227.96.98:22-139.178.68.195:33598.service - OpenSSH per-connection server daemon (139.178.68.195:33598). Jan 16 09:06:09.417241 systemd-logind[1448]: Removed session 22. Jan 16 09:06:09.499376 sshd[4166]: Accepted publickey for core from 139.178.68.195 port 33598 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:09.501764 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:09.508322 systemd-logind[1448]: New session 23 of user core. Jan 16 09:06:09.519392 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 16 09:06:11.453511 systemd[1]: Started sshd@24-64.227.96.98:22-218.92.0.134:14045.service - OpenSSH per-connection server daemon (218.92.0.134:14045). Jan 16 09:06:11.515554 sshd[4166]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:11.533195 systemd[1]: sshd@23-64.227.96.98:22-139.178.68.195:33598.service: Deactivated successfully. Jan 16 09:06:11.540299 systemd[1]: session-23.scope: Deactivated successfully. Jan 16 09:06:11.544464 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit. Jan 16 09:06:11.553174 systemd[1]: Started sshd@25-64.227.96.98:22-139.178.68.195:33610.service - OpenSSH per-connection server daemon (139.178.68.195:33610). Jan 16 09:06:11.558101 systemd-logind[1448]: Removed session 23. Jan 16 09:06:11.625288 sshd[4185]: Accepted publickey for core from 139.178.68.195 port 33610 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:11.627481 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:11.633714 systemd-logind[1448]: New session 24 of user core. Jan 16 09:06:11.638301 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 16 09:06:11.959499 sshd[4185]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:11.972900 systemd[1]: sshd@25-64.227.96.98:22-139.178.68.195:33610.service: Deactivated successfully. Jan 16 09:06:11.977525 systemd[1]: session-24.scope: Deactivated successfully. Jan 16 09:06:11.983537 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit. Jan 16 09:06:11.992527 systemd[1]: Started sshd@26-64.227.96.98:22-139.178.68.195:33626.service - OpenSSH per-connection server daemon (139.178.68.195:33626). Jan 16 09:06:11.995037 systemd-logind[1448]: Removed session 24. 
Jan 16 09:06:12.055357 sshd[4197]: Accepted publickey for core from 139.178.68.195 port 33626 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:12.057772 sshd[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:12.064490 systemd-logind[1448]: New session 25 of user core. Jan 16 09:06:12.069286 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 16 09:06:12.220646 sshd[4197]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:12.226244 systemd[1]: sshd@26-64.227.96.98:22-139.178.68.195:33626.service: Deactivated successfully. Jan 16 09:06:12.229360 systemd[1]: session-25.scope: Deactivated successfully. Jan 16 09:06:12.231093 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit. Jan 16 09:06:12.232892 systemd-logind[1448]: Removed session 25. Jan 16 09:06:12.243351 kubelet[2593]: E0116 09:06:12.242540 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:12.717633 sshd[4211]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.134 user=root Jan 16 09:06:14.447230 sshd[4178]: PAM: Permission denied for root from 218.92.0.134 Jan 16 09:06:14.797238 sshd[4212]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.134 user=root Jan 16 09:06:16.466480 sshd[4178]: PAM: Permission denied for root from 218.92.0.134 Jan 16 09:06:16.795321 sshd[4213]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.134 user=root Jan 16 09:06:17.241471 systemd[1]: Started sshd@27-64.227.96.98:22-139.178.68.195:46100.service - OpenSSH per-connection server daemon (139.178.68.195:46100). Jan 16 09:06:17.288009 sshd[4215]: Accepted publickey for core from 139.178.68.195 port 46100 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:17.290703 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:17.297070 systemd-logind[1448]: New session 26 of user core. Jan 16 09:06:17.305346 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 16 09:06:17.450987 sshd[4215]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:17.456197 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit. Jan 16 09:06:17.457816 systemd[1]: sshd@27-64.227.96.98:22-139.178.68.195:46100.service: Deactivated successfully. Jan 16 09:06:17.460807 systemd[1]: session-26.scope: Deactivated successfully. Jan 16 09:06:17.462472 systemd-logind[1448]: Removed session 26. Jan 16 09:06:19.074653 sshd[4178]: PAM: Permission denied for root from 218.92.0.134 Jan 16 09:06:19.252102 sshd[4178]: Received disconnect from 218.92.0.134 port 14045:11: [preauth] Jan 16 09:06:19.252102 sshd[4178]: Disconnected from authenticating user root 218.92.0.134 port 14045 [preauth] Jan 16 09:06:19.255323 systemd[1]: sshd@24-64.227.96.98:22-218.92.0.134:14045.service: Deactivated successfully. Jan 16 09:06:22.470498 systemd[1]: Started sshd@28-64.227.96.98:22-139.178.68.195:46114.service - OpenSSH per-connection server daemon (139.178.68.195:46114). 
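[Editor's note] Interleaved with the ordinary core-user sessions above is a textbook password-guessing run from 218.92.0.134: three pam_unix authentication failures for root, three "PAM: Permission denied" rejections, then a preauth disconnect. A throwaway sketch of spotting such sources in a journal dump by counting failures per rhost; the threshold is an arbitrary choice for this illustration:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Feed journal text on stdin, e.g. piped from journalctl.
	re := regexp.MustCompile(`authentication failure;.*rhost=(\S+)`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for host, n := range counts {
		if n >= 3 { // arbitrary threshold; 218.92.0.134 hits it here
			fmt.Printf("%s: %d failed attempts\n", host, n)
		}
	}
}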
Jan 16 09:06:22.521305 sshd[4233]: Accepted publickey for core from 139.178.68.195 port 46114 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:22.523685 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:22.529826 systemd-logind[1448]: New session 27 of user core. Jan 16 09:06:22.537381 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 16 09:06:22.674423 sshd[4233]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:22.679070 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit. Jan 16 09:06:22.679320 systemd[1]: sshd@28-64.227.96.98:22-139.178.68.195:46114.service: Deactivated successfully. Jan 16 09:06:22.681364 systemd[1]: session-27.scope: Deactivated successfully. Jan 16 09:06:22.683913 systemd-logind[1448]: Removed session 27. Jan 16 09:06:23.241728 kubelet[2593]: E0116 09:06:23.241559 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:27.697348 systemd[1]: Started sshd@29-64.227.96.98:22-139.178.68.195:40088.service - OpenSSH per-connection server daemon (139.178.68.195:40088). Jan 16 09:06:27.762296 sshd[4248]: Accepted publickey for core from 139.178.68.195 port 40088 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:27.764739 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:27.773009 systemd-logind[1448]: New session 28 of user core. Jan 16 09:06:27.782422 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 16 09:06:27.952296 sshd[4248]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:27.960009 systemd[1]: sshd@29-64.227.96.98:22-139.178.68.195:40088.service: Deactivated successfully. Jan 16 09:06:27.962620 systemd[1]: session-28.scope: Deactivated successfully. Jan 16 09:06:27.964004 systemd-logind[1448]: Session 28 logged out. Waiting for processes to exit. Jan 16 09:06:27.965674 systemd-logind[1448]: Removed session 28. Jan 16 09:06:32.966378 systemd[1]: Started sshd@30-64.227.96.98:22-139.178.68.195:40100.service - OpenSSH per-connection server daemon (139.178.68.195:40100). Jan 16 09:06:33.025915 sshd[4262]: Accepted publickey for core from 139.178.68.195 port 40100 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:33.028206 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:33.034092 systemd-logind[1448]: New session 29 of user core. Jan 16 09:06:33.044353 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 16 09:06:33.198942 sshd[4262]: pam_unix(sshd:session): session closed for user core Jan 16 09:06:33.210279 systemd[1]: sshd@30-64.227.96.98:22-139.178.68.195:40100.service: Deactivated successfully. Jan 16 09:06:33.213457 systemd[1]: session-29.scope: Deactivated successfully. Jan 16 09:06:33.215617 systemd-logind[1448]: Session 29 logged out. Waiting for processes to exit. Jan 16 09:06:33.224640 systemd[1]: Started sshd@31-64.227.96.98:22-139.178.68.195:40114.service - OpenSSH per-connection server daemon (139.178.68.195:40114). Jan 16 09:06:33.227636 systemd-logind[1448]: Removed session 29. 
Jan 16 09:06:33.292098 sshd[4275]: Accepted publickey for core from 139.178.68.195 port 40114 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0 Jan 16 09:06:33.293937 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 09:06:33.300958 systemd-logind[1448]: New session 30 of user core. Jan 16 09:06:33.309425 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 16 09:06:34.243998 kubelet[2593]: E0116 09:06:34.242762 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 16 09:06:34.867252 kubelet[2593]: I0116 09:06:34.867148 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4qhsw" podStartSLOduration=113.8671172 podStartE2EDuration="1m53.8671172s" podCreationTimestamp="2025-01-16 09:04:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:05:20.60298581 +0000 UTC m=+54.519600208" watchObservedRunningTime="2025-01-16 09:06:34.8671172 +0000 UTC m=+128.783731571" Jan 16 09:06:34.966577 containerd[1462]: time="2025-01-16T09:06:34.966503726Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 09:06:35.027160 containerd[1462]: time="2025-01-16T09:06:35.026955861Z" level=info msg="StopContainer for \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\" with timeout 30 (s)" Jan 16 09:06:35.027498 containerd[1462]: time="2025-01-16T09:06:35.027438782Z" level=info msg="StopContainer for \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\" with timeout 2 (s)" Jan 16 09:06:35.029986 containerd[1462]: time="2025-01-16T09:06:35.029937692Z" level=info msg="Stop container \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\" with signal terminated" Jan 16 09:06:35.030622 containerd[1462]: time="2025-01-16T09:06:35.030206336Z" level=info msg="Stop container \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\" with signal terminated" Jan 16 09:06:35.048921 systemd-networkd[1367]: lxc_health: Link DOWN Jan 16 09:06:35.049336 systemd-networkd[1367]: lxc_health: Lost carrier Jan 16 09:06:35.053454 systemd[1]: cri-containerd-6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111.scope: Deactivated successfully. Jan 16 09:06:35.076293 systemd[1]: cri-containerd-ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4.scope: Deactivated successfully. Jan 16 09:06:35.077892 systemd[1]: cri-containerd-ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4.scope: Consumed 10.267s CPU time. Jan 16 09:06:35.116815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111-rootfs.mount: Deactivated successfully. Jan 16 09:06:35.134779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4-rootfs.mount: Deactivated successfully. 
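[Editor's note] The teardown that begins above is driven through the CRI: after the Cilium CNI config is removed, kubelet asks containerd to stop each container with a grace timeout ("with timeout 30 (s)", "with timeout 2 (s)"), i.e. SIGTERM first, SIGKILL once the timeout lapses. A hedged sketch of issuing the same StopContainer call against containerd's CRI socket; the socket path is the assumed containerd default, and the container ID is the cilium container stopped in the log:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Timeout is the grace period in seconds before the runtime escalates
	// from SIGTERM to SIGKILL, matching "with timeout 2 (s)" in the log.
	_, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4",
		Timeout:     2,
	})
	if err != nil {
		log.Fatal(err)
	}
}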
Jan 16 09:06:35.146148 containerd[1462]: time="2025-01-16T09:06:35.145918250Z" level=info msg="shim disconnected" id=6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111 namespace=k8s.io Jan 16 09:06:35.146568 containerd[1462]: time="2025-01-16T09:06:35.146180574Z" level=warning msg="cleaning up after shim disconnected" id=6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111 namespace=k8s.io Jan 16 09:06:35.146568 containerd[1462]: time="2025-01-16T09:06:35.146202486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:06:35.146568 containerd[1462]: time="2025-01-16T09:06:35.146004566Z" level=info msg="shim disconnected" id=ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4 namespace=k8s.io Jan 16 09:06:35.146568 containerd[1462]: time="2025-01-16T09:06:35.146277453Z" level=warning msg="cleaning up after shim disconnected" id=ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4 namespace=k8s.io Jan 16 09:06:35.146568 containerd[1462]: time="2025-01-16T09:06:35.146288401Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:06:35.168544 containerd[1462]: time="2025-01-16T09:06:35.167008732Z" level=warning msg="cleanup warnings time=\"2025-01-16T09:06:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 16 09:06:35.172513 containerd[1462]: time="2025-01-16T09:06:35.172400833Z" level=info msg="StopContainer for \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\" returns successfully" Jan 16 09:06:35.179844 containerd[1462]: time="2025-01-16T09:06:35.179754795Z" level=info msg="StopPodSandbox for \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\"" Jan 16 09:06:35.179844 containerd[1462]: time="2025-01-16T09:06:35.179853132Z" level=info msg="Container to stop \"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:06:35.180573 containerd[1462]: time="2025-01-16T09:06:35.179874831Z" level=info msg="Container to stop \"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:06:35.180573 containerd[1462]: time="2025-01-16T09:06:35.179893459Z" level=info msg="Container to stop \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:06:35.180573 containerd[1462]: time="2025-01-16T09:06:35.179917734Z" level=info msg="Container to stop \"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:06:35.180573 containerd[1462]: time="2025-01-16T09:06:35.179934983Z" level=info msg="Container to stop \"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:06:35.184108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe-shm.mount: Deactivated successfully. 
Jan 16 09:06:35.190889 containerd[1462]: time="2025-01-16T09:06:35.190543818Z" level=info msg="StopContainer for \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\" returns successfully" Jan 16 09:06:35.195507 containerd[1462]: time="2025-01-16T09:06:35.193191635Z" level=info msg="StopPodSandbox for \"4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4\"" Jan 16 09:06:35.195507 containerd[1462]: time="2025-01-16T09:06:35.193240799Z" level=info msg="Container to stop \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 16 09:06:35.197244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4-shm.mount: Deactivated successfully. Jan 16 09:06:35.199789 systemd[1]: cri-containerd-1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe.scope: Deactivated successfully. Jan 16 09:06:35.211524 systemd[1]: cri-containerd-4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4.scope: Deactivated successfully. Jan 16 09:06:35.246463 containerd[1462]: time="2025-01-16T09:06:35.246348625Z" level=info msg="shim disconnected" id=1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe namespace=k8s.io Jan 16 09:06:35.246463 containerd[1462]: time="2025-01-16T09:06:35.246447291Z" level=warning msg="cleaning up after shim disconnected" id=1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe namespace=k8s.io Jan 16 09:06:35.246463 containerd[1462]: time="2025-01-16T09:06:35.246467356Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:06:35.262259 containerd[1462]: time="2025-01-16T09:06:35.262175321Z" level=info msg="shim disconnected" id=4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4 namespace=k8s.io Jan 16 09:06:35.262858 containerd[1462]: time="2025-01-16T09:06:35.262592153Z" level=warning msg="cleaning up after shim disconnected" id=4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4 namespace=k8s.io Jan 16 09:06:35.262858 containerd[1462]: time="2025-01-16T09:06:35.262629397Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 16 09:06:35.288889 containerd[1462]: time="2025-01-16T09:06:35.287504216Z" level=info msg="TearDown network for sandbox \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" successfully" Jan 16 09:06:35.288889 containerd[1462]: time="2025-01-16T09:06:35.287558989Z" level=info msg="StopPodSandbox for \"1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe\" returns successfully" Jan 16 09:06:35.306278 containerd[1462]: time="2025-01-16T09:06:35.305180416Z" level=info msg="TearDown network for sandbox \"4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4\" successfully" Jan 16 09:06:35.306278 containerd[1462]: time="2025-01-16T09:06:35.306096714Z" level=info msg="StopPodSandbox for \"4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4\" returns successfully" Jan 16 09:06:35.478989 kubelet[2593]: I0116 09:06:35.477870 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-clustermesh-secrets\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.478989 kubelet[2593]: I0116 09:06:35.477927 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-host-proc-sys-net\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.478989 kubelet[2593]: I0116 09:06:35.477956 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-config-path\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.478989 kubelet[2593]: I0116 09:06:35.477979 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhgsx\" (UniqueName: \"kubernetes.io/projected/c0305cd0-5902-4546-ae0e-abe114d1d23e-kube-api-access-mhgsx\") pod \"c0305cd0-5902-4546-ae0e-abe114d1d23e\" (UID: \"c0305cd0-5902-4546-ae0e-abe114d1d23e\") " Jan 16 09:06:35.478989 kubelet[2593]: I0116 09:06:35.477997 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-host-proc-sys-kernel\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.478989 kubelet[2593]: I0116 09:06:35.478040 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cni-path\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.479789 kubelet[2593]: I0116 09:06:35.478056 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-lib-modules\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.479789 kubelet[2593]: I0116 09:06:35.478080 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-hubble-tls\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.479789 kubelet[2593]: I0116 09:06:35.478104 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-xtables-lock\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.479789 kubelet[2593]: I0116 09:06:35.478118 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-hostproc\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.479789 kubelet[2593]: I0116 09:06:35.478149 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0305cd0-5902-4546-ae0e-abe114d1d23e-cilium-config-path\") pod \"c0305cd0-5902-4546-ae0e-abe114d1d23e\" (UID: \"c0305cd0-5902-4546-ae0e-abe114d1d23e\") " Jan 16 09:06:35.479789 kubelet[2593]: I0116 09:06:35.478168 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-bpf-maps\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.479961 kubelet[2593]: I0116 09:06:35.478183 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-cgroup\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.479961 kubelet[2593]: I0116 09:06:35.478199 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5hck\" (UniqueName: \"kubernetes.io/projected/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-kube-api-access-l5hck\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.479961 kubelet[2593]: I0116 09:06:35.478215 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-run\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.479961 kubelet[2593]: I0116 09:06:35.478229 2593 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-etc-cni-netd\") pod \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\" (UID: \"b3bab0bc-e94d-458f-a9a5-179d6a8b28d2\") " Jan 16 09:06:35.481009 kubelet[2593]: I0116 09:06:35.478305 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:06:35.482324 kubelet[2593]: I0116 09:06:35.482286 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 16 09:06:35.485535 kubelet[2593]: I0116 09:06:35.485453 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:06:35.485751 kubelet[2593]: I0116 09:06:35.485733 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 16 09:06:35.485832 kubelet[2593]: I0116 09:06:35.485822 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-hostproc" (OuterVolumeSpecName: "hostproc") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:06:35.485890 kubelet[2593]: I0116 09:06:35.485879 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:06:35.485957 kubelet[2593]: I0116 09:06:35.485933 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cni-path" (OuterVolumeSpecName: "cni-path") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:06:35.486156 kubelet[2593]: I0116 09:06:35.486143 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:06:35.489589 kubelet[2593]: I0116 09:06:35.489550 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:06:35.489765 kubelet[2593]: I0116 09:06:35.489751 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:06:35.492337 kubelet[2593]: I0116 09:06:35.491867 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0305cd0-5902-4546-ae0e-abe114d1d23e-kube-api-access-mhgsx" (OuterVolumeSpecName: "kube-api-access-mhgsx") pod "c0305cd0-5902-4546-ae0e-abe114d1d23e" (UID: "c0305cd0-5902-4546-ae0e-abe114d1d23e"). InnerVolumeSpecName "kube-api-access-mhgsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 16 09:06:35.492337 kubelet[2593]: I0116 09:06:35.491956 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 16 09:06:35.492786 kubelet[2593]: I0116 09:06:35.492761 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0305cd0-5902-4546-ae0e-abe114d1d23e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c0305cd0-5902-4546-ae0e-abe114d1d23e" (UID: "c0305cd0-5902-4546-ae0e-abe114d1d23e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 16 09:06:35.492865 kubelet[2593]: I0116 09:06:35.492854 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:06:35.492921 kubelet[2593]: I0116 09:06:35.492912 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 16 09:06:35.495457 kubelet[2593]: I0116 09:06:35.495384 2593 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-kube-api-access-l5hck" (OuterVolumeSpecName: "kube-api-access-l5hck") pod "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" (UID: "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"). InnerVolumeSpecName "kube-api-access-l5hck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 16 09:06:35.582498 kubelet[2593]: I0116 09:06:35.582237 2593 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-lib-modules\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582498 kubelet[2593]: I0116 09:06:35.582324 2593 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-hostproc\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582498 kubelet[2593]: I0116 09:06:35.582337 2593 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-hubble-tls\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582498 kubelet[2593]: I0116 09:06:35.582347 2593 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-xtables-lock\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582498 kubelet[2593]: I0116 09:06:35.582355 2593 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-bpf-maps\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582498 kubelet[2593]: I0116 09:06:35.582368 2593 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0305cd0-5902-4546-ae0e-abe114d1d23e-cilium-config-path\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582498 kubelet[2593]: I0116 
09:06:35.582378 2593 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-cgroup\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582498 kubelet[2593]: I0116 09:06:35.582388 2593 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-l5hck\" (UniqueName: \"kubernetes.io/projected/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-kube-api-access-l5hck\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582857 kubelet[2593]: I0116 09:06:35.582399 2593 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-run\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582857 kubelet[2593]: I0116 09:06:35.582409 2593 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-etc-cni-netd\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582857 kubelet[2593]: I0116 09:06:35.582417 2593 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mhgsx\" (UniqueName: \"kubernetes.io/projected/c0305cd0-5902-4546-ae0e-abe114d1d23e-kube-api-access-mhgsx\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582857 kubelet[2593]: I0116 09:06:35.582426 2593 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-clustermesh-secrets\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582857 kubelet[2593]: I0116 09:06:35.582434 2593 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-host-proc-sys-net\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582857 kubelet[2593]: I0116 09:06:35.582443 2593 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cilium-config-path\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582857 kubelet[2593]: I0116 09:06:35.582452 2593 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-host-proc-sys-kernel\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.582857 kubelet[2593]: I0116 09:06:35.582460 2593 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2-cni-path\") on node \"ci-4081.3.0-a-a78886c5b6\" DevicePath \"\"" Jan 16 09:06:35.750787 kubelet[2593]: I0116 09:06:35.750465 2593 scope.go:117] "RemoveContainer" containerID="6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111" Jan 16 09:06:35.759413 systemd[1]: Removed slice kubepods-besteffort-podc0305cd0_5902_4546_ae0e_abe114d1d23e.slice - libcontainer container kubepods-besteffort-podc0305cd0_5902_4546_ae0e_abe114d1d23e.slice. 
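[Editor's note] The "Removed slice kubepods-besteffort-podc0305cd0_5902_4546_ae0e_abe114d1d23e.slice" lines show the systemd cgroup driver's naming scheme: the pod's QoS class plus its UID with dashes mapped to underscores, since systemd reserves "-" as the slice hierarchy separator. A tiny reconstruction of that mapping, which reproduces both slice names removed in this log:

package main

import (
	"fmt"
	"strings"
)

// sliceName mirrors the pattern visible in the log: dashes in the pod UID
// become underscores because "-" separates levels in systemd slice names.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "c0305cd0-5902-4546-ae0e-abe114d1d23e"))
	fmt.Println(sliceName("burstable", "b3bab0bc-e94d-458f-a9a5-179d6a8b28d2"))
}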
Jan 16 09:06:35.769751 containerd[1462]: time="2025-01-16T09:06:35.769466865Z" level=info msg="RemoveContainer for \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\"" Jan 16 09:06:35.779963 containerd[1462]: time="2025-01-16T09:06:35.779646738Z" level=info msg="RemoveContainer for \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\" returns successfully" Jan 16 09:06:35.789722 kubelet[2593]: I0116 09:06:35.789375 2593 scope.go:117] "RemoveContainer" containerID="6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111" Jan 16 09:06:35.791001 systemd[1]: Removed slice kubepods-burstable-podb3bab0bc_e94d_458f_a9a5_179d6a8b28d2.slice - libcontainer container kubepods-burstable-podb3bab0bc_e94d_458f_a9a5_179d6a8b28d2.slice. Jan 16 09:06:35.791154 systemd[1]: kubepods-burstable-podb3bab0bc_e94d_458f_a9a5_179d6a8b28d2.slice: Consumed 10.385s CPU time. Jan 16 09:06:35.819126 containerd[1462]: time="2025-01-16T09:06:35.801155087Z" level=error msg="ContainerStatus for \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\": not found" Jan 16 09:06:35.856344 kubelet[2593]: E0116 09:06:35.856141 2593 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\": not found" containerID="6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111" Jan 16 09:06:35.864603 kubelet[2593]: I0116 09:06:35.860530 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111"} err="failed to get container status \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\": rpc error: code = NotFound desc = an error occurred when try to find container \"6694e5d20dfd1ea24df957480846ad0128fd248cc92f93db277c0ad0099e5111\": not found" Jan 16 09:06:35.864603 kubelet[2593]: I0116 09:06:35.864425 2593 scope.go:117] "RemoveContainer" containerID="ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4" Jan 16 09:06:35.866312 containerd[1462]: time="2025-01-16T09:06:35.866280583Z" level=info msg="RemoveContainer for \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\"" Jan 16 09:06:35.871567 containerd[1462]: time="2025-01-16T09:06:35.871414584Z" level=info msg="RemoveContainer for \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\" returns successfully" Jan 16 09:06:35.872994 kubelet[2593]: I0116 09:06:35.872677 2593 scope.go:117] "RemoveContainer" containerID="dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793" Jan 16 09:06:35.878816 containerd[1462]: time="2025-01-16T09:06:35.878561797Z" level=info msg="RemoveContainer for \"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793\"" Jan 16 09:06:35.882853 containerd[1462]: time="2025-01-16T09:06:35.882785858Z" level=info msg="RemoveContainer for \"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793\" returns successfully" Jan 16 09:06:35.883384 kubelet[2593]: I0116 09:06:35.883263 2593 scope.go:117] "RemoveContainer" containerID="84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892" Jan 16 09:06:35.884988 containerd[1462]: time="2025-01-16T09:06:35.884940345Z" level=info 
msg="RemoveContainer for \"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892\"" Jan 16 09:06:35.889158 containerd[1462]: time="2025-01-16T09:06:35.889090626Z" level=info msg="RemoveContainer for \"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892\" returns successfully" Jan 16 09:06:35.889982 kubelet[2593]: I0116 09:06:35.889834 2593 scope.go:117] "RemoveContainer" containerID="3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b" Jan 16 09:06:35.892104 containerd[1462]: time="2025-01-16T09:06:35.892059134Z" level=info msg="RemoveContainer for \"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b\"" Jan 16 09:06:35.902152 containerd[1462]: time="2025-01-16T09:06:35.901928598Z" level=info msg="RemoveContainer for \"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b\" returns successfully" Jan 16 09:06:35.902309 kubelet[2593]: I0116 09:06:35.902250 2593 scope.go:117] "RemoveContainer" containerID="06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d" Jan 16 09:06:35.903860 containerd[1462]: time="2025-01-16T09:06:35.903817443Z" level=info msg="RemoveContainer for \"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d\"" Jan 16 09:06:35.907308 containerd[1462]: time="2025-01-16T09:06:35.907252086Z" level=info msg="RemoveContainer for \"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d\" returns successfully" Jan 16 09:06:35.908268 kubelet[2593]: I0116 09:06:35.908105 2593 scope.go:117] "RemoveContainer" containerID="ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4" Jan 16 09:06:35.908897 containerd[1462]: time="2025-01-16T09:06:35.908562350Z" level=error msg="ContainerStatus for \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\": not found" Jan 16 09:06:35.908993 kubelet[2593]: E0116 09:06:35.908741 2593 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\": not found" containerID="ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4" Jan 16 09:06:35.908993 kubelet[2593]: I0116 09:06:35.908774 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4"} err="failed to get container status \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba30724a3416c5d39e38a24d9874c79a4574ec51e05671aedacae0dd9817f0b4\": not found" Jan 16 09:06:35.908993 kubelet[2593]: I0116 09:06:35.908799 2593 scope.go:117] "RemoveContainer" containerID="dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793" Jan 16 09:06:35.909487 containerd[1462]: time="2025-01-16T09:06:35.909389904Z" level=error msg="ContainerStatus for \"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793\": not found" Jan 16 09:06:35.909605 kubelet[2593]: E0116 09:06:35.909577 2593 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793\": not found" containerID="dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793" Jan 16 09:06:35.909659 kubelet[2593]: I0116 09:06:35.909615 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793"} err="failed to get container status \"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793\": rpc error: code = NotFound desc = an error occurred when try to find container \"dad771ee3923226fa6516b24747721fd924060cad363e73288793e14354e4793\": not found" Jan 16 09:06:35.909659 kubelet[2593]: I0116 09:06:35.909645 2593 scope.go:117] "RemoveContainer" containerID="84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892" Jan 16 09:06:35.909866 containerd[1462]: time="2025-01-16T09:06:35.909815547Z" level=error msg="ContainerStatus for \"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892\": not found" Jan 16 09:06:35.910211 kubelet[2593]: E0116 09:06:35.910063 2593 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892\": not found" containerID="84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892" Jan 16 09:06:35.910211 kubelet[2593]: I0116 09:06:35.910098 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892"} err="failed to get container status \"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892\": rpc error: code = NotFound desc = an error occurred when try to find container \"84b8fd8677835e957ce0a6bf79f662cede9fbf808a59ae92a3ce7a07b90a7892\": not found" Jan 16 09:06:35.910211 kubelet[2593]: I0116 09:06:35.910121 2593 scope.go:117] "RemoveContainer" containerID="3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b" Jan 16 09:06:35.910516 containerd[1462]: time="2025-01-16T09:06:35.910449622Z" level=error msg="ContainerStatus for \"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b\": not found" Jan 16 09:06:35.910649 kubelet[2593]: E0116 09:06:35.910581 2593 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b\": not found" containerID="3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b" Jan 16 09:06:35.910701 kubelet[2593]: I0116 09:06:35.910650 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b"} err="failed to get container status \"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3af84084485df778f2ba296b333facbeb52f05481695d693a512700a489bd60b\": not found" Jan 16 09:06:35.910701 
kubelet[2593]: I0116 09:06:35.910667 2593 scope.go:117] "RemoveContainer" containerID="06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d" Jan 16 09:06:35.911157 containerd[1462]: time="2025-01-16T09:06:35.910989162Z" level=error msg="ContainerStatus for \"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d\": not found" Jan 16 09:06:35.911458 kubelet[2593]: E0116 09:06:35.911433 2593 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d\": not found" containerID="06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d" Jan 16 09:06:35.911498 kubelet[2593]: I0116 09:06:35.911459 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d"} err="failed to get container status \"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d\": rpc error: code = NotFound desc = an error occurred when try to find container \"06545d40fed2c866af24add55a5d79f5e6cd07f73fa3686613aff6e09ac7395d\": not found" Jan 16 09:06:35.927397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a3a2277e10d015c3cae0556d0d59f89eac4685a7716868a8ac17c82f7426ee4-rootfs.mount: Deactivated successfully. Jan 16 09:06:35.927520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ddd174dfd9960a3926a2d1e6830d960ac3da5ee00ccf5d5f73d52530c11dbbe-rootfs.mount: Deactivated successfully. Jan 16 09:06:35.927584 systemd[1]: var-lib-kubelet-pods-c0305cd0\x2d5902\x2d4546\x2dae0e\x2dabe114d1d23e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmhgsx.mount: Deactivated successfully. Jan 16 09:06:35.927673 systemd[1]: var-lib-kubelet-pods-b3bab0bc\x2de94d\x2d458f\x2da9a5\x2d179d6a8b28d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl5hck.mount: Deactivated successfully. Jan 16 09:06:35.927732 systemd[1]: var-lib-kubelet-pods-b3bab0bc\x2de94d\x2d458f\x2da9a5\x2d179d6a8b28d2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 16 09:06:35.927792 systemd[1]: var-lib-kubelet-pods-b3bab0bc\x2de94d\x2d458f\x2da9a5\x2d179d6a8b28d2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
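The NotFound churn above is harmless: the kubelet re-queries ContainerStatus for IDs it has already removed, containerd answers NotFound, and the deletor treats that as an already-completed delete. A minimal sketch of that idempotent pattern against the CRI runtime service (an illustrative helper, not kubelet's actual code; assumes an already-dialed client):

package crideletion

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeIfPresent deletes a container but treats NotFound as success,
// mirroring the "DeleteContainer returned error ... not found" entries above.
func removeIfPresent(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		return nil // already gone: deletion is idempotent
	}
	if err != nil {
		return err
	}
	_, err = rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id})
	return err
}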
Jan 16 09:06:36.240935 kubelet[2593]: E0116 09:06:36.240855 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:36.246786 kubelet[2593]: I0116 09:06:36.246720 2593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" path="/var/lib/kubelet/pods/b3bab0bc-e94d-458f-a9a5-179d6a8b28d2/volumes"
Jan 16 09:06:36.247704 kubelet[2593]: I0116 09:06:36.247643 2593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0305cd0-5902-4546-ae0e-abe114d1d23e" path="/var/lib/kubelet/pods/c0305cd0-5902-4546-ae0e-abe114d1d23e/volumes"
Jan 16 09:06:36.397896 kubelet[2593]: E0116 09:06:36.397780 2593 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 16 09:06:36.805128 sshd[4275]: pam_unix(sshd:session): session closed for user core
Jan 16 09:06:36.813595 systemd[1]: sshd@31-64.227.96.98:22-139.178.68.195:40114.service: Deactivated successfully.
Jan 16 09:06:36.816904 systemd[1]: session-30.scope: Deactivated successfully.
Jan 16 09:06:36.821224 systemd-logind[1448]: Session 30 logged out. Waiting for processes to exit.
Jan 16 09:06:36.827871 systemd[1]: Started sshd@32-64.227.96.98:22-139.178.68.195:60490.service - OpenSSH per-connection server daemon (139.178.68.195:60490).
Jan 16 09:06:36.829830 systemd-logind[1448]: Removed session 30.
Jan 16 09:06:36.880898 sshd[4434]: Accepted publickey for core from 139.178.68.195 port 60490 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:06:36.883242 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:06:36.890334 systemd-logind[1448]: New session 31 of user core.
Jan 16 09:06:36.896331 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 16 09:06:37.490213 sshd[4434]: pam_unix(sshd:session): session closed for user core
Jan 16 09:06:37.502175 systemd[1]: sshd@32-64.227.96.98:22-139.178.68.195:60490.service: Deactivated successfully.
Jan 16 09:06:37.508313 systemd[1]: session-31.scope: Deactivated successfully.
Jan 16 09:06:37.510975 systemd-logind[1448]: Session 31 logged out. Waiting for processes to exit.
Jan 16 09:06:37.528405 systemd[1]: Started sshd@33-64.227.96.98:22-139.178.68.195:60494.service - OpenSSH per-connection server daemon (139.178.68.195:60494).
Jan 16 09:06:37.529343 kubelet[2593]: I0116 09:06:37.521741 2593 topology_manager.go:215] "Topology Admit Handler" podUID="ef13d274-a420-4194-88ac-a25b6b04cc34" podNamespace="kube-system" podName="cilium-ckh84"
Jan 16 09:06:37.532185 systemd-logind[1448]: Removed session 31.
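The recurring dns.go:153 record fires because the droplet's resolv.conf carries more nameserver entries than Linux resolvers honor (three, glibc's MAXNS); note the applied line above even repeats 67.207.67.3. A minimal sketch of the truncation rule (hypothetical helper, not kubelet's implementation):

package resolvcheck

import (
	"bufio"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS: resolvers use at most three entries

// appliedNameservers returns the nameservers that will actually be used;
// anything past the limit is silently dropped, which is what kubelet warns about.
func appliedNameservers(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var ns []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if fields := strings.Fields(sc.Text()); len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > maxNameservers {
		ns = ns[:maxNameservers]
	}
	return ns, sc.Err()
}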
Jan 16 09:06:37.537557 kubelet[2593]: E0116 09:06:37.537496 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" containerName="mount-cgroup"
Jan 16 09:06:37.537557 kubelet[2593]: E0116 09:06:37.537545 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" containerName="mount-bpf-fs"
Jan 16 09:06:37.537557 kubelet[2593]: E0116 09:06:37.537557 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" containerName="apply-sysctl-overwrites"
Jan 16 09:06:37.537733 kubelet[2593]: E0116 09:06:37.537578 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" containerName="clean-cilium-state"
Jan 16 09:06:37.537733 kubelet[2593]: E0116 09:06:37.537588 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" containerName="cilium-agent"
Jan 16 09:06:37.537733 kubelet[2593]: E0116 09:06:37.537595 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0305cd0-5902-4546-ae0e-abe114d1d23e" containerName="cilium-operator"
Jan 16 09:06:37.537733 kubelet[2593]: I0116 09:06:37.537633 2593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3bab0bc-e94d-458f-a9a5-179d6a8b28d2" containerName="cilium-agent"
Jan 16 09:06:37.537733 kubelet[2593]: I0116 09:06:37.537644 2593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0305cd0-5902-4546-ae0e-abe114d1d23e" containerName="cilium-operator"
Jan 16 09:06:37.600053 sshd[4446]: Accepted publickey for core from 139.178.68.195 port 60494 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:06:37.604786 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:06:37.609967 kubelet[2593]: I0116 09:06:37.607259 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef13d274-a420-4194-88ac-a25b6b04cc34-cilium-cgroup\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.609967 kubelet[2593]: I0116 09:06:37.607313 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef13d274-a420-4194-88ac-a25b6b04cc34-etc-cni-netd\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.609967 kubelet[2593]: I0116 09:06:37.607343 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef13d274-a420-4194-88ac-a25b6b04cc34-cilium-run\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.609967 kubelet[2593]: I0116 09:06:37.607371 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef13d274-a420-4194-88ac-a25b6b04cc34-host-proc-sys-net\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.609967 kubelet[2593]: I0116 09:06:37.607391 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef13d274-a420-4194-88ac-a25b6b04cc34-cilium-config-path\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.609967 kubelet[2593]: I0116 09:06:37.607410 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ef13d274-a420-4194-88ac-a25b6b04cc34-cilium-ipsec-secrets\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.610271 kubelet[2593]: I0116 09:06:37.607432 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef13d274-a420-4194-88ac-a25b6b04cc34-bpf-maps\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.610271 kubelet[2593]: I0116 09:06:37.607447 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef13d274-a420-4194-88ac-a25b6b04cc34-hostproc\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.610271 kubelet[2593]: I0116 09:06:37.607473 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef13d274-a420-4194-88ac-a25b6b04cc34-clustermesh-secrets\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.610271 kubelet[2593]: I0116 09:06:37.607494 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef13d274-a420-4194-88ac-a25b6b04cc34-host-proc-sys-kernel\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.610271 kubelet[2593]: I0116 09:06:37.607509 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef13d274-a420-4194-88ac-a25b6b04cc34-cni-path\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.610271 kubelet[2593]: I0116 09:06:37.607526 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef13d274-a420-4194-88ac-a25b6b04cc34-hubble-tls\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.610408 kubelet[2593]: I0116 09:06:37.607549 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc7nk\" (UniqueName: \"kubernetes.io/projected/ef13d274-a420-4194-88ac-a25b6b04cc34-kube-api-access-hc7nk\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.610408 kubelet[2593]: I0116 09:06:37.607566 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef13d274-a420-4194-88ac-a25b6b04cc34-lib-modules\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.610408 kubelet[2593]: I0116 09:06:37.607586 2593 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef13d274-a420-4194-88ac-a25b6b04cc34-xtables-lock\") pod \"cilium-ckh84\" (UID: \"ef13d274-a420-4194-88ac-a25b6b04cc34\") " pod="kube-system/cilium-ckh84"
Jan 16 09:06:37.620689 systemd-logind[1448]: New session 32 of user core.
Jan 16 09:06:37.629288 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 16 09:06:37.630204 systemd[1]: Created slice kubepods-burstable-podef13d274_a420_4194_88ac_a25b6b04cc34.slice - libcontainer container kubepods-burstable-podef13d274_a420_4194_88ac_a25b6b04cc34.slice.
Jan 16 09:06:37.699976 sshd[4446]: pam_unix(sshd:session): session closed for user core
Jan 16 09:06:37.712822 systemd[1]: sshd@33-64.227.96.98:22-139.178.68.195:60494.service: Deactivated successfully.
Jan 16 09:06:37.729275 systemd[1]: session-32.scope: Deactivated successfully.
Jan 16 09:06:37.750268 systemd-logind[1448]: Session 32 logged out. Waiting for processes to exit.
Jan 16 09:06:37.761464 systemd[1]: Started sshd@34-64.227.96.98:22-139.178.68.195:60504.service - OpenSSH per-connection server daemon (139.178.68.195:60504).
Jan 16 09:06:37.780849 systemd-logind[1448]: Removed session 32.
Jan 16 09:06:37.839578 sshd[4457]: Accepted publickey for core from 139.178.68.195 port 60504 ssh2: RSA SHA256:fWXAJ6WCtHVKvQlmcI2C6JuFf3oBdxh55gZP5IlKwm0
Jan 16 09:06:37.842186 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 16 09:06:37.848145 systemd-logind[1448]: New session 33 of user core.
Jan 16 09:06:37.855387 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 16 09:06:37.951640 kubelet[2593]: E0116 09:06:37.951573 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:37.953125 containerd[1462]: time="2025-01-16T09:06:37.953081683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ckh84,Uid:ef13d274-a420-4194-88ac-a25b6b04cc34,Namespace:kube-system,Attempt:0,}"
Jan 16 09:06:37.996780 containerd[1462]: time="2025-01-16T09:06:37.994820286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 16 09:06:37.996780 containerd[1462]: time="2025-01-16T09:06:37.994899062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 16 09:06:37.996780 containerd[1462]: time="2025-01-16T09:06:37.994910552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:06:37.996780 containerd[1462]: time="2025-01-16T09:06:37.994999172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 16 09:06:38.025815 systemd[1]: Started cri-containerd-7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e.scope - libcontainer container 7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e.
Jan 16 09:06:38.063209 containerd[1462]: time="2025-01-16T09:06:38.063142611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ckh84,Uid:ef13d274-a420-4194-88ac-a25b6b04cc34,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e\""
Jan 16 09:06:38.064053 kubelet[2593]: E0116 09:06:38.063905 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:38.072171 containerd[1462]: time="2025-01-16T09:06:38.072120449Z" level=info msg="CreateContainer within sandbox \"7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 16 09:06:38.086313 containerd[1462]: time="2025-01-16T09:06:38.086255488Z" level=info msg="CreateContainer within sandbox \"7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2c11051e0b087a9e72c8f0588188ac2568f4c64e348ebb1fb8a11cf10736c7de\""
Jan 16 09:06:38.087856 containerd[1462]: time="2025-01-16T09:06:38.087328941Z" level=info msg="StartContainer for \"2c11051e0b087a9e72c8f0588188ac2568f4c64e348ebb1fb8a11cf10736c7de\""
Jan 16 09:06:38.124399 systemd[1]: Started cri-containerd-2c11051e0b087a9e72c8f0588188ac2568f4c64e348ebb1fb8a11cf10736c7de.scope - libcontainer container 2c11051e0b087a9e72c8f0588188ac2568f4c64e348ebb1fb8a11cf10736c7de.
Jan 16 09:06:38.168815 containerd[1462]: time="2025-01-16T09:06:38.168721615Z" level=info msg="StartContainer for \"2c11051e0b087a9e72c8f0588188ac2568f4c64e348ebb1fb8a11cf10736c7de\" returns successfully"
Jan 16 09:06:38.180526 systemd[1]: cri-containerd-2c11051e0b087a9e72c8f0588188ac2568f4c64e348ebb1fb8a11cf10736c7de.scope: Deactivated successfully.
Jan 16 09:06:38.233912 containerd[1462]: time="2025-01-16T09:06:38.233677410Z" level=info msg="shim disconnected" id=2c11051e0b087a9e72c8f0588188ac2568f4c64e348ebb1fb8a11cf10736c7de namespace=k8s.io
Jan 16 09:06:38.233912 containerd[1462]: time="2025-01-16T09:06:38.233738530Z" level=warning msg="cleaning up after shim disconnected" id=2c11051e0b087a9e72c8f0588188ac2568f4c64e348ebb1fb8a11cf10736c7de namespace=k8s.io
Jan 16 09:06:38.233912 containerd[1462]: time="2025-01-16T09:06:38.233748489Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:06:38.801182 kubelet[2593]: E0116 09:06:38.800945 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:38.807560 containerd[1462]: time="2025-01-16T09:06:38.807495171Z" level=info msg="CreateContainer within sandbox \"7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 16 09:06:38.838313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2938606540.mount: Deactivated successfully.
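Each init container then runs through the same CreateContainer-within-sandbox / StartContainer exchange seen here for mount-cgroup. An illustrative sketch of that pair (the image reference is an assumption; this log does not record it):

package crisandbox

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startInitContainer creates one container inside the sandbox and starts it,
// returning the ID that the "returns container id" entry reports.
func startInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig, name string) (string, error) {

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: name, Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.x"}, // assumed image
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return "", err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
	return created.ContainerId, err
}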
Jan 16 09:06:38.841755 containerd[1462]: time="2025-01-16T09:06:38.841697823Z" level=info msg="CreateContainer within sandbox \"7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"46cdf99cbb1701cccc533dafa60cd00f8aa575f00411ac5e76bcdd2dc25b5264\""
Jan 16 09:06:38.845114 containerd[1462]: time="2025-01-16T09:06:38.845072588Z" level=info msg="StartContainer for \"46cdf99cbb1701cccc533dafa60cd00f8aa575f00411ac5e76bcdd2dc25b5264\""
Jan 16 09:06:38.896331 systemd[1]: Started cri-containerd-46cdf99cbb1701cccc533dafa60cd00f8aa575f00411ac5e76bcdd2dc25b5264.scope - libcontainer container 46cdf99cbb1701cccc533dafa60cd00f8aa575f00411ac5e76bcdd2dc25b5264.
Jan 16 09:06:38.925641 kubelet[2593]: I0116 09:06:38.925562 2593 setters.go:580] "Node became not ready" node="ci-4081.3.0-a-a78886c5b6" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-16T09:06:38Z","lastTransitionTime":"2025-01-16T09:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 16 09:06:38.936193 containerd[1462]: time="2025-01-16T09:06:38.936098062Z" level=info msg="StartContainer for \"46cdf99cbb1701cccc533dafa60cd00f8aa575f00411ac5e76bcdd2dc25b5264\" returns successfully"
Jan 16 09:06:38.957984 systemd[1]: cri-containerd-46cdf99cbb1701cccc533dafa60cd00f8aa575f00411ac5e76bcdd2dc25b5264.scope: Deactivated successfully.
Jan 16 09:06:38.990784 containerd[1462]: time="2025-01-16T09:06:38.990630103Z" level=info msg="shim disconnected" id=46cdf99cbb1701cccc533dafa60cd00f8aa575f00411ac5e76bcdd2dc25b5264 namespace=k8s.io
Jan 16 09:06:38.990784 containerd[1462]: time="2025-01-16T09:06:38.990739143Z" level=warning msg="cleaning up after shim disconnected" id=46cdf99cbb1701cccc533dafa60cd00f8aa575f00411ac5e76bcdd2dc25b5264 namespace=k8s.io
Jan 16 09:06:38.990784 containerd[1462]: time="2025-01-16T09:06:38.990752687Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:06:39.721782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46cdf99cbb1701cccc533dafa60cd00f8aa575f00411ac5e76bcdd2dc25b5264-rootfs.mount: Deactivated successfully.
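The setters.go:580 entry is the kubelet publishing a NotReady condition for the node because the CNI plugin has not initialized yet (Cilium is still mid-bootstrap). The JSON it logs corresponds to this core/v1 structure (a sketch using upstream types, with the values copied from the log line):

package nodestatus

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// notReadyCondition rebuilds the condition serialized in the log above.
func notReadyCondition(now time.Time) v1.NodeCondition {
	t := metav1.NewTime(now)
	return v1.NodeCondition{
		Type:               v1.NodeReady,
		Status:             v1.ConditionFalse,
		LastHeartbeatTime:  t,
		LastTransitionTime: t,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"cni plugin not initialized",
	}
}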
Jan 16 09:06:39.803518 kubelet[2593]: E0116 09:06:39.803239 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:39.808408 containerd[1462]: time="2025-01-16T09:06:39.806342988Z" level=info msg="CreateContainer within sandbox \"7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 16 09:06:39.837978 containerd[1462]: time="2025-01-16T09:06:39.837775230Z" level=info msg="CreateContainer within sandbox \"7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8ae723356494bf0687c6c587c41394562f9ff66a2bfd5b8b18578256062deeed\""
Jan 16 09:06:39.840063 containerd[1462]: time="2025-01-16T09:06:39.838678138Z" level=info msg="StartContainer for \"8ae723356494bf0687c6c587c41394562f9ff66a2bfd5b8b18578256062deeed\""
Jan 16 09:06:39.900569 systemd[1]: Started cri-containerd-8ae723356494bf0687c6c587c41394562f9ff66a2bfd5b8b18578256062deeed.scope - libcontainer container 8ae723356494bf0687c6c587c41394562f9ff66a2bfd5b8b18578256062deeed.
Jan 16 09:06:39.941395 containerd[1462]: time="2025-01-16T09:06:39.941338585Z" level=info msg="StartContainer for \"8ae723356494bf0687c6c587c41394562f9ff66a2bfd5b8b18578256062deeed\" returns successfully"
Jan 16 09:06:39.949314 systemd[1]: cri-containerd-8ae723356494bf0687c6c587c41394562f9ff66a2bfd5b8b18578256062deeed.scope: Deactivated successfully.
Jan 16 09:06:39.988915 containerd[1462]: time="2025-01-16T09:06:39.988601874Z" level=info msg="shim disconnected" id=8ae723356494bf0687c6c587c41394562f9ff66a2bfd5b8b18578256062deeed namespace=k8s.io
Jan 16 09:06:39.988915 containerd[1462]: time="2025-01-16T09:06:39.988677329Z" level=warning msg="cleaning up after shim disconnected" id=8ae723356494bf0687c6c587c41394562f9ff66a2bfd5b8b18578256062deeed namespace=k8s.io
Jan 16 09:06:39.988915 containerd[1462]: time="2025-01-16T09:06:39.988690419Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:06:40.719720 systemd[1]: run-containerd-runc-k8s.io-8ae723356494bf0687c6c587c41394562f9ff66a2bfd5b8b18578256062deeed-runc.gsOjFr.mount: Deactivated successfully.
Jan 16 09:06:40.719871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ae723356494bf0687c6c587c41394562f9ff66a2bfd5b8b18578256062deeed-rootfs.mount: Deactivated successfully.
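The mount-bpf-fs step that just completed exists to make /sys/fs/bpf a bpf filesystem so the agent can pin its maps there. Roughly, and under the assumption that a plain mount(2) is all this host needs:

package bpffs

import (
	"errors"

	"golang.org/x/sys/unix"
)

// mountBPFFS mounts the BPF filesystem, treating "already mounted" as success.
// Equivalent shell: mount -t bpf bpffs /sys/fs/bpf
func mountBPFFS() error {
	err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
	if errors.Is(err, unix.EBUSY) {
		return nil // something is already mounted there
	}
	return err
}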
Jan 16 09:06:40.810311 kubelet[2593]: E0116 09:06:40.808435 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:40.813055 containerd[1462]: time="2025-01-16T09:06:40.811997193Z" level=info msg="CreateContainer within sandbox \"7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 16 09:06:40.845433 containerd[1462]: time="2025-01-16T09:06:40.845357389Z" level=info msg="CreateContainer within sandbox \"7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"22cdec0f5be74b4b337ef226f08a693c814556b6aac362aab2f789e9b84e1dda\""
Jan 16 09:06:40.847683 containerd[1462]: time="2025-01-16T09:06:40.846270655Z" level=info msg="StartContainer for \"22cdec0f5be74b4b337ef226f08a693c814556b6aac362aab2f789e9b84e1dda\""
Jan 16 09:06:40.893050 systemd[1]: Started cri-containerd-22cdec0f5be74b4b337ef226f08a693c814556b6aac362aab2f789e9b84e1dda.scope - libcontainer container 22cdec0f5be74b4b337ef226f08a693c814556b6aac362aab2f789e9b84e1dda.
Jan 16 09:06:40.926743 systemd[1]: cri-containerd-22cdec0f5be74b4b337ef226f08a693c814556b6aac362aab2f789e9b84e1dda.scope: Deactivated successfully.
Jan 16 09:06:40.932533 containerd[1462]: time="2025-01-16T09:06:40.932422614Z" level=info msg="StartContainer for \"22cdec0f5be74b4b337ef226f08a693c814556b6aac362aab2f789e9b84e1dda\" returns successfully"
Jan 16 09:06:40.963581 containerd[1462]: time="2025-01-16T09:06:40.963494056Z" level=info msg="shim disconnected" id=22cdec0f5be74b4b337ef226f08a693c814556b6aac362aab2f789e9b84e1dda namespace=k8s.io
Jan 16 09:06:40.963581 containerd[1462]: time="2025-01-16T09:06:40.963573013Z" level=warning msg="cleaning up after shim disconnected" id=22cdec0f5be74b4b337ef226f08a693c814556b6aac362aab2f789e9b84e1dda namespace=k8s.io
Jan 16 09:06:40.963581 containerd[1462]: time="2025-01-16T09:06:40.963587582Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 16 09:06:41.241460 kubelet[2593]: E0116 09:06:41.241366 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:41.400525 kubelet[2593]: E0116 09:06:41.400352 2593 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 16 09:06:41.720115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22cdec0f5be74b4b337ef226f08a693c814556b6aac362aab2f789e9b84e1dda-rootfs.mount: Deactivated successfully.
Jan 16 09:06:41.817668 kubelet[2593]: E0116 09:06:41.817363 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:41.827051 containerd[1462]: time="2025-01-16T09:06:41.824143089Z" level=info msg="CreateContainer within sandbox \"7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 16 09:06:41.853767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327893650.mount: Deactivated successfully.
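Between mount-bpf-fs and cilium-agent, the clean-cilium-state container removes leftover state from the agent instance that was torn down earlier in this log. A loosely illustrative sketch only, not Cilium's actual init logic; both paths are assumptions for the example:

package cleanstate

import "os"

// cleanCiliumState removes stale runtime state so the incoming
// cilium-agent starts from a clean slate.
func cleanCiliumState() error {
	for _, p := range []string{
		"/var/run/cilium/state",  // assumed agent state directory
		"/sys/fs/bpf/tc/globals", // assumed pinned-map directory
	} {
		if err := os.RemoveAll(p); err != nil {
			return err
		}
	}
	return nil
}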
Jan 16 09:06:41.858391 containerd[1462]: time="2025-01-16T09:06:41.858319790Z" level=info msg="CreateContainer within sandbox \"7c0ff8e1e71b995ed631c0a6a187f56986224e7e8fad691992687d91a0f5c21e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"74f09cb9defa04889e4aba04ab5a547836a39893c20ffaae79df6c44089a684d\""
Jan 16 09:06:41.859639 containerd[1462]: time="2025-01-16T09:06:41.859297032Z" level=info msg="StartContainer for \"74f09cb9defa04889e4aba04ab5a547836a39893c20ffaae79df6c44089a684d\""
Jan 16 09:06:41.906331 systemd[1]: Started cri-containerd-74f09cb9defa04889e4aba04ab5a547836a39893c20ffaae79df6c44089a684d.scope - libcontainer container 74f09cb9defa04889e4aba04ab5a547836a39893c20ffaae79df6c44089a684d.
Jan 16 09:06:41.955292 containerd[1462]: time="2025-01-16T09:06:41.955011153Z" level=info msg="StartContainer for \"74f09cb9defa04889e4aba04ab5a547836a39893c20ffaae79df6c44089a684d\" returns successfully"
Jan 16 09:06:42.457079 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 16 09:06:42.826800 kubelet[2593]: E0116 09:06:42.826717 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:43.954144 kubelet[2593]: E0116 09:06:43.953985 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:46.073828 systemd-networkd[1367]: lxc_health: Link UP
Jan 16 09:06:46.083076 systemd-networkd[1367]: lxc_health: Gained carrier
Jan 16 09:06:46.828779 systemd[1]: run-containerd-runc-k8s.io-74f09cb9defa04889e4aba04ab5a547836a39893c20ffaae79df6c44089a684d-runc.NpUgsS.mount: Deactivated successfully.
Jan 16 09:06:47.642448 systemd-networkd[1367]: lxc_health: Gained IPv6LL
Jan 16 09:06:47.954888 kubelet[2593]: E0116 09:06:47.953920 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:47.997461 kubelet[2593]: I0116 09:06:47.997372 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ckh84" podStartSLOduration=10.997346081 podStartE2EDuration="10.997346081s" podCreationTimestamp="2025-01-16 09:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-16 09:06:42.84685986 +0000 UTC m=+136.763474221" watchObservedRunningTime="2025-01-16 09:06:47.997346081 +0000 UTC m=+141.913960446"
Jan 16 09:06:48.847618 kubelet[2593]: E0116 09:06:48.847344 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:49.850582 kubelet[2593]: E0116 09:06:49.849912 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 16 09:06:53.794711 systemd[1]: run-containerd-runc-k8s.io-74f09cb9defa04889e4aba04ab5a547836a39893c20ffaae79df6c44089a684d-runc.ZeaPFZ.mount: Deactivated successfully.
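The pod_startup_latency_tracker entry closes the loop on the rebuild: the reported podStartSLOduration=10.997346081 equals watchObservedRunningTime (09:06:47.997346081) minus podCreationTimestamp (09:06:37), with no image-pull window to subtract since both pull timestamps are the zero-time sentinel. The arithmetic, as a runnable check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the log entry above.
	created, _ := time.Parse(time.RFC3339, "2025-01-16T09:06:37Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-16T09:06:47.997346081Z")
	fmt.Println(running.Sub(created).Seconds()) // 10.997346081
}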
Jan 16 09:06:53.946010 sshd[4457]: pam_unix(sshd:session): session closed for user core
Jan 16 09:06:53.956490 systemd[1]: sshd@34-64.227.96.98:22-139.178.68.195:60504.service: Deactivated successfully.
Jan 16 09:06:53.958912 systemd[1]: session-33.scope: Deactivated successfully.
Jan 16 09:06:53.960414 systemd-logind[1448]: Session 33 logged out. Waiting for processes to exit.
Jan 16 09:06:53.961704 systemd-logind[1448]: Removed session 33.