Jan 30 13:58:29.074438 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:58:29.074485 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:58:29.074508 kernel: BIOS-provided physical RAM map: Jan 30 13:58:29.074521 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 13:58:29.074533 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 13:58:29.074546 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 13:58:29.074563 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Jan 30 13:58:29.074577 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Jan 30 13:58:29.074606 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 13:58:29.074625 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 13:58:29.074639 kernel: NX (Execute Disable) protection: active Jan 30 13:58:29.074652 kernel: APIC: Static calls initialized Jan 30 13:58:29.074669 kernel: SMBIOS 2.8 present. Jan 30 13:58:29.074684 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 30 13:58:29.074701 kernel: Hypervisor detected: KVM Jan 30 13:58:29.074745 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:58:29.074765 kernel: kvm-clock: using sched offset of 3832383166 cycles Jan 30 13:58:29.074782 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:58:29.074798 kernel: tsc: Detected 2294.608 MHz processor Jan 30 13:58:29.074820 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:58:29.074836 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:58:29.074851 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Jan 30 13:58:29.074867 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 13:58:29.074883 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:58:29.074902 kernel: ACPI: Early table checksum verification disabled Jan 30 13:58:29.074918 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Jan 30 13:58:29.074934 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:58:29.074948 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:58:29.074962 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:58:29.074978 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 30 13:58:29.074993 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:58:29.075009 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:58:29.075024 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:58:29.075045 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:58:29.075060 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 30 13:58:29.075076 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 30 13:58:29.075092 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 30 13:58:29.075107 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 30 13:58:29.075122 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 30 13:58:29.075138 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 30 13:58:29.075163 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 30 13:58:29.075180 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:58:29.075197 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:58:29.075214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 30 13:58:29.075231 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 30 13:58:29.075251 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Jan 30 13:58:29.075268 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Jan 30 13:58:29.075289 kernel: Zone ranges: Jan 30 13:58:29.075307 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:58:29.075321 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Jan 30 13:58:29.075338 kernel: Normal empty Jan 30 13:58:29.075354 kernel: Movable zone start for each node Jan 30 13:58:29.075386 kernel: Early memory node ranges Jan 30 13:58:29.075402 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 13:58:29.075419 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Jan 30 13:58:29.075436 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Jan 30 13:58:29.075457 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:58:29.075474 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 13:58:29.075494 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Jan 30 13:58:29.075511 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 13:58:29.075528 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:58:29.075544 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:58:29.075561 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:58:29.075577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:58:29.075593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:58:29.075614 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:58:29.075631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:58:29.075647 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:58:29.075664 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:58:29.075681 kernel: TSC deadline timer available Jan 30 13:58:29.075698 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:58:29.075765 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 13:58:29.075783 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 30 13:58:29.075804 kernel: Booting paravirtualized kernel on KVM Jan 30 13:58:29.075826 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:58:29.075843 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:58:29.075860 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 30 13:58:29.075876 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:58:29.075892 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:58:29.075908 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 30 13:58:29.075927 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:58:29.075945 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:58:29.075965 kernel: random: crng init done Jan 30 13:58:29.075981 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:58:29.075998 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:58:29.076015 kernel: Fallback order for Node 0: 0 Jan 30 13:58:29.076032 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Jan 30 13:58:29.076048 kernel: Policy zone: DMA32 Jan 30 13:58:29.076065 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:58:29.076082 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 125152K reserved, 0K cma-reserved) Jan 30 13:58:29.076098 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:58:29.076120 kernel: Kernel/User page tables isolation: enabled Jan 30 13:58:29.076136 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:58:29.076153 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:58:29.076169 kernel: Dynamic Preempt: voluntary Jan 30 13:58:29.076186 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:58:29.076204 kernel: rcu: RCU event tracing is enabled. Jan 30 13:58:29.076221 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:58:29.076237 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:58:29.076255 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:58:29.076275 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:58:29.076289 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:58:29.076302 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:58:29.076317 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 30 13:58:29.076333 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 30 13:58:29.076353 kernel: Console: colour VGA+ 80x25 Jan 30 13:58:29.076384 kernel: printk: console [tty0] enabled Jan 30 13:58:29.076401 kernel: printk: console [ttyS0] enabled Jan 30 13:58:29.076417 kernel: ACPI: Core revision 20230628 Jan 30 13:58:29.076440 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 13:58:29.076456 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:58:29.076472 kernel: x2apic enabled Jan 30 13:58:29.076489 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:58:29.076505 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 13:58:29.076522 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 30 13:58:29.076539 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Jan 30 13:58:29.076555 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 30 13:58:29.076573 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 30 13:58:29.076609 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:58:29.076627 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:58:29.076644 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:58:29.076667 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:58:29.076685 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 30 13:58:29.076702 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:58:29.076734 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:58:29.076751 kernel: MDS: Mitigation: Clear CPU buffers Jan 30 13:58:29.076770 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:58:29.076795 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:58:29.076813 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:58:29.076830 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:58:29.076847 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:58:29.076865 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 30 13:58:29.076883 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:58:29.076901 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:58:29.076919 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:58:29.076941 kernel: landlock: Up and running. Jan 30 13:58:29.076959 kernel: SELinux: Initializing. Jan 30 13:58:29.076977 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:58:29.076994 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:58:29.077012 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 30 13:58:29.077029 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:58:29.077047 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:58:29.077064 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:58:29.077086 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 30 13:58:29.077104 kernel: signal: max sigframe size: 1776 Jan 30 13:58:29.077121 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:58:29.077139 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:58:29.077156 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:58:29.077209 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:58:29.077228 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:58:29.077245 kernel: .... node #0, CPUs: #1 Jan 30 13:58:29.077262 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:58:29.077290 kernel: smpboot: Max logical packages: 1 Jan 30 13:58:29.077307 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Jan 30 13:58:29.077325 kernel: devtmpfs: initialized Jan 30 13:58:29.077343 kernel: x86/mm: Memory block size: 128MB Jan 30 13:58:29.077360 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:58:29.077378 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:58:29.077396 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:58:29.077414 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:58:29.077432 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:58:29.077455 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:58:29.077473 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:58:29.077491 kernel: audit: type=2000 audit(1738245507.313:1): state=initialized audit_enabled=0 res=1 Jan 30 13:58:29.077508 kernel: cpuidle: using governor menu Jan 30 13:58:29.077525 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:58:29.077544 kernel: dca service started, version 1.12.1 Jan 30 13:58:29.077560 kernel: PCI: Using configuration type 1 for base access Jan 30 13:58:29.077578 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:58:29.077595 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:58:29.077622 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:58:29.077640 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:58:29.077656 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:58:29.077676 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:58:29.077693 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:58:29.080766 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:58:29.080803 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:58:29.080819 kernel: ACPI: Interpreter enabled Jan 30 13:58:29.080834 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:58:29.080849 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:58:29.080874 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:58:29.080892 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:58:29.080917 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 30 13:58:29.080943 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:58:29.081245 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:58:29.081406 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 30 13:58:29.081552 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 30 13:58:29.081584 kernel: acpiphp: Slot [3] registered Jan 30 13:58:29.081605 kernel: acpiphp: Slot [4] registered Jan 30 13:58:29.081636 kernel: acpiphp: Slot [5] registered Jan 30 13:58:29.081651 kernel: acpiphp: Slot [6] registered Jan 30 13:58:29.081666 kernel: acpiphp: Slot [7] registered Jan 30 13:58:29.081679 kernel: acpiphp: Slot [8] registered Jan 30 13:58:29.081691 kernel: acpiphp: Slot [9] registered Jan 30 13:58:29.083506 kernel: acpiphp: Slot [10] registered Jan 30 13:58:29.083583 kernel: acpiphp: Slot [11] registered Jan 30 13:58:29.083614 kernel: acpiphp: Slot [12] registered Jan 30 13:58:29.083635 kernel: acpiphp: Slot [13] registered Jan 30 13:58:29.083656 kernel: acpiphp: Slot [14] registered Jan 30 13:58:29.083676 kernel: acpiphp: Slot [15] registered Jan 30 13:58:29.083697 kernel: acpiphp: Slot [16] registered Jan 30 13:58:29.083740 kernel: acpiphp: Slot [17] registered Jan 30 13:58:29.083769 kernel: acpiphp: Slot [18] registered Jan 30 13:58:29.083790 kernel: acpiphp: Slot [19] registered Jan 30 13:58:29.083811 kernel: acpiphp: Slot [20] registered Jan 30 13:58:29.083836 kernel: acpiphp: Slot [21] registered Jan 30 13:58:29.083857 kernel: acpiphp: Slot [22] registered Jan 30 13:58:29.083878 kernel: acpiphp: Slot [23] registered Jan 30 13:58:29.083899 kernel: acpiphp: Slot [24] registered Jan 30 13:58:29.083919 kernel: acpiphp: Slot [25] registered Jan 30 13:58:29.083940 kernel: acpiphp: Slot [26] registered Jan 30 13:58:29.083961 kernel: acpiphp: Slot [27] registered Jan 30 13:58:29.083981 kernel: acpiphp: Slot [28] registered Jan 30 13:58:29.084002 kernel: acpiphp: Slot [29] registered Jan 30 13:58:29.084023 kernel: acpiphp: Slot [30] registered Jan 30 13:58:29.084047 kernel: acpiphp: Slot [31] registered Jan 30 13:58:29.084068 kernel: PCI host bridge to bus 0000:00 Jan 30 13:58:29.084276 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:58:29.084450 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 30 13:58:29.084583 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:58:29.086814 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 30 13:58:29.087019 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 30 13:58:29.087162 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:58:29.087386 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 30 13:58:29.087557 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 30 13:58:29.087772 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 30 13:58:29.087955 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 30 13:58:29.088106 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 13:58:29.088318 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 13:58:29.088523 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 13:58:29.088682 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 13:58:29.090946 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 30 13:58:29.091057 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 30 13:58:29.091216 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 30 13:58:29.091366 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 30 13:58:29.091522 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 30 13:58:29.091688 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 30 13:58:29.091893 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 30 13:58:29.092060 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 30 13:58:29.092244 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 30 13:58:29.092445 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 30 13:58:29.092643 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:58:29.094059 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:58:29.094256 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 30 13:58:29.094418 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 30 13:58:29.094567 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 30 13:58:29.095886 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:58:29.096063 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 30 13:58:29.096251 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 30 13:58:29.096433 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 30 13:58:29.096659 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 30 13:58:29.098947 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 30 13:58:29.099136 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 30 13:58:29.099340 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 30 13:58:29.099520 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 30 13:58:29.099683 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 13:58:29.099878 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 30 13:58:29.100056 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 30 13:58:29.100184 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 30 13:58:29.100330 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 30 13:58:29.100492 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 30 13:58:29.100664 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 30 13:58:29.103014 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 30 13:58:29.103231 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 30 13:58:29.103399 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 30 13:58:29.103419 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:58:29.103436 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:58:29.103454 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:58:29.103478 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:58:29.103505 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 30 13:58:29.103526 kernel: iommu: Default domain type: Translated Jan 30 13:58:29.103547 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:58:29.103593 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:58:29.103611 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:58:29.103629 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 13:58:29.103653 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Jan 30 13:58:29.103885 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 30 13:58:29.104056 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 30 13:58:29.104233 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:58:29.105824 kernel: vgaarb: loaded Jan 30 13:58:29.105853 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 13:58:29.105870 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 13:58:29.105889 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:58:29.105906 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:58:29.105921 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:58:29.105938 kernel: pnp: PnP ACPI init Jan 30 13:58:29.105956 kernel: pnp: PnP ACPI: found 4 devices Jan 30 13:58:29.105983 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:58:29.106001 kernel: NET: Registered PF_INET protocol family Jan 30 13:58:29.106018 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:58:29.106036 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 13:58:29.106054 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:58:29.106071 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:58:29.106090 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:58:29.106108 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 13:58:29.106129 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:58:29.106147 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:58:29.106164 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:58:29.106182 kernel: NET: Registered PF_XDP protocol family Jan 30 13:58:29.106416 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:58:29.106561 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 
13:58:29.107942 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:58:29.108106 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 30 13:58:29.108248 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 30 13:58:29.108440 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 30 13:58:29.108606 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 13:58:29.108626 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 30 13:58:29.110912 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 35171 usecs Jan 30 13:58:29.110954 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:58:29.110976 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:58:29.110998 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Jan 30 13:58:29.111019 kernel: Initialise system trusted keyrings Jan 30 13:58:29.111051 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 13:58:29.111072 kernel: Key type asymmetric registered Jan 30 13:58:29.111093 kernel: Asymmetric key parser 'x509' registered Jan 30 13:58:29.111114 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:58:29.111134 kernel: io scheduler mq-deadline registered Jan 30 13:58:29.111155 kernel: io scheduler kyber registered Jan 30 13:58:29.111176 kernel: io scheduler bfq registered Jan 30 13:58:29.111197 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:58:29.111220 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 30 13:58:29.111252 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 30 13:58:29.111266 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 30 13:58:29.111281 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:58:29.111301 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:58:29.111316 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:58:29.111330 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:58:29.111343 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:58:29.111595 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 13:58:29.111630 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:58:29.111829 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 13:58:29.111975 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T13:58:28 UTC (1738245508) Jan 30 13:58:29.112119 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 30 13:58:29.112145 kernel: intel_pstate: CPU model not supported Jan 30 13:58:29.112167 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:58:29.112188 kernel: Segment Routing with IPv6 Jan 30 13:58:29.112209 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:58:29.112230 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:58:29.112259 kernel: Key type dns_resolver registered Jan 30 13:58:29.112280 kernel: IPI shorthand broadcast: enabled Jan 30 13:58:29.112301 kernel: sched_clock: Marking stable (2170009682, 187841172)->(2419149296, -61298442) Jan 30 13:58:29.112323 kernel: registered taskstats version 1 Jan 30 13:58:29.112344 kernel: Loading compiled-in X.509 certificates Jan 30 13:58:29.112364 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:58:29.112385 kernel: Key type .fscrypt registered 
Jan 30 13:58:29.112406 kernel: Key type fscrypt-provisioning registered Jan 30 13:58:29.112427 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:58:29.112451 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:58:29.112472 kernel: ima: No architecture policies found Jan 30 13:58:29.112492 kernel: clk: Disabling unused clocks Jan 30 13:58:29.112513 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:58:29.112535 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:58:29.112582 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:58:29.112608 kernel: Run /init as init process Jan 30 13:58:29.112630 kernel: with arguments: Jan 30 13:58:29.112653 kernel: /init Jan 30 13:58:29.112681 kernel: with environment: Jan 30 13:58:29.112697 kernel: HOME=/ Jan 30 13:58:29.112775 kernel: TERM=linux Jan 30 13:58:29.112797 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:58:29.112825 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:58:29.112844 systemd[1]: Detected virtualization kvm. Jan 30 13:58:29.112859 systemd[1]: Detected architecture x86-64. Jan 30 13:58:29.112878 systemd[1]: Running in initrd. Jan 30 13:58:29.112893 systemd[1]: No hostname configured, using default hostname. Jan 30 13:58:29.112908 systemd[1]: Hostname set to . Jan 30 13:58:29.112926 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:58:29.112939 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:58:29.112954 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:58:29.112970 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:58:29.112986 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:58:29.113005 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:58:29.113019 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:58:29.113034 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:58:29.113052 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:58:29.113068 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:58:29.113083 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:58:29.113098 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:58:29.113117 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:58:29.113131 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:58:29.113145 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:58:29.113167 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:58:29.113182 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:58:29.113196 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 30 13:58:29.113218 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:58:29.113235 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:58:29.113252 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:58:29.113266 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:58:29.113280 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:58:29.113296 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:58:29.113313 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:58:29.113331 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:58:29.113354 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:58:29.113368 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:58:29.113383 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:58:29.113400 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:58:29.113415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:58:29.113432 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:58:29.113448 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:58:29.113463 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:58:29.113536 systemd-journald[182]: Collecting audit messages is disabled. Jan 30 13:58:29.113588 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:58:29.113604 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:58:29.113623 systemd-journald[182]: Journal started Jan 30 13:58:29.113659 systemd-journald[182]: Runtime Journal (/run/log/journal/22d0a38f101b451a9142bde567dacceb) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:58:29.080473 systemd-modules-load[183]: Inserted module 'overlay' Jan 30 13:58:29.167773 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:58:29.167826 kernel: Bridge firewalling registered Jan 30 13:58:29.167858 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:58:29.141413 systemd-modules-load[183]: Inserted module 'br_netfilter' Jan 30 13:58:29.169019 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:58:29.177748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:58:29.203116 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:58:29.205316 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:58:29.208979 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:58:29.222006 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:58:29.241127 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:58:29.247303 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:58:29.248539 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 30 13:58:29.260068 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:58:29.262797 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:58:29.266971 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:58:29.297654 dracut-cmdline[221]: dracut-dracut-053 Jan 30 13:58:29.305138 systemd-resolved[219]: Positive Trust Anchors: Jan 30 13:58:29.305158 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:58:29.308454 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:58:29.305209 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:58:29.311008 systemd-resolved[219]: Defaulting to hostname 'linux'. Jan 30 13:58:29.314432 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:58:29.316328 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:58:29.434761 kernel: SCSI subsystem initialized Jan 30 13:58:29.447752 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:58:29.461752 kernel: iscsi: registered transport (tcp) Jan 30 13:58:29.488750 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:58:29.488843 kernel: QLogic iSCSI HBA Driver Jan 30 13:58:29.557683 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:58:29.575075 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:58:29.609818 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:58:29.609908 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:58:29.614133 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:58:29.665805 kernel: raid6: avx2x4 gen() 17233 MB/s Jan 30 13:58:29.688552 kernel: raid6: avx2x2 gen() 17118 MB/s Jan 30 13:58:29.702506 kernel: raid6: avx2x1 gen() 12005 MB/s Jan 30 13:58:29.702674 kernel: raid6: using algorithm avx2x4 gen() 17233 MB/s Jan 30 13:58:29.721644 kernel: raid6: .... xor() 5569 MB/s, rmw enabled Jan 30 13:58:29.721780 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:58:29.751757 kernel: xor: automatically using best checksumming function avx Jan 30 13:58:29.954780 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:58:29.973314 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:58:29.982022 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 30 13:58:30.012920 systemd-udevd[403]: Using default interface naming scheme 'v255'. Jan 30 13:58:30.022144 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:58:30.032994 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:58:30.068424 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jan 30 13:58:30.130114 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:58:30.137068 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:58:30.228847 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:58:30.238513 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:58:30.277469 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:58:30.279759 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:58:30.282282 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:58:30.284935 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:58:30.293567 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:58:30.330475 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:58:30.351298 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 30 13:58:30.458046 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 30 13:58:30.458288 kernel: scsi host0: Virtio SCSI HBA Jan 30 13:58:30.458495 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:58:30.458523 kernel: GPT:9289727 != 125829119 Jan 30 13:58:30.458565 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:58:30.458591 kernel: GPT:9289727 != 125829119 Jan 30 13:58:30.458616 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:58:30.458642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:58:30.458669 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:58:30.458695 kernel: ACPI: bus type USB registered Jan 30 13:58:30.458743 kernel: usbcore: registered new interface driver usbfs Jan 30 13:58:30.458801 kernel: usbcore: registered new interface driver hub Jan 30 13:58:30.458828 kernel: usbcore: registered new device driver usb Jan 30 13:58:30.458854 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:58:30.458880 kernel: AES CTR mode by8 optimization enabled Jan 30 13:58:30.448344 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:58:30.448571 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:58:30.463166 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 30 13:58:30.487496 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Jan 30 13:58:30.488924 kernel: libata version 3.00 loaded. Jan 30 13:58:30.450211 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:58:30.452533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:58:30.452745 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:58:30.453371 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 13:58:30.500759 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 30 13:58:30.519036 kernel: scsi host1: ata_piix Jan 30 13:58:30.519269 kernel: scsi host2: ata_piix Jan 30 13:58:30.519437 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 30 13:58:30.519465 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 30 13:58:30.462231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:58:30.610686 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Jan 30 13:58:30.616745 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 30 13:58:30.631237 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 30 13:58:30.631454 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 30 13:58:30.631645 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 30 13:58:30.632065 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (456) Jan 30 13:58:30.632100 kernel: hub 1-0:1.0: USB hub found Jan 30 13:58:30.632349 kernel: hub 1-0:1.0: 2 ports detected Jan 30 13:58:30.619572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:58:30.640916 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:58:30.649660 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:58:30.657887 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:58:30.664454 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:58:30.665513 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:58:30.674042 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:58:30.679972 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:58:30.703926 disk-uuid[534]: Primary Header is updated. Jan 30 13:58:30.703926 disk-uuid[534]: Secondary Entries is updated. Jan 30 13:58:30.703926 disk-uuid[534]: Secondary Header is updated. Jan 30 13:58:30.710992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:58:30.719840 kernel: GPT:disk_guids don't match. Jan 30 13:58:30.719926 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:58:30.719959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:58:30.729746 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:58:31.731752 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:58:31.732854 disk-uuid[541]: The operation has completed successfully. Jan 30 13:58:31.789878 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:58:31.791192 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:58:31.809986 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:58:31.817536 sh[562]: Success Jan 30 13:58:31.835756 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:58:31.924222 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:58:31.931939 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 30 13:58:31.934088 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:58:31.966747 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:58:31.966821 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:58:31.968918 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:58:31.971715 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:58:31.971808 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:58:31.988301 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:58:31.989764 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:58:31.997018 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:58:32.001014 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:58:32.020684 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:58:32.020780 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:58:32.020801 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:58:32.027750 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:58:32.047415 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:58:32.046970 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:58:32.066644 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:58:32.073999 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:58:32.180911 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:58:32.193062 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:58:32.241454 ignition[660]: Ignition 2.19.0 Jan 30 13:58:32.241783 systemd-networkd[746]: lo: Link UP Jan 30 13:58:32.241470 ignition[660]: Stage: fetch-offline Jan 30 13:58:32.241790 systemd-networkd[746]: lo: Gained carrier Jan 30 13:58:32.241528 ignition[660]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:58:32.245464 systemd-networkd[746]: Enumeration completed Jan 30 13:58:32.241545 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:58:32.245812 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:58:32.241685 ignition[660]: parsed url from cmdline: "" Jan 30 13:58:32.246594 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 13:58:32.241690 ignition[660]: no config URL provided Jan 30 13:58:32.246601 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 30 13:58:32.241697 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:58:32.248129 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:58:32.241722 ignition[660]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:58:32.249271 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 13:58:32.241729 ignition[660]: failed to fetch config: resource requires networking Jan 30 13:58:32.249280 systemd-networkd[746]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:58:32.241996 ignition[660]: Ignition finished successfully Jan 30 13:58:32.250971 systemd[1]: Reached target network.target - Network. Jan 30 13:58:32.251095 systemd-networkd[746]: eth0: Link UP Jan 30 13:58:32.251100 systemd-networkd[746]: eth0: Gained carrier Jan 30 13:58:32.251114 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 30 13:58:32.257124 systemd-networkd[746]: eth1: Link UP Jan 30 13:58:32.257130 systemd-networkd[746]: eth1: Gained carrier Jan 30 13:58:32.257149 systemd-networkd[746]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:58:32.260217 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 13:58:32.271377 systemd-networkd[746]: eth0: DHCPv4 address 143.198.62.166/20, gateway 143.198.48.1 acquired from 169.254.169.253 Jan 30 13:58:32.275884 systemd-networkd[746]: eth1: DHCPv4 address 10.124.0.9/20 acquired from 169.254.169.253 Jan 30 13:58:32.292464 ignition[757]: Ignition 2.19.0 Jan 30 13:58:32.293447 ignition[757]: Stage: fetch Jan 30 13:58:32.294369 ignition[757]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:58:32.295278 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:58:32.296274 ignition[757]: parsed url from cmdline: "" Jan 30 13:58:32.296280 ignition[757]: no config URL provided Jan 30 13:58:32.296292 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:58:32.296311 ignition[757]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:58:32.296341 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 30 13:58:32.327523 ignition[757]: GET result: OK Jan 30 13:58:32.327698 ignition[757]: parsing config with SHA512: 6386de8df505d281150402e2cbe0c543132b66258f4602a73a3f610e4257de6856da8b85f1087910d6013ee0933ce9cb1cca7a1639187361d069abfcc207bc2f Jan 30 13:58:32.337480 unknown[757]: fetched base config from "system" Jan 30 13:58:32.337522 unknown[757]: fetched base config from "system" Jan 30 13:58:32.337532 unknown[757]: fetched user config from "digitalocean" Jan 30 13:58:32.338498 ignition[757]: fetch: fetch complete Jan 30 13:58:32.338507 ignition[757]: fetch: fetch passed Jan 30 13:58:32.338599 ignition[757]: Ignition finished successfully Jan 30 13:58:32.341497 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:58:32.349005 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:58:32.377489 ignition[764]: Ignition 2.19.0 Jan 30 13:58:32.377504 ignition[764]: Stage: kargs Jan 30 13:58:32.377920 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:58:32.377938 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:58:32.383039 ignition[764]: kargs: kargs passed Jan 30 13:58:32.383857 ignition[764]: Ignition finished successfully Jan 30 13:58:32.386274 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:58:32.396035 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 30 13:58:32.416318 ignition[770]: Ignition 2.19.0 Jan 30 13:58:32.416335 ignition[770]: Stage: disks Jan 30 13:58:32.416695 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:58:32.416731 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:58:32.420452 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:58:32.418341 ignition[770]: disks: disks passed Jan 30 13:58:32.427472 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:58:32.418437 ignition[770]: Ignition finished successfully Jan 30 13:58:32.428236 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:58:32.429803 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:58:32.431024 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:58:32.432132 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:58:32.443076 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:58:32.469897 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:58:32.477024 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:58:32.483901 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:58:32.611760 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:58:32.613123 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:58:32.615545 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:58:32.633923 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:58:32.638405 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:58:32.642035 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Jan 30 13:58:32.652040 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:58:32.679171 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787) Jan 30 13:58:32.679216 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:58:32.679240 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:58:32.679263 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:58:32.679285 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:58:32.658329 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:58:32.658400 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:58:32.686314 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:58:32.692404 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:58:32.705343 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 30 13:58:32.790107 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:58:32.794738 coreos-metadata[789]: Jan 30 13:58:32.793 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:58:32.807493 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:58:32.808612 coreos-metadata[789]: Jan 30 13:58:32.808 INFO Fetch successful Jan 30 13:58:32.810102 coreos-metadata[790]: Jan 30 13:58:32.809 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:58:32.817669 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Jan 30 13:58:32.818982 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Jan 30 13:58:32.821520 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:58:32.825854 coreos-metadata[790]: Jan 30 13:58:32.825 INFO Fetch successful Jan 30 13:58:32.827910 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:58:32.833406 coreos-metadata[790]: Jan 30 13:58:32.833 INFO wrote hostname ci-4081.3.0-f-9c719b1623 to /sysroot/etc/hostname Jan 30 13:58:32.835105 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:58:32.973969 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:58:32.979934 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:58:32.984010 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:58:32.999974 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:58:33.004045 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:58:33.043110 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:58:33.053526 ignition[909]: INFO : Ignition 2.19.0 Jan 30 13:58:33.053526 ignition[909]: INFO : Stage: mount Jan 30 13:58:33.055373 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:58:33.055373 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:58:33.057246 ignition[909]: INFO : mount: mount passed Jan 30 13:58:33.057246 ignition[909]: INFO : Ignition finished successfully Jan 30 13:58:33.056924 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:58:33.063871 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:58:33.097064 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:58:33.116761 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (920) Jan 30 13:58:33.122045 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:58:33.122137 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:58:33.122165 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:58:33.132749 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:58:33.138830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
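The flatcar-metadata-hostname agent logged above fetches http://169.254.169.254/metadata/v1.json and writes the droplet hostname (ci-4081.3.0-f-9c719b1623) into /sysroot/etc/hostname. A rough Python sketch of that flow follows; it assumes the metadata JSON exposes a top-level hostname field (implied by the "wrote hostname" message but not shown in the log) and writes to /etc/hostname rather than the /sysroot-prefixed path used inside the initramfs.

```python
import json
import urllib.request

# Droplet metadata document fetched by the hostname agent in the log above.
METADATA_URL = "http://169.254.169.254/metadata/v1.json"

def fetch_metadata(url: str = METADATA_URL, timeout: float = 5.0) -> dict:
    """Return the droplet metadata document as a Python dict."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

def write_hostname(path: str = "/etc/hostname") -> None:
    meta = fetch_metadata()
    hostname = meta["hostname"]  # assumed key name; see lead-in note
    with open(path, "w") as fh:
        fh.write(hostname + "\n")

if __name__ == "__main__":
    write_hostname()
```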
Jan 30 13:58:33.173947 ignition[937]: INFO : Ignition 2.19.0 Jan 30 13:58:33.173947 ignition[937]: INFO : Stage: files Jan 30 13:58:33.175731 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:58:33.175731 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:58:33.175731 ignition[937]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:58:33.178554 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:58:33.178554 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:58:33.195128 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:58:33.196803 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:58:33.198023 unknown[937]: wrote ssh authorized keys file for user: core Jan 30 13:58:33.199192 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:58:33.204411 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:58:33.205867 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:58:33.245692 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:58:33.291849 systemd-networkd[746]: eth1: Gained IPv6LL Jan 30 13:58:33.388670 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:58:33.388670 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:58:33.388670 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 13:58:33.895555 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:58:33.974898 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:58:33.974898 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:58:33.978091 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:58:33.978091 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:58:33.978091 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:58:33.978091 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:58:33.978091 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:58:33.978091 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:58:33.978091 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing 
file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:58:33.978091 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:58:33.978091 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:58:33.978091 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:58:34.006819 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:58:34.006819 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:58:34.006819 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:58:33.995157 systemd-networkd[746]: eth0: Gained IPv6LL Jan 30 13:58:34.404440 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:58:34.684536 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:58:34.684536 ignition[937]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:58:34.687436 ignition[937]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:58:34.687436 ignition[937]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:58:34.687436 ignition[937]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:58:34.687436 ignition[937]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:58:34.692817 ignition[937]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:58:34.692817 ignition[937]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:58:34.692817 ignition[937]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:58:34.692817 ignition[937]: INFO : files: files passed Jan 30 13:58:34.692817 ignition[937]: INFO : Ignition finished successfully Jan 30 13:58:34.689942 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:58:34.698156 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:58:34.702010 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:58:34.710833 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:58:34.711811 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 30 13:58:34.724857 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:58:34.724857 initrd-setup-root-after-ignition[966]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:58:34.728784 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:58:34.731382 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:58:34.733087 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:58:34.745161 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:58:34.797434 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:58:34.797692 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:58:34.799993 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:58:34.800816 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:58:34.802261 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:58:34.816058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:58:34.835199 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:58:34.845120 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:58:34.864740 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:58:34.865978 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:58:34.867379 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:58:34.868785 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:58:34.869002 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:58:34.870828 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:58:34.872529 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:58:34.873642 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:58:34.874933 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:58:34.876167 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:58:34.877555 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:58:34.878967 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:58:34.880511 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:58:34.881903 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:58:34.883267 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:58:34.884384 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:58:34.884579 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:58:34.886054 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:58:34.886973 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:58:34.888539 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:58:34.888741 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 30 13:58:34.890093 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:58:34.890298 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:58:34.892163 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:58:34.892463 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:58:34.893927 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:58:34.894192 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:58:34.895161 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:58:34.895408 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:58:34.906848 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:58:34.907607 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:58:34.907935 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:58:34.912101 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:58:34.914076 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:58:34.914459 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:58:34.917045 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:58:34.917306 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:58:34.927299 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:58:34.927493 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:58:34.953628 ignition[990]: INFO : Ignition 2.19.0 Jan 30 13:58:34.953628 ignition[990]: INFO : Stage: umount Jan 30 13:58:34.956139 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:58:34.957029 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 30 13:58:34.959773 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:58:34.961137 ignition[990]: INFO : umount: umount passed Jan 30 13:58:34.963462 ignition[990]: INFO : Ignition finished successfully Jan 30 13:58:34.964290 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:58:34.964476 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:58:34.977351 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:58:34.977508 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:58:34.978294 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:58:34.978413 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:58:34.979582 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:58:34.979661 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:58:34.980893 systemd[1]: Stopped target network.target - Network. Jan 30 13:58:34.982338 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:58:34.982467 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:58:35.002958 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:58:35.004084 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:58:35.004392 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 30 13:58:35.005409 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:58:35.006775 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:58:35.008130 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:58:35.008199 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:58:35.033142 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:58:35.033241 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:58:35.037489 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:58:35.037581 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:58:35.040221 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:58:35.040307 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:58:35.041214 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:58:35.042741 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:58:35.044837 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:58:35.044977 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:58:35.045825 systemd-networkd[746]: eth0: DHCPv6 lease lost Jan 30 13:58:35.049145 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:58:35.049234 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:58:35.050646 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:58:35.050883 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:58:35.051858 systemd-networkd[746]: eth1: DHCPv6 lease lost Jan 30 13:58:35.055538 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:58:35.055697 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:58:35.058648 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:58:35.058764 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:58:35.066910 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:58:35.067657 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:58:35.067783 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:58:35.071869 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:58:35.071938 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:58:35.072477 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:58:35.072543 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:58:35.073210 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:58:35.073274 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:58:35.076096 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:58:35.095236 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:58:35.095455 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:58:35.099670 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:58:35.099868 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:58:35.102501 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 30 13:58:35.102629 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:58:35.104536 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:58:35.104608 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:58:35.106120 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:58:35.106224 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:58:35.108592 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:58:35.108695 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:58:35.109972 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:58:35.110062 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:58:35.125125 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:58:35.127171 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:58:35.127292 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:58:35.128068 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:58:35.128133 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:58:35.136059 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:58:35.136260 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:58:35.139320 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:58:35.146041 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:58:35.174268 systemd[1]: Switching root. Jan 30 13:58:35.248855 systemd-journald[182]: Journal stopped Jan 30 13:58:36.820190 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). Jan 30 13:58:36.820313 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:58:36.820350 kernel: SELinux: policy capability open_perms=1 Jan 30 13:58:36.820382 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:58:36.820434 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:58:36.820462 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:58:36.820506 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:58:36.820540 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:58:36.820581 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:58:36.820613 kernel: audit: type=1403 audit(1738245515.438:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:58:36.820644 systemd[1]: Successfully loaded SELinux policy in 56.759ms. Jan 30 13:58:36.820682 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.594ms. Jan 30 13:58:36.823773 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:58:36.823857 systemd[1]: Detected virtualization kvm. Jan 30 13:58:36.823882 systemd[1]: Detected architecture x86-64. Jan 30 13:58:36.823901 systemd[1]: Detected first boot. Jan 30 13:58:36.823922 systemd[1]: Hostname set to . Jan 30 13:58:36.823941 systemd[1]: Initializing machine ID from VM UUID. 
Jan 30 13:58:36.823961 zram_generator::config[1032]: No configuration found. Jan 30 13:58:36.823982 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:58:36.824014 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:58:36.824034 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:58:36.824052 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:58:36.824073 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:58:36.824095 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:58:36.824115 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:58:36.824133 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:58:36.824155 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:58:36.824176 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:58:36.824210 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:58:36.824231 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:58:36.824253 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:58:36.824277 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:58:36.824299 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:58:36.824321 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:58:36.824345 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:58:36.824369 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:58:36.824399 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:58:36.824430 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:58:36.824453 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:58:36.824476 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:58:36.824496 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:58:36.824517 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:58:36.824548 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:58:36.824579 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:58:36.824599 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:58:36.824622 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:58:36.824642 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:58:36.824662 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:58:36.824681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:58:36.824699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:58:36.824760 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:58:36.824780 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 30 13:58:36.824813 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:58:36.824835 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:58:36.824856 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:58:36.824880 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:58:36.824902 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:58:36.824921 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:58:36.824942 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:58:36.824964 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:58:36.824983 systemd[1]: Reached target machines.target - Containers. Jan 30 13:58:36.825017 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:58:36.825038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:58:36.825058 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:58:36.825080 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:58:36.825103 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:58:36.825122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:58:36.825143 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:58:36.825163 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:58:36.825195 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:58:36.825218 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:58:36.825237 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:58:36.825256 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:58:36.825275 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:58:36.825295 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:58:36.825316 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:58:36.825335 kernel: fuse: init (API version 7.39) Jan 30 13:58:36.825361 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:58:36.825407 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:58:36.825430 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:58:36.825454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:58:36.825475 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:58:36.825496 systemd[1]: Stopped verity-setup.service. Jan 30 13:58:36.825519 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:58:36.825543 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 30 13:58:36.825563 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:58:36.825585 kernel: ACPI: bus type drm_connector registered Jan 30 13:58:36.825618 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:58:36.825641 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:58:36.825676 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:58:36.826885 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:58:36.826938 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:58:36.826954 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:58:36.826968 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:58:36.826984 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:58:36.827003 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:58:36.827018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:58:36.827037 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:58:36.827052 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:58:36.827066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:58:36.827080 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:58:36.827094 kernel: loop: module loaded Jan 30 13:58:36.827109 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:58:36.827124 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:58:36.827138 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:58:36.827151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:58:36.827171 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:58:36.827192 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:58:36.827213 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:58:36.827230 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:58:36.827296 systemd-journald[1108]: Collecting audit messages is disabled. Jan 30 13:58:36.827360 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:58:36.827389 systemd-journald[1108]: Journal started Jan 30 13:58:36.827425 systemd-journald[1108]: Runtime Journal (/run/log/journal/22d0a38f101b451a9142bde567dacceb) is 4.9M, max 39.3M, 34.4M free. Jan 30 13:58:36.353302 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:58:36.380064 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:58:36.380668 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:58:36.840751 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:58:36.846742 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:58:36.846860 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:58:36.851752 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 30 13:58:36.865780 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:58:36.883744 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:58:36.889739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:58:36.900746 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:58:36.906762 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:58:36.914799 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:58:36.919745 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:58:36.943286 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:58:36.954754 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:58:36.971747 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:58:36.987001 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:58:36.988530 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:58:36.990049 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:58:36.990984 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:58:36.992185 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:58:36.997818 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:58:37.036999 kernel: loop0: detected capacity change from 0 to 210664 Jan 30 13:58:37.041768 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:58:37.057068 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:58:37.071019 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:58:37.082082 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:58:37.105739 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:58:37.106832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:58:37.135055 systemd-journald[1108]: Time spent on flushing to /var/log/journal/22d0a38f101b451a9142bde567dacceb is 136.211ms for 1000 entries. Jan 30 13:58:37.135055 systemd-journald[1108]: System Journal (/var/log/journal/22d0a38f101b451a9142bde567dacceb) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:58:37.308777 systemd-journald[1108]: Received client request to flush runtime journal. Jan 30 13:58:37.308915 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 13:58:37.309081 kernel: loop2: detected capacity change from 0 to 8 Jan 30 13:58:37.309114 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 13:58:37.140895 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:58:37.153105 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:58:37.178402 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Jan 30 13:58:37.199937 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:58:37.201829 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:58:37.317682 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:58:37.350742 kernel: loop4: detected capacity change from 0 to 210664 Jan 30 13:58:37.362534 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 30 13:58:37.363453 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 30 13:58:37.372307 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:58:37.388746 kernel: loop5: detected capacity change from 0 to 140768 Jan 30 13:58:37.438751 kernel: loop6: detected capacity change from 0 to 8 Jan 30 13:58:37.443731 kernel: loop7: detected capacity change from 0 to 142488 Jan 30 13:58:37.483423 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 30 13:58:37.485105 (sd-merge)[1176]: Merged extensions into '/usr'. Jan 30 13:58:37.501587 systemd[1]: Reloading requested from client PID 1134 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:58:37.501609 systemd[1]: Reloading... Jan 30 13:58:37.704749 zram_generator::config[1200]: No configuration found. Jan 30 13:58:37.850150 ldconfig[1130]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:58:38.015748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:58:38.120296 systemd[1]: Reloading finished in 615 ms. Jan 30 13:58:38.154445 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:58:38.160383 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:58:38.172141 systemd[1]: Starting ensure-sysext.service... Jan 30 13:58:38.181069 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:58:38.196926 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:58:38.196943 systemd[1]: Reloading... Jan 30 13:58:38.263320 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:58:38.267030 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:58:38.271070 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:58:38.272506 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jan 30 13:58:38.274012 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jan 30 13:58:38.285498 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:58:38.285519 systemd-tmpfiles[1248]: Skipping /boot Jan 30 13:58:38.339755 zram_generator::config[1275]: No configuration found. Jan 30 13:58:38.342197 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 30 13:58:38.343947 systemd-tmpfiles[1248]: Skipping /boot Jan 30 13:58:38.618321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:58:38.689389 systemd[1]: Reloading finished in 491 ms. Jan 30 13:58:38.712525 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:58:38.713855 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:58:38.738978 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:58:38.752999 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:58:38.755524 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:58:38.767114 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:58:38.775176 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:58:38.785106 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:58:38.793664 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:58:38.793887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:58:38.799076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:58:38.803107 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:58:38.812225 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:58:38.814000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:58:38.814156 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:58:38.825192 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:58:38.829242 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:58:38.829449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:58:38.829619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:58:38.830766 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:58:38.836176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:58:38.836448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:58:38.844228 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:58:38.846030 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 30 13:58:38.846389 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:58:38.849797 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:58:38.859453 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:58:38.861815 systemd[1]: Finished ensure-sysext.service. Jan 30 13:58:38.882989 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:58:38.884381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:58:38.885612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:58:38.887330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:58:38.907015 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:58:38.909097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:58:38.909858 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:58:38.912161 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:58:38.912933 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:58:38.919572 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:58:38.930245 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:58:38.931858 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:58:38.937045 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:58:38.937494 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:58:38.942799 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:58:38.968861 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Jan 30 13:58:38.976611 augenrules[1357]: No rules Jan 30 13:58:38.980120 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:58:38.983414 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:58:39.025650 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:58:39.038022 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:58:39.129898 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:58:39.131002 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:58:39.161190 systemd-resolved[1325]: Positive Trust Anchors: Jan 30 13:58:39.161215 systemd-resolved[1325]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:58:39.161270 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:58:39.169864 systemd-resolved[1325]: Using system hostname 'ci-4081.3.0-f-9c719b1623'. Jan 30 13:58:39.178284 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:58:39.178978 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:58:39.215883 systemd-networkd[1370]: lo: Link UP Jan 30 13:58:39.215895 systemd-networkd[1370]: lo: Gained carrier Jan 30 13:58:39.217043 systemd-networkd[1370]: Enumeration completed Jan 30 13:58:39.217181 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:58:39.218909 systemd[1]: Reached target network.target - Network. Jan 30 13:58:39.225614 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:58:39.247934 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:58:39.298883 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 30 13:58:39.302753 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1377) Jan 30 13:58:39.301863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:58:39.302166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:58:39.307992 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:58:39.324088 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:58:39.333384 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:58:39.336390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:58:39.336448 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:58:39.336466 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:58:39.341757 kernel: ISO 9660 Extensions: RRIP_1991A Jan 30 13:58:39.344504 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 30 13:58:39.353299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:58:39.353492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:58:39.356176 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:58:39.359423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 30 13:58:39.359606 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:58:39.361159 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:58:39.361689 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:58:39.382024 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:58:39.405933 systemd-networkd[1370]: eth0: Configuring with /run/systemd/network/10-86:13:8b:80:d3:1b.network. Jan 30 13:58:39.406872 systemd-networkd[1370]: eth0: Link UP Jan 30 13:58:39.406881 systemd-networkd[1370]: eth0: Gained carrier Jan 30 13:58:39.414367 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 30 13:58:39.433624 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:58:39.444697 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:58:39.458946 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:58:39.463884 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 30 13:58:39.477323 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:58:39.465137 systemd-networkd[1370]: eth1: Configuring with /run/systemd/network/10-9a:dc:c2:e3:2a:3b.network. Jan 30 13:58:39.468344 systemd-networkd[1370]: eth1: Link UP Jan 30 13:58:39.468353 systemd-networkd[1370]: eth1: Gained carrier Jan 30 13:58:39.469042 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 30 13:58:39.473056 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 30 13:58:39.473452 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 30 13:58:39.489818 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:58:39.543753 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:58:39.554920 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 30 13:58:39.555035 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 30 13:58:39.558734 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:58:39.558822 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 13:58:39.558850 kernel: [drm] features: -context_init Jan 30 13:58:39.561369 kernel: [drm] number of scanouts: 1 Jan 30 13:58:39.561511 kernel: [drm] number of cap sets: 0 Jan 30 13:58:39.567743 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 30 13:58:39.576732 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 30 13:58:39.576819 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:58:39.586580 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 13:58:39.647558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:58:39.650827 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:58:39.667949 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:58:39.668479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:58:39.727316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 13:58:39.745209 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:58:39.745586 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:58:39.766200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:58:39.788727 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:58:39.817274 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:58:39.833065 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:58:39.855879 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:58:39.895360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:58:39.897727 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:58:39.899784 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:58:39.899951 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:58:39.900229 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:58:39.900414 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:58:39.901027 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:58:39.903426 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:58:39.903605 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:58:39.903739 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:58:39.903790 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:58:39.903883 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:58:39.905631 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:58:39.911347 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:58:39.923314 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:58:39.936076 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:58:39.939440 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:58:39.942360 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:58:39.944040 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:58:39.945317 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:58:39.945784 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:58:39.945834 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:58:39.953988 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:58:39.961915 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:58:39.972216 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:58:39.985936 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:58:39.998011 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 30 13:58:39.998834 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:58:40.007014 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:58:40.019889 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:58:40.026004 coreos-metadata[1433]: Jan 30 13:58:40.025 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:58:40.029286 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:58:40.037107 jq[1437]: false Jan 30 13:58:40.050280 dbus-daemon[1434]: [system] SELinux support is enabled Jan 30 13:58:40.059326 coreos-metadata[1433]: Jan 30 13:58:40.041 INFO Fetch successful Jan 30 13:58:40.038114 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:58:40.054136 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:58:40.057552 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:58:40.061173 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:58:40.070616 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:58:40.081943 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:58:40.086317 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:58:40.092413 jq[1451]: true Jan 30 13:58:40.098826 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:58:40.116089 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:58:40.116363 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:58:40.116927 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:58:40.117175 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:58:40.131381 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:58:40.132871 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:58:40.158332 update_engine[1445]: I20250130 13:58:40.157842 1445 main.cc:92] Flatcar Update Engine starting Jan 30 13:58:40.162025 update_engine[1445]: I20250130 13:58:40.161960 1445 update_check_scheduler.cc:74] Next update check in 5m29s Jan 30 13:58:40.180279 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:58:40.182541 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:58:40.182602 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:58:40.186527 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:58:40.186673 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). 
Jan 30 13:58:40.186720 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:58:40.198030 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:58:40.222483 systemd-logind[1444]: New seat seat0. Jan 30 13:58:40.228357 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:58:40.229858 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:58:40.239743 jq[1456]: true Jan 30 13:58:40.249561 extend-filesystems[1438]: Found loop4 Jan 30 13:58:40.249561 extend-filesystems[1438]: Found loop5 Jan 30 13:58:40.249561 extend-filesystems[1438]: Found loop6 Jan 30 13:58:40.249561 extend-filesystems[1438]: Found loop7 Jan 30 13:58:40.249561 extend-filesystems[1438]: Found vda Jan 30 13:58:40.249561 extend-filesystems[1438]: Found vda1 Jan 30 13:58:40.249561 extend-filesystems[1438]: Found vda2 Jan 30 13:58:40.249561 extend-filesystems[1438]: Found vda3 Jan 30 13:58:40.249561 extend-filesystems[1438]: Found usr Jan 30 13:58:40.249561 extend-filesystems[1438]: Found vda4 Jan 30 13:58:40.249561 extend-filesystems[1438]: Found vda6 Jan 30 13:58:40.249561 extend-filesystems[1438]: Found vda7 Jan 30 13:58:40.249561 extend-filesystems[1438]: Found vda9 Jan 30 13:58:40.249561 extend-filesystems[1438]: Checking size of /dev/vda9 Jan 30 13:58:40.243915 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:58:40.311424 tar[1455]: linux-amd64/helm Jan 30 13:58:40.250231 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:58:40.267168 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:58:40.272876 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:58:40.321998 extend-filesystems[1438]: Resized partition /dev/vda9 Jan 30 13:58:40.337452 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:58:40.346825 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 30 13:58:40.392761 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1381) Jan 30 13:58:40.432994 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:58:40.524211 systemd-networkd[1370]: eth1: Gained IPv6LL Jan 30 13:58:40.525399 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 30 13:58:40.538137 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:58:40.540760 locksmithd[1468]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:58:40.542969 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:58:40.557177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:58:40.568228 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:58:40.581321 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:58:40.591649 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:58:40.612628 systemd[1]: Starting sshkeys.service... 
Jan 30 13:58:40.678781 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 13:58:40.715848 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:58:40.734013 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:58:40.753492 extend-filesystems[1481]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:58:40.753492 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 13:58:40.753492 extend-filesystems[1481]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 13:58:40.775954 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Jan 30 13:58:40.775954 extend-filesystems[1438]: Found vdb Jan 30 13:58:40.754430 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:58:40.755545 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:58:40.828192 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:58:40.844949 systemd-networkd[1370]: eth0: Gained IPv6LL Jan 30 13:58:40.845430 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 30 13:58:40.919566 coreos-metadata[1516]: Jan 30 13:58:40.919 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 30 13:58:40.938879 coreos-metadata[1516]: Jan 30 13:58:40.937 INFO Fetch successful Jan 30 13:58:40.955796 unknown[1516]: wrote ssh authorized keys file for user: core Jan 30 13:58:41.030400 update-ssh-keys[1526]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:58:41.031666 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:58:41.042351 systemd[1]: Finished sshkeys.service. Jan 30 13:58:41.106080 containerd[1471]: time="2025-01-30T13:58:41.105934502Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:58:41.151451 sshd_keygen[1472]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:58:41.227839 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:58:41.241310 containerd[1471]: time="2025-01-30T13:58:41.240278780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:58:41.245148 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:58:41.253812 containerd[1471]: time="2025-01-30T13:58:41.252335820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:58:41.253812 containerd[1471]: time="2025-01-30T13:58:41.252395113Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:58:41.253812 containerd[1471]: time="2025-01-30T13:58:41.252421007Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:58:41.253812 containerd[1471]: time="2025-01-30T13:58:41.252742513Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:58:41.253812 containerd[1471]: time="2025-01-30T13:58:41.252772842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:58:41.253812 containerd[1471]: time="2025-01-30T13:58:41.252841461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:58:41.253812 containerd[1471]: time="2025-01-30T13:58:41.252857561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:58:41.253812 containerd[1471]: time="2025-01-30T13:58:41.253113802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:58:41.253812 containerd[1471]: time="2025-01-30T13:58:41.253132419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:58:41.253812 containerd[1471]: time="2025-01-30T13:58:41.253147278Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:58:41.253812 containerd[1471]: time="2025-01-30T13:58:41.253159491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:58:41.254437 containerd[1471]: time="2025-01-30T13:58:41.253263062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:58:41.254437 containerd[1471]: time="2025-01-30T13:58:41.253589262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:58:41.254229 systemd[1]: Started sshd@0-143.198.62.166:22-147.75.109.163:52326.service - OpenSSH per-connection server daemon (147.75.109.163:52326). Jan 30 13:58:41.261579 containerd[1471]: time="2025-01-30T13:58:41.257886621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:58:41.261579 containerd[1471]: time="2025-01-30T13:58:41.257934079Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:58:41.261579 containerd[1471]: time="2025-01-30T13:58:41.258095778Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:58:41.261579 containerd[1471]: time="2025-01-30T13:58:41.258253883Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:58:41.281403 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:58:41.281807 containerd[1471]: time="2025-01-30T13:58:41.281765600Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:58:41.281876 containerd[1471]: time="2025-01-30T13:58:41.281838769Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:58:41.281876 containerd[1471]: time="2025-01-30T13:58:41.281862174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 30 13:58:41.281962 containerd[1471]: time="2025-01-30T13:58:41.281887672Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:58:41.281962 containerd[1471]: time="2025-01-30T13:58:41.281909687Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:58:41.282223 containerd[1471]: time="2025-01-30T13:58:41.282083290Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:58:41.282461 containerd[1471]: time="2025-01-30T13:58:41.282434612Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:58:41.282605 containerd[1471]: time="2025-01-30T13:58:41.282582467Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:58:41.282653 containerd[1471]: time="2025-01-30T13:58:41.282606062Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:58:41.282653 containerd[1471]: time="2025-01-30T13:58:41.282625868Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:58:41.282653 containerd[1471]: time="2025-01-30T13:58:41.282646716Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:58:41.282838 containerd[1471]: time="2025-01-30T13:58:41.282665587Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:58:41.282838 containerd[1471]: time="2025-01-30T13:58:41.282684364Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:58:41.283552 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:58:41.283852 containerd[1471]: time="2025-01-30T13:58:41.283808515Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:58:41.283999 containerd[1471]: time="2025-01-30T13:58:41.283965752Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:58:41.284111 containerd[1471]: time="2025-01-30T13:58:41.284092878Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:58:41.284230 containerd[1471]: time="2025-01-30T13:58:41.284213352Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:58:41.284318 containerd[1471]: time="2025-01-30T13:58:41.284300523Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284427699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284461895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284481240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284516200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284536060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284555024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284572500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284590924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284636232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284663890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284686599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284731131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284751258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284798278Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:58:41.285934 containerd[1471]: time="2025-01-30T13:58:41.284833626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.287881 containerd[1471]: time="2025-01-30T13:58:41.284851314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.287881 containerd[1471]: time="2025-01-30T13:58:41.284866942Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:58:41.287881 containerd[1471]: time="2025-01-30T13:58:41.284932735Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:58:41.287881 containerd[1471]: time="2025-01-30T13:58:41.284958851Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:58:41.287881 containerd[1471]: time="2025-01-30T13:58:41.284976653Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:58:41.287881 containerd[1471]: time="2025-01-30T13:58:41.284994410Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:58:41.287881 containerd[1471]: time="2025-01-30T13:58:41.285011502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 30 13:58:41.287881 containerd[1471]: time="2025-01-30T13:58:41.285031571Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:58:41.287881 containerd[1471]: time="2025-01-30T13:58:41.285052408Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:58:41.287881 containerd[1471]: time="2025-01-30T13:58:41.285066771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:58:41.297793 containerd[1471]: time="2025-01-30T13:58:41.294354306Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:58:41.298247 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 30 13:58:41.302071 containerd[1471]: time="2025-01-30T13:58:41.298247377Z" level=info msg="Connect containerd service" Jan 30 13:58:41.302071 containerd[1471]: time="2025-01-30T13:58:41.298352033Z" level=info msg="using legacy CRI server" Jan 30 13:58:41.302071 containerd[1471]: time="2025-01-30T13:58:41.298367082Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:58:41.302071 containerd[1471]: time="2025-01-30T13:58:41.298565148Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:58:41.308137 containerd[1471]: time="2025-01-30T13:58:41.307811810Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:58:41.308137 containerd[1471]: time="2025-01-30T13:58:41.308003988Z" level=info msg="Start subscribing containerd event" Jan 30 13:58:41.308358 containerd[1471]: time="2025-01-30T13:58:41.308324888Z" level=info msg="Start recovering state" Jan 30 13:58:41.309438 containerd[1471]: time="2025-01-30T13:58:41.309405411Z" level=info msg="Start event monitor" Jan 30 13:58:41.309599 containerd[1471]: time="2025-01-30T13:58:41.309568233Z" level=info msg="Start snapshots syncer" Jan 30 13:58:41.309663 containerd[1471]: time="2025-01-30T13:58:41.309609112Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:58:41.309663 containerd[1471]: time="2025-01-30T13:58:41.309620658Z" level=info msg="Start streaming server" Jan 30 13:58:41.312622 containerd[1471]: time="2025-01-30T13:58:41.312531054Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:58:41.312935 containerd[1471]: time="2025-01-30T13:58:41.312896626Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:58:41.313110 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:58:41.318745 containerd[1471]: time="2025-01-30T13:58:41.316905812Z" level=info msg="containerd successfully booted in 0.216682s" Jan 30 13:58:41.367631 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:58:41.384900 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:58:41.393461 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:58:41.395594 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:58:41.475996 sshd[1540]: Accepted publickey for core from 147.75.109.163 port 52326 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:41.481956 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:41.504867 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:58:41.518274 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:58:41.529667 systemd-logind[1444]: New session 1 of user core. Jan 30 13:58:41.568828 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:58:41.580332 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 30 13:58:41.598652 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:58:41.698303 tar[1455]: linux-amd64/LICENSE Jan 30 13:58:41.699399 tar[1455]: linux-amd64/README.md Jan 30 13:58:41.735042 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:58:41.772034 systemd[1554]: Queued start job for default target default.target. Jan 30 13:58:41.777554 systemd[1554]: Created slice app.slice - User Application Slice. Jan 30 13:58:41.777780 systemd[1554]: Reached target paths.target - Paths. Jan 30 13:58:41.777907 systemd[1554]: Reached target timers.target - Timers. Jan 30 13:58:41.779592 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:58:41.813128 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:58:41.813348 systemd[1554]: Reached target sockets.target - Sockets. Jan 30 13:58:41.813378 systemd[1554]: Reached target basic.target - Basic System. Jan 30 13:58:41.813443 systemd[1554]: Reached target default.target - Main User Target. Jan 30 13:58:41.813484 systemd[1554]: Startup finished in 201ms. Jan 30 13:58:41.813893 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:58:41.825077 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:58:41.907899 systemd[1]: Started sshd@1-143.198.62.166:22-147.75.109.163:52338.service - OpenSSH per-connection server daemon (147.75.109.163:52338). Jan 30 13:58:42.012406 sshd[1568]: Accepted publickey for core from 147.75.109.163 port 52338 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:42.013382 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:42.023795 systemd-logind[1444]: New session 2 of user core. Jan 30 13:58:42.032103 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:58:42.107850 sshd[1568]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:42.121173 systemd[1]: sshd@1-143.198.62.166:22-147.75.109.163:52338.service: Deactivated successfully. Jan 30 13:58:42.123651 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:58:42.130016 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:58:42.138350 systemd[1]: Started sshd@2-143.198.62.166:22-147.75.109.163:52342.service - OpenSSH per-connection server daemon (147.75.109.163:52342). Jan 30 13:58:42.145296 systemd-logind[1444]: Removed session 2. Jan 30 13:58:42.205571 sshd[1575]: Accepted publickey for core from 147.75.109.163 port 52342 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:42.208763 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:42.219266 systemd-logind[1444]: New session 3 of user core. Jan 30 13:58:42.222902 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:58:42.289789 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:42.294659 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:58:42.297845 systemd[1]: sshd@2-143.198.62.166:22-147.75.109.163:52342.service: Deactivated successfully. Jan 30 13:58:42.301181 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:58:42.304804 systemd-logind[1444]: Removed session 3. Jan 30 13:58:42.496121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:58:42.496407 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:58:42.499035 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:58:42.502764 systemd[1]: Startup finished in 2.323s (kernel) + 6.697s (initrd) + 7.119s (userspace) = 16.140s. Jan 30 13:58:43.452636 kubelet[1586]: E0130 13:58:43.452556 1586 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:58:43.455338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:58:43.455571 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:58:43.456677 systemd[1]: kubelet.service: Consumed 1.390s CPU time. Jan 30 13:58:52.304324 systemd[1]: Started sshd@3-143.198.62.166:22-147.75.109.163:34042.service - OpenSSH per-connection server daemon (147.75.109.163:34042). Jan 30 13:58:52.361546 sshd[1599]: Accepted publickey for core from 147.75.109.163 port 34042 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:52.364343 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:52.371323 systemd-logind[1444]: New session 4 of user core. Jan 30 13:58:52.382211 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:58:52.450423 sshd[1599]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:52.466690 systemd[1]: sshd@3-143.198.62.166:22-147.75.109.163:34042.service: Deactivated successfully. Jan 30 13:58:52.469822 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:58:52.472045 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:58:52.479280 systemd[1]: Started sshd@4-143.198.62.166:22-147.75.109.163:34054.service - OpenSSH per-connection server daemon (147.75.109.163:34054). Jan 30 13:58:52.482333 systemd-logind[1444]: Removed session 4. Jan 30 13:58:52.534249 sshd[1606]: Accepted publickey for core from 147.75.109.163 port 34054 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:52.536607 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:52.546718 systemd-logind[1444]: New session 5 of user core. Jan 30 13:58:52.556087 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:58:52.616490 sshd[1606]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:52.630540 systemd[1]: sshd@4-143.198.62.166:22-147.75.109.163:34054.service: Deactivated successfully. Jan 30 13:58:52.634470 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:58:52.635813 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:58:52.644280 systemd[1]: Started sshd@5-143.198.62.166:22-147.75.109.163:34058.service - OpenSSH per-connection server daemon (147.75.109.163:34058). Jan 30 13:58:52.646494 systemd-logind[1444]: Removed session 5. 
Jan 30 13:58:52.701061 sshd[1613]: Accepted publickey for core from 147.75.109.163 port 34058 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:52.703741 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:52.713358 systemd-logind[1444]: New session 6 of user core. Jan 30 13:58:52.720083 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:58:52.788663 sshd[1613]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:52.799638 systemd[1]: sshd@5-143.198.62.166:22-147.75.109.163:34058.service: Deactivated successfully. Jan 30 13:58:52.802410 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:58:52.805012 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:58:52.810312 systemd[1]: Started sshd@6-143.198.62.166:22-147.75.109.163:34068.service - OpenSSH per-connection server daemon (147.75.109.163:34068). Jan 30 13:58:52.813943 systemd-logind[1444]: Removed session 6. Jan 30 13:58:52.864020 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 34068 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:52.866613 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:52.874050 systemd-logind[1444]: New session 7 of user core. Jan 30 13:58:52.884041 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:58:52.962509 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:58:52.963153 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:58:52.978256 sudo[1623]: pam_unix(sudo:session): session closed for user root Jan 30 13:58:52.982948 sshd[1620]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:52.997567 systemd[1]: sshd@6-143.198.62.166:22-147.75.109.163:34068.service: Deactivated successfully. Jan 30 13:58:53.000553 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:58:53.003973 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:58:53.012247 systemd[1]: Started sshd@7-143.198.62.166:22-147.75.109.163:34080.service - OpenSSH per-connection server daemon (147.75.109.163:34080). Jan 30 13:58:53.015157 systemd-logind[1444]: Removed session 7. Jan 30 13:58:53.057651 sshd[1628]: Accepted publickey for core from 147.75.109.163 port 34080 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:53.060118 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:53.069723 systemd-logind[1444]: New session 8 of user core. Jan 30 13:58:53.074124 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:58:53.140592 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:58:53.141080 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:58:53.146641 sudo[1632]: pam_unix(sudo:session): session closed for user root Jan 30 13:58:53.155515 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:58:53.156003 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:58:53.179261 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 30 13:58:53.182251 auditctl[1635]: No rules Jan 30 13:58:53.184433 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:58:53.184784 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:58:53.194943 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:58:53.234433 augenrules[1653]: No rules Jan 30 13:58:53.235457 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:58:53.237919 sudo[1631]: pam_unix(sudo:session): session closed for user root Jan 30 13:58:53.242740 sshd[1628]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:53.254324 systemd[1]: sshd@7-143.198.62.166:22-147.75.109.163:34080.service: Deactivated successfully. Jan 30 13:58:53.257002 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:58:53.259958 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:58:53.265254 systemd[1]: Started sshd@8-143.198.62.166:22-147.75.109.163:34086.service - OpenSSH per-connection server daemon (147.75.109.163:34086). Jan 30 13:58:53.267433 systemd-logind[1444]: Removed session 8. Jan 30 13:58:53.317203 sshd[1661]: Accepted publickey for core from 147.75.109.163 port 34086 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 13:58:53.319581 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:53.328337 systemd-logind[1444]: New session 9 of user core. Jan 30 13:58:53.331004 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:58:53.395537 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:58:53.396638 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:58:53.706354 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:58:53.720238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:58:53.918083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:58:53.921427 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:58:54.024235 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:58:54.026693 kubelet[1687]: E0130 13:58:54.026617 1687 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:58:54.036014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:58:54.036251 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:58:54.036464 (dockerd)[1695]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:58:54.617842 dockerd[1695]: time="2025-01-30T13:58:54.617607272Z" level=info msg="Starting up" Jan 30 13:58:54.823549 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2566137216-merged.mount: Deactivated successfully. Jan 30 13:58:54.876397 dockerd[1695]: time="2025-01-30T13:58:54.875681789Z" level=info msg="Loading containers: start." 
Jan 30 13:58:55.071754 kernel: Initializing XFRM netlink socket Jan 30 13:58:55.111869 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 30 13:58:56.199735 systemd-timesyncd[1342]: Contacted time server 184.105.182.16:123 (2.flatcar.pool.ntp.org). Jan 30 13:58:56.199819 systemd-timesyncd[1342]: Initial clock synchronization to Thu 2025-01-30 13:58:56.199385 UTC. Jan 30 13:58:56.201009 systemd-resolved[1325]: Clock change detected. Flushing caches. Jan 30 13:58:56.269177 systemd-networkd[1370]: docker0: Link UP Jan 30 13:58:56.299601 dockerd[1695]: time="2025-01-30T13:58:56.299504922Z" level=info msg="Loading containers: done." Jan 30 13:58:56.331818 dockerd[1695]: time="2025-01-30T13:58:56.331595523Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:58:56.331818 dockerd[1695]: time="2025-01-30T13:58:56.331749850Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:58:56.332198 dockerd[1695]: time="2025-01-30T13:58:56.331911335Z" level=info msg="Daemon has completed initialization" Jan 30 13:58:56.399359 dockerd[1695]: time="2025-01-30T13:58:56.399062030Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:58:56.400079 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:58:57.780621 containerd[1471]: time="2025-01-30T13:58:57.780219516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:58:58.457045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365850941.mount: Deactivated successfully. 
Jan 30 13:59:00.265291 containerd[1471]: time="2025-01-30T13:59:00.265219522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:00.269617 containerd[1471]: time="2025-01-30T13:59:00.269535145Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 13:59:00.273982 containerd[1471]: time="2025-01-30T13:59:00.273113207Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:00.281344 containerd[1471]: time="2025-01-30T13:59:00.281266476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:00.284196 containerd[1471]: time="2025-01-30T13:59:00.284124289Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.503835999s" Jan 30 13:59:00.284437 containerd[1471]: time="2025-01-30T13:59:00.284406284Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:59:00.325460 containerd[1471]: time="2025-01-30T13:59:00.325395748Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:59:02.361935 containerd[1471]: time="2025-01-30T13:59:02.361293569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:02.370986 containerd[1471]: time="2025-01-30T13:59:02.370870723Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 13:59:02.377306 containerd[1471]: time="2025-01-30T13:59:02.377173997Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:02.392005 containerd[1471]: time="2025-01-30T13:59:02.390973548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:02.394499 containerd[1471]: time="2025-01-30T13:59:02.394393234Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.068539093s" Jan 30 13:59:02.394499 containerd[1471]: time="2025-01-30T13:59:02.394463462Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:59:02.433214 
containerd[1471]: time="2025-01-30T13:59:02.432905197Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:59:02.437434 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 30 13:59:03.774254 containerd[1471]: time="2025-01-30T13:59:03.774175380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:03.778811 containerd[1471]: time="2025-01-30T13:59:03.778698318Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 13:59:03.783980 containerd[1471]: time="2025-01-30T13:59:03.783603094Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:03.790909 containerd[1471]: time="2025-01-30T13:59:03.790815611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:03.794450 containerd[1471]: time="2025-01-30T13:59:03.793783300Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.360801193s" Jan 30 13:59:03.794450 containerd[1471]: time="2025-01-30T13:59:03.793867182Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:59:03.836078 containerd[1471]: time="2025-01-30T13:59:03.835988754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:59:05.130563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776749780.mount: Deactivated successfully. Jan 30 13:59:05.132624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:59:05.141204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:05.346262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:05.358734 (kubelet)[1936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:59:05.465834 kubelet[1936]: E0130 13:59:05.465491 1936 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:59:05.472318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:59:05.472554 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:59:05.532567 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Jan 30 13:59:05.923473 containerd[1471]: time="2025-01-30T13:59:05.922839302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:05.926778 containerd[1471]: time="2025-01-30T13:59:05.926213314Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:59:05.930333 containerd[1471]: time="2025-01-30T13:59:05.930261399Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:05.935465 containerd[1471]: time="2025-01-30T13:59:05.935395534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:05.936631 containerd[1471]: time="2025-01-30T13:59:05.936556821Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.10024024s" Jan 30 13:59:05.936631 containerd[1471]: time="2025-01-30T13:59:05.936628982Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:59:05.973992 containerd[1471]: time="2025-01-30T13:59:05.973916017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:59:06.664137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320716212.mount: Deactivated successfully. 
Jan 30 13:59:07.923051 containerd[1471]: time="2025-01-30T13:59:07.922972312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:07.927670 containerd[1471]: time="2025-01-30T13:59:07.927553663Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:59:07.930301 containerd[1471]: time="2025-01-30T13:59:07.930234283Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:07.937487 containerd[1471]: time="2025-01-30T13:59:07.936342099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:07.938990 containerd[1471]: time="2025-01-30T13:59:07.938900006Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.964909353s" Jan 30 13:59:07.938990 containerd[1471]: time="2025-01-30T13:59:07.938994053Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:59:07.976613 containerd[1471]: time="2025-01-30T13:59:07.976548682Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:59:08.612857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733517878.mount: Deactivated successfully. 
Jan 30 13:59:08.630757 containerd[1471]: time="2025-01-30T13:59:08.630453326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:08.633220 containerd[1471]: time="2025-01-30T13:59:08.633100291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:59:08.636970 containerd[1471]: time="2025-01-30T13:59:08.636835728Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:08.643878 containerd[1471]: time="2025-01-30T13:59:08.642397799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:08.643878 containerd[1471]: time="2025-01-30T13:59:08.643673001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 667.065285ms" Jan 30 13:59:08.643878 containerd[1471]: time="2025-01-30T13:59:08.643727890Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:59:08.683570 containerd[1471]: time="2025-01-30T13:59:08.683528940Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:59:08.688271 systemd-resolved[1325]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Jan 30 13:59:09.285002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133276983.mount: Deactivated successfully. 
Jan 30 13:59:11.503687 containerd[1471]: time="2025-01-30T13:59:11.503587531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:11.507499 containerd[1471]: time="2025-01-30T13:59:11.506978700Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:59:11.511028 containerd[1471]: time="2025-01-30T13:59:11.510694741Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:11.519014 containerd[1471]: time="2025-01-30T13:59:11.518900462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:11.520982 containerd[1471]: time="2025-01-30T13:59:11.520790183Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.836969486s" Jan 30 13:59:11.520982 containerd[1471]: time="2025-01-30T13:59:11.520858820Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:59:14.304461 systemd[1]: Started sshd@9-143.198.62.166:22-218.92.0.157:46769.service - OpenSSH per-connection server daemon (218.92.0.157:46769). Jan 30 13:59:14.829269 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:14.838521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:14.880737 systemd[1]: Reloading requested from client PID 2121 ('systemctl') (unit session-9.scope)... Jan 30 13:59:14.880763 systemd[1]: Reloading... Jan 30 13:59:15.074970 zram_generator::config[2163]: No configuration found. Jan 30 13:59:15.285503 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:59:15.410388 systemd[1]: Reloading finished in 528 ms. Jan 30 13:59:15.479153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:15.486220 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:15.491804 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:59:15.492115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:15.497559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:15.659627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:15.667255 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:59:15.755014 kubelet[2218]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:59:15.755735 kubelet[2218]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:59:15.755735 kubelet[2218]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:59:15.765031 kubelet[2218]: I0130 13:59:15.764663 2218 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:59:15.901212 sshd[2225]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.157 user=root Jan 30 13:59:16.214251 kubelet[2218]: I0130 13:59:16.214189 2218 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:59:16.214251 kubelet[2218]: I0130 13:59:16.214229 2218 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:59:16.214591 kubelet[2218]: I0130 13:59:16.214556 2218 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:59:16.239786 kubelet[2218]: I0130 13:59:16.239345 2218 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:59:16.242675 kubelet[2218]: E0130 13:59:16.242630 2218 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.62.166:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:16.263412 kubelet[2218]: I0130 13:59:16.263186 2218 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:59:16.267993 kubelet[2218]: I0130 13:59:16.267696 2218 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:59:16.268163 kubelet[2218]: I0130 13:59:16.267764 2218 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-f-9c719b1623","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:59:16.268163 kubelet[2218]: I0130 13:59:16.268098 2218 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:59:16.268163 kubelet[2218]: I0130 13:59:16.268119 2218 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:59:16.268405 kubelet[2218]: I0130 13:59:16.268308 2218 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:59:16.271367 kubelet[2218]: W0130 13:59:16.271256 2218 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.62.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-9c719b1623&limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:16.271367 kubelet[2218]: E0130 13:59:16.271337 2218 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.62.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-9c719b1623&limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:16.272475 kubelet[2218]: I0130 13:59:16.272427 2218 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:59:16.272475 kubelet[2218]: I0130 13:59:16.272469 2218 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:59:16.272654 kubelet[2218]: I0130 13:59:16.272504 2218 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:59:16.272654 kubelet[2218]: I0130 13:59:16.272523 2218 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:59:16.277773 kubelet[2218]: W0130 13:59:16.277689 2218 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.62.166:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:16.277773 kubelet[2218]: E0130 13:59:16.277754 2218 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.62.166:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:16.278823 kubelet[2218]: I0130 13:59:16.278508 2218 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:59:16.281977 kubelet[2218]: I0130 13:59:16.281198 2218 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:59:16.281977 kubelet[2218]: W0130 13:59:16.281309 2218 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:59:16.282550 kubelet[2218]: I0130 13:59:16.282528 2218 server.go:1264] "Started kubelet" Jan 30 13:59:16.288423 kubelet[2218]: I0130 13:59:16.288348 2218 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:59:16.289723 kubelet[2218]: I0130 13:59:16.289675 2218 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:59:16.293470 kubelet[2218]: I0130 13:59:16.293415 2218 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:59:16.296972 kubelet[2218]: I0130 13:59:16.295672 2218 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:59:16.296972 kubelet[2218]: I0130 13:59:16.295997 2218 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:59:16.296972 kubelet[2218]: E0130 13:59:16.296456 2218 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.62.166:6443/api/v1/namespaces/default/events\": dial tcp 143.198.62.166:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-f-9c719b1623.181f7d1fb29e762b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-f-9c719b1623,UID:ci-4081.3.0-f-9c719b1623,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-f-9c719b1623,},FirstTimestamp:2025-01-30 13:59:16.282488363 +0000 UTC m=+0.608463455,LastTimestamp:2025-01-30 13:59:16.282488363 +0000 UTC m=+0.608463455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-f-9c719b1623,}" Jan 30 13:59:16.301765 kubelet[2218]: E0130 13:59:16.301712 2218 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.3.0-f-9c719b1623\" not found" Jan 30 13:59:16.301994 kubelet[2218]: I0130 13:59:16.301982 2218 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:59:16.302306 kubelet[2218]: I0130 13:59:16.302290 2218 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:59:16.302522 kubelet[2218]: I0130 13:59:16.302510 2218 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:59:16.306389 kubelet[2218]: E0130 13:59:16.305689 2218 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://143.198.62.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-9c719b1623?timeout=10s\": dial tcp 143.198.62.166:6443: connect: connection refused" interval="200ms" Jan 30 13:59:16.306389 kubelet[2218]: W0130 13:59:16.305798 2218 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.62.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:16.306389 kubelet[2218]: E0130 13:59:16.305875 2218 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.62.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:16.308437 kubelet[2218]: I0130 13:59:16.308403 2218 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:59:16.309331 kubelet[2218]: I0130 13:59:16.309291 2218 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:59:16.310405 kubelet[2218]: E0130 13:59:16.309653 2218 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:59:16.312986 kubelet[2218]: I0130 13:59:16.312318 2218 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:59:16.339625 kubelet[2218]: I0130 13:59:16.339596 2218 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:59:16.340041 kubelet[2218]: I0130 13:59:16.340020 2218 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:59:16.340230 kubelet[2218]: I0130 13:59:16.340099 2218 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:59:16.342958 kubelet[2218]: I0130 13:59:16.342706 2218 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:59:16.346166 kubelet[2218]: I0130 13:59:16.345921 2218 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:59:16.346166 kubelet[2218]: I0130 13:59:16.345989 2218 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:59:16.346166 kubelet[2218]: I0130 13:59:16.346022 2218 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:59:16.346725 kubelet[2218]: E0130 13:59:16.346574 2218 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:59:16.349966 kubelet[2218]: W0130 13:59:16.349813 2218 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.62.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:16.349966 kubelet[2218]: E0130 13:59:16.349886 2218 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.62.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:16.404009 kubelet[2218]: I0130 13:59:16.403878 2218 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.404564 kubelet[2218]: E0130 13:59:16.404516 2218 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.62.166:6443/api/v1/nodes\": dial tcp 143.198.62.166:6443: connect: connection refused" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.423142 kubelet[2218]: I0130 13:59:16.423102 2218 policy_none.go:49] "None policy: Start" Jan 30 13:59:16.426886 kubelet[2218]: I0130 13:59:16.426402 2218 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:59:16.426886 kubelet[2218]: I0130 13:59:16.426463 2218 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:59:16.450373 kubelet[2218]: E0130 13:59:16.450283 2218 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:59:16.470685 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:59:16.482994 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:59:16.490159 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 13:59:16.500248 kubelet[2218]: I0130 13:59:16.500208 2218 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:59:16.500475 kubelet[2218]: I0130 13:59:16.500436 2218 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:59:16.500595 kubelet[2218]: I0130 13:59:16.500581 2218 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:59:16.503422 kubelet[2218]: E0130 13:59:16.503383 2218 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-f-9c719b1623\" not found" Jan 30 13:59:16.506659 kubelet[2218]: E0130 13:59:16.506582 2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.62.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-9c719b1623?timeout=10s\": dial tcp 143.198.62.166:6443: connect: connection refused" interval="400ms" Jan 30 13:59:16.607549 kubelet[2218]: I0130 13:59:16.606839 2218 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.607549 kubelet[2218]: E0130 13:59:16.607255 2218 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.62.166:6443/api/v1/nodes\": dial tcp 143.198.62.166:6443: connect: connection refused" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.650876 kubelet[2218]: I0130 13:59:16.650764 2218 topology_manager.go:215] "Topology Admit Handler" podUID="53540ae47a47b659733b2acda9f0aecf" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.652254 kubelet[2218]: I0130 13:59:16.652162 2218 topology_manager.go:215] "Topology Admit Handler" podUID="acedd71468e70b7f426e63a322a7f885" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.655662 kubelet[2218]: I0130 13:59:16.655618 2218 topology_manager.go:215] "Topology Admit Handler" podUID="fe1f09c852a227093eccf9b28b2dd433" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.665225 systemd[1]: Created slice kubepods-burstable-pod53540ae47a47b659733b2acda9f0aecf.slice - libcontainer container kubepods-burstable-pod53540ae47a47b659733b2acda9f0aecf.slice. Jan 30 13:59:16.680257 systemd[1]: Created slice kubepods-burstable-podacedd71468e70b7f426e63a322a7f885.slice - libcontainer container kubepods-burstable-podacedd71468e70b7f426e63a322a7f885.slice. Jan 30 13:59:16.690636 systemd[1]: Created slice kubepods-burstable-podfe1f09c852a227093eccf9b28b2dd433.slice - libcontainer container kubepods-burstable-podfe1f09c852a227093eccf9b28b2dd433.slice. 
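The per-pod slices created at the end of this entry map one-to-one onto the pod UIDs admitted just above (53540ae4…, acedd714…, fe1f09c8…): with the systemd driver the kubelet nests a "…-pod<uid>.slice" under the QoS parent. A small sketch of the naming, assuming the usual convention that dashes in a pod UID are replaced by underscores (these static-pod UIDs contain none, so they pass through unchanged):

```python
# Per-pod slice names under the QoS parent, reproducing the
# "Created slice kubepods-burstable-pod<uid>.slice" entries above.
def pod_slice(parent_slice: str, pod_uid: str) -> str:
    prefix = parent_slice[: -len(".slice")]            # "kubepods-burstable"
    return f"{prefix}-pod{pod_uid.replace('-', '_')}.slice"

for uid in (
    "53540ae47a47b659733b2acda9f0aecf",   # kube-apiserver
    "acedd71468e70b7f426e63a322a7f885",   # kube-controller-manager
    "fe1f09c852a227093eccf9b28b2dd433",   # kube-scheduler
):
    print(pod_slice("kubepods-burstable.slice", uid))
```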
Jan 30 13:59:16.803694 kubelet[2218]: I0130 13:59:16.803535 2218 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53540ae47a47b659733b2acda9f0aecf-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-f-9c719b1623\" (UID: \"53540ae47a47b659733b2acda9f0aecf\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.803694 kubelet[2218]: I0130 13:59:16.803612 2218 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53540ae47a47b659733b2acda9f0aecf-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-f-9c719b1623\" (UID: \"53540ae47a47b659733b2acda9f0aecf\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.803694 kubelet[2218]: I0130 13:59:16.803652 2218 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acedd71468e70b7f426e63a322a7f885-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-9c719b1623\" (UID: \"acedd71468e70b7f426e63a322a7f885\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.804358 kubelet[2218]: I0130 13:59:16.803703 2218 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acedd71468e70b7f426e63a322a7f885-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-9c719b1623\" (UID: \"acedd71468e70b7f426e63a322a7f885\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.804358 kubelet[2218]: I0130 13:59:16.803731 2218 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe1f09c852a227093eccf9b28b2dd433-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-f-9c719b1623\" (UID: \"fe1f09c852a227093eccf9b28b2dd433\") " pod="kube-system/kube-scheduler-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.804358 kubelet[2218]: I0130 13:59:16.803759 2218 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53540ae47a47b659733b2acda9f0aecf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-f-9c719b1623\" (UID: \"53540ae47a47b659733b2acda9f0aecf\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.804358 kubelet[2218]: I0130 13:59:16.803783 2218 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/acedd71468e70b7f426e63a322a7f885-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-f-9c719b1623\" (UID: \"acedd71468e70b7f426e63a322a7f885\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.804358 kubelet[2218]: I0130 13:59:16.803806 2218 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/acedd71468e70b7f426e63a322a7f885-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-f-9c719b1623\" (UID: \"acedd71468e70b7f426e63a322a7f885\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.804660 kubelet[2218]: I0130 13:59:16.803833 2218 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acedd71468e70b7f426e63a322a7f885-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-f-9c719b1623\" (UID: \"acedd71468e70b7f426e63a322a7f885\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:16.908115 kubelet[2218]: E0130 13:59:16.908036 2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.62.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-9c719b1623?timeout=10s\": dial tcp 143.198.62.166:6443: connect: connection refused" interval="800ms" Jan 30 13:59:16.974457 kubelet[2218]: E0130 13:59:16.974404 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:16.975297 containerd[1471]: time="2025-01-30T13:59:16.975254152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-f-9c719b1623,Uid:53540ae47a47b659733b2acda9f0aecf,Namespace:kube-system,Attempt:0,}" Jan 30 13:59:16.988333 kubelet[2218]: E0130 13:59:16.988228 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:16.994630 kubelet[2218]: E0130 13:59:16.994587 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:16.998222 containerd[1471]: time="2025-01-30T13:59:16.997745581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-f-9c719b1623,Uid:acedd71468e70b7f426e63a322a7f885,Namespace:kube-system,Attempt:0,}" Jan 30 13:59:16.998589 containerd[1471]: time="2025-01-30T13:59:16.997747909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-f-9c719b1623,Uid:fe1f09c852a227093eccf9b28b2dd433,Namespace:kube-system,Attempt:0,}" Jan 30 13:59:17.010434 kubelet[2218]: I0130 13:59:17.009793 2218 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:17.010434 kubelet[2218]: E0130 13:59:17.010308 2218 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.62.166:6443/api/v1/nodes\": dial tcp 143.198.62.166:6443: connect: connection refused" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:17.542311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582328747.mount: Deactivated successfully. 
Jan 30 13:59:17.549887 kubelet[2218]: W0130 13:59:17.549727 2218 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.62.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:17.549887 kubelet[2218]: E0130 13:59:17.549876 2218 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://143.198.62.166:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:17.584971 containerd[1471]: time="2025-01-30T13:59:17.583420023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:59:17.585754 containerd[1471]: time="2025-01-30T13:59:17.585711822Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:59:17.588612 containerd[1471]: time="2025-01-30T13:59:17.588511467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:59:17.594283 containerd[1471]: time="2025-01-30T13:59:17.594190636Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:59:17.597402 containerd[1471]: time="2025-01-30T13:59:17.597293664Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:59:17.601012 containerd[1471]: time="2025-01-30T13:59:17.600564051Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:59:17.603081 containerd[1471]: time="2025-01-30T13:59:17.602872122Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:59:17.605119 kubelet[2218]: W0130 13:59:17.604963 2218 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.62.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-9c719b1623&limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:17.605119 kubelet[2218]: E0130 13:59:17.605051 2218 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://143.198.62.166:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-f-9c719b1623&limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:17.610269 containerd[1471]: time="2025-01-30T13:59:17.610188873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:59:17.611916 containerd[1471]: time="2025-01-30T13:59:17.611624124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.12137ms" Jan 30 13:59:17.613048 containerd[1471]: time="2025-01-30T13:59:17.612993465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 637.635609ms" Jan 30 13:59:17.615858 kubelet[2218]: W0130 13:59:17.615476 2218 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.62.166:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:17.615858 kubelet[2218]: E0130 13:59:17.615584 2218 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://143.198.62.166:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:17.618485 containerd[1471]: time="2025-01-30T13:59:17.618395180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 620.515898ms" Jan 30 13:59:17.712192 kubelet[2218]: E0130 13:59:17.709286 2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.62.166:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-f-9c719b1623?timeout=10s\": dial tcp 143.198.62.166:6443: connect: connection refused" interval="1.6s" Jan 30 13:59:17.812918 kubelet[2218]: I0130 13:59:17.812756 2218 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:17.814897 kubelet[2218]: E0130 13:59:17.814314 2218 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://143.198.62.166:6443/api/v1/nodes\": dial tcp 143.198.62.166:6443: connect: connection refused" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:17.816599 sshd[2113]: PAM: Permission denied for root from 218.92.0.157 Jan 30 13:59:17.834792 containerd[1471]: time="2025-01-30T13:59:17.831035300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:59:17.834792 containerd[1471]: time="2025-01-30T13:59:17.831147640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:59:17.834792 containerd[1471]: time="2025-01-30T13:59:17.831195656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:17.834792 containerd[1471]: time="2025-01-30T13:59:17.831389217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:17.835313 containerd[1471]: time="2025-01-30T13:59:17.834846175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:59:17.835313 containerd[1471]: time="2025-01-30T13:59:17.834931642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:59:17.835313 containerd[1471]: time="2025-01-30T13:59:17.834996304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:17.838134 containerd[1471]: time="2025-01-30T13:59:17.837434691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:17.846992 containerd[1471]: time="2025-01-30T13:59:17.845614936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:59:17.846992 containerd[1471]: time="2025-01-30T13:59:17.845739557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:59:17.846992 containerd[1471]: time="2025-01-30T13:59:17.845766083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:17.846992 containerd[1471]: time="2025-01-30T13:59:17.845909702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:17.880386 systemd[1]: Started cri-containerd-7cc6dac1b42d3f21dc7015b4a7be942bc16db1f5ae39b94706768533ef29c6c8.scope - libcontainer container 7cc6dac1b42d3f21dc7015b4a7be942bc16db1f5ae39b94706768533ef29c6c8. Jan 30 13:59:17.884326 kubelet[2218]: W0130 13:59:17.884074 2218 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.62.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:17.884326 kubelet[2218]: E0130 13:59:17.884276 2218 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://143.198.62.166:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:17.893240 systemd[1]: Started cri-containerd-952dd6e8b3739e3eab4e5ff3737968edb07217e9294659786a5243b2b19bd43a.scope - libcontainer container 952dd6e8b3739e3eab4e5ff3737968edb07217e9294659786a5243b2b19bd43a. Jan 30 13:59:17.921262 systemd[1]: Started cri-containerd-9d7305417e4256b49a1481a5adec20747169f507a4ed60ade148a66b4cc2152f.scope - libcontainer container 9d7305417e4256b49a1481a5adec20747169f507a4ed60ade148a66b4cc2152f. 
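Note the interval on the recurring controller.go:145 "Failed to ensure lease exists, will retry" errors doubling across the entries above: 200ms, 400ms, 800ms and now 1.6s, while the API server at 143.198.62.166:6443 is still refusing connections. The progression is consistent with a simple doubling backoff; a toy sketch of that shape (the cap and attempt count here are arbitrary for illustration, not the controller's actual parameters):

```python
# Doubling retry backoff, matching the observed lease-renewal intervals:
# 200ms -> 400ms -> 800ms -> 1.6s while the apiserver is unreachable.
def backoff_intervals(start_s=0.2, factor=2.0, cap_s=7.0, attempts=6):
    interval = start_s
    for _ in range(attempts):
        yield interval
        interval = min(interval * factor, cap_s)

print([f"{i:g}s" for i in backoff_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']
```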
Jan 30 13:59:17.987000 containerd[1471]: time="2025-01-30T13:59:17.985118260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-f-9c719b1623,Uid:acedd71468e70b7f426e63a322a7f885,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cc6dac1b42d3f21dc7015b4a7be942bc16db1f5ae39b94706768533ef29c6c8\"" Jan 30 13:59:17.994166 kubelet[2218]: E0130 13:59:17.993463 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:18.000801 containerd[1471]: time="2025-01-30T13:59:18.000526224Z" level=info msg="CreateContainer within sandbox \"7cc6dac1b42d3f21dc7015b4a7be942bc16db1f5ae39b94706768533ef29c6c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:59:18.036398 containerd[1471]: time="2025-01-30T13:59:18.036334868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-f-9c719b1623,Uid:53540ae47a47b659733b2acda9f0aecf,Namespace:kube-system,Attempt:0,} returns sandbox id \"952dd6e8b3739e3eab4e5ff3737968edb07217e9294659786a5243b2b19bd43a\"" Jan 30 13:59:18.039850 kubelet[2218]: E0130 13:59:18.039781 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:18.049714 containerd[1471]: time="2025-01-30T13:59:18.049533984Z" level=info msg="CreateContainer within sandbox \"952dd6e8b3739e3eab4e5ff3737968edb07217e9294659786a5243b2b19bd43a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:59:18.067091 containerd[1471]: time="2025-01-30T13:59:18.064873746Z" level=info msg="CreateContainer within sandbox \"7cc6dac1b42d3f21dc7015b4a7be942bc16db1f5ae39b94706768533ef29c6c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"234c60f9f505ceaaea157cacc92786986c548b83a83951f0fc1d30f874109276\"" Jan 30 13:59:18.070065 containerd[1471]: time="2025-01-30T13:59:18.068197763Z" level=info msg="StartContainer for \"234c60f9f505ceaaea157cacc92786986c548b83a83951f0fc1d30f874109276\"" Jan 30 13:59:18.076493 containerd[1471]: time="2025-01-30T13:59:18.076430522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-f-9c719b1623,Uid:fe1f09c852a227093eccf9b28b2dd433,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d7305417e4256b49a1481a5adec20747169f507a4ed60ade148a66b4cc2152f\"" Jan 30 13:59:18.078114 kubelet[2218]: E0130 13:59:18.077712 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:18.082167 containerd[1471]: time="2025-01-30T13:59:18.081257271Z" level=info msg="CreateContainer within sandbox \"9d7305417e4256b49a1481a5adec20747169f507a4ed60ade148a66b4cc2152f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:59:18.100505 containerd[1471]: time="2025-01-30T13:59:18.100396533Z" level=info msg="CreateContainer within sandbox \"952dd6e8b3739e3eab4e5ff3737968edb07217e9294659786a5243b2b19bd43a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d7552764a83eccb6027ea577c22b20bb6d5f45de716ff412e9a0192ca38f3911\"" Jan 30 13:59:18.101669 containerd[1471]: time="2025-01-30T13:59:18.101604297Z" level=info msg="StartContainer for 
\"d7552764a83eccb6027ea577c22b20bb6d5f45de716ff412e9a0192ca38f3911\"" Jan 30 13:59:18.116684 containerd[1471]: time="2025-01-30T13:59:18.116471588Z" level=info msg="CreateContainer within sandbox \"9d7305417e4256b49a1481a5adec20747169f507a4ed60ade148a66b4cc2152f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c2118265bd2813adc0537d2d42de06b3f7a7ee336af8e56eaa7cd947a4e9e5cf\"" Jan 30 13:59:18.119007 containerd[1471]: time="2025-01-30T13:59:18.118056428Z" level=info msg="StartContainer for \"c2118265bd2813adc0537d2d42de06b3f7a7ee336af8e56eaa7cd947a4e9e5cf\"" Jan 30 13:59:18.128931 systemd[1]: Started cri-containerd-234c60f9f505ceaaea157cacc92786986c548b83a83951f0fc1d30f874109276.scope - libcontainer container 234c60f9f505ceaaea157cacc92786986c548b83a83951f0fc1d30f874109276. Jan 30 13:59:18.199261 systemd[1]: Started cri-containerd-d7552764a83eccb6027ea577c22b20bb6d5f45de716ff412e9a0192ca38f3911.scope - libcontainer container d7552764a83eccb6027ea577c22b20bb6d5f45de716ff412e9a0192ca38f3911. Jan 30 13:59:18.217353 systemd[1]: Started cri-containerd-c2118265bd2813adc0537d2d42de06b3f7a7ee336af8e56eaa7cd947a4e9e5cf.scope - libcontainer container c2118265bd2813adc0537d2d42de06b3f7a7ee336af8e56eaa7cd947a4e9e5cf. Jan 30 13:59:18.255246 containerd[1471]: time="2025-01-30T13:59:18.255164523Z" level=info msg="StartContainer for \"234c60f9f505ceaaea157cacc92786986c548b83a83951f0fc1d30f874109276\" returns successfully" Jan 30 13:59:18.304091 kubelet[2218]: E0130 13:59:18.304009 2218 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://143.198.62.166:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 143.198.62.166:6443: connect: connection refused Jan 30 13:59:18.319923 containerd[1471]: time="2025-01-30T13:59:18.318191908Z" level=info msg="StartContainer for \"d7552764a83eccb6027ea577c22b20bb6d5f45de716ff412e9a0192ca38f3911\" returns successfully" Jan 30 13:59:18.356033 containerd[1471]: time="2025-01-30T13:59:18.354707798Z" level=info msg="StartContainer for \"c2118265bd2813adc0537d2d42de06b3f7a7ee336af8e56eaa7cd947a4e9e5cf\" returns successfully" Jan 30 13:59:18.370235 kubelet[2218]: E0130 13:59:18.370186 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:18.375322 kubelet[2218]: E0130 13:59:18.374919 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:19.378274 kubelet[2218]: E0130 13:59:19.377768 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:19.378274 kubelet[2218]: E0130 13:59:19.378065 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:19.425070 kubelet[2218]: I0130 13:59:19.424485 2218 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:20.906984 sshd[2113]: Received disconnect from 218.92.0.157 port 46769:11: [preauth] Jan 30 13:59:20.906984 sshd[2113]: 
Disconnected from authenticating user root 218.92.0.157 port 46769 [preauth] Jan 30 13:59:20.908577 systemd[1]: sshd@9-143.198.62.166:22-218.92.0.157:46769.service: Deactivated successfully. Jan 30 13:59:21.130444 kubelet[2218]: E0130 13:59:21.130403 2218 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-f-9c719b1623\" not found" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:21.155512 kubelet[2218]: E0130 13:59:21.155478 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:21.223136 kubelet[2218]: I0130 13:59:21.221859 2218 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:21.275971 kubelet[2218]: I0130 13:59:21.274968 2218 apiserver.go:52] "Watching apiserver" Jan 30 13:59:21.303263 kubelet[2218]: I0130 13:59:21.303202 2218 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:59:22.572289 kubelet[2218]: W0130 13:59:22.572178 2218 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:59:22.573056 kubelet[2218]: E0130 13:59:22.572918 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:23.387419 kubelet[2218]: E0130 13:59:23.387369 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:23.628763 systemd[1]: Reloading requested from client PID 2501 ('systemctl') (unit session-9.scope)... Jan 30 13:59:23.628784 systemd[1]: Reloading... Jan 30 13:59:23.772146 zram_generator::config[2541]: No configuration found. Jan 30 13:59:23.973090 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:59:24.145296 systemd[1]: Reloading finished in 515 ms. Jan 30 13:59:24.214985 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:24.229857 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:59:24.230390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:24.230471 systemd[1]: kubelet.service: Consumed 1.141s CPU time, 113.3M memory peak, 0B memory swap peak. Jan 30 13:59:24.237584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:59:24.498307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:59:24.500416 (kubelet)[2591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:59:24.600854 kubelet[2591]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:59:24.600854 kubelet[2591]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jan 30 13:59:24.600854 kubelet[2591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:59:24.601633 kubelet[2591]: I0130 13:59:24.600910 2591 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:59:24.609372 kubelet[2591]: I0130 13:59:24.609298 2591 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:59:24.609372 kubelet[2591]: I0130 13:59:24.609336 2591 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:59:24.609608 kubelet[2591]: I0130 13:59:24.609597 2591 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:59:24.615178 kubelet[2591]: I0130 13:59:24.615100 2591 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:59:24.623547 kubelet[2591]: I0130 13:59:24.622563 2591 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:59:24.639989 kubelet[2591]: I0130 13:59:24.639625 2591 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:59:24.640269 kubelet[2591]: I0130 13:59:24.640205 2591 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:59:24.640643 kubelet[2591]: I0130 13:59:24.640349 2591 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-f-9c719b1623","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:59:24.641394 kubelet[2591]: I0130 13:59:24.640872 2591 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:59:24.641394 kubelet[2591]: I0130 13:59:24.640906 2591 container_manager_linux.go:301] "Creating device plugin 
manager" Jan 30 13:59:24.641394 kubelet[2591]: I0130 13:59:24.641017 2591 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:59:24.641394 kubelet[2591]: I0130 13:59:24.641185 2591 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:59:24.641394 kubelet[2591]: I0130 13:59:24.641217 2591 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:59:24.641394 kubelet[2591]: I0130 13:59:24.641258 2591 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:59:24.641394 kubelet[2591]: I0130 13:59:24.641299 2591 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:59:24.643751 kubelet[2591]: I0130 13:59:24.643727 2591 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:59:24.644255 kubelet[2591]: I0130 13:59:24.644229 2591 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:59:24.647980 kubelet[2591]: I0130 13:59:24.645474 2591 server.go:1264] "Started kubelet" Jan 30 13:59:24.654132 kubelet[2591]: I0130 13:59:24.654102 2591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:59:24.671187 kubelet[2591]: I0130 13:59:24.671133 2591 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:59:24.676584 kubelet[2591]: I0130 13:59:24.676547 2591 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:59:24.683105 kubelet[2591]: I0130 13:59:24.680222 2591 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:59:24.683316 kubelet[2591]: I0130 13:59:24.683253 2591 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:59:24.687410 kubelet[2591]: I0130 13:59:24.687129 2591 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:59:24.687851 sudo[2605]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 30 13:59:24.688465 sudo[2605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:59:24.691013 kubelet[2591]: I0130 13:59:24.689305 2591 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:59:24.691013 kubelet[2591]: I0130 13:59:24.689538 2591 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:59:24.698581 kubelet[2591]: I0130 13:59:24.698531 2591 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:59:24.698747 kubelet[2591]: I0130 13:59:24.698672 2591 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:59:24.704199 kubelet[2591]: I0130 13:59:24.703803 2591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:59:24.707994 kubelet[2591]: I0130 13:59:24.707309 2591 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:59:24.707994 kubelet[2591]: I0130 13:59:24.707374 2591 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:59:24.707994 kubelet[2591]: I0130 13:59:24.707414 2591 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:59:24.707994 kubelet[2591]: E0130 13:59:24.707492 2591 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:59:24.712133 kubelet[2591]: I0130 13:59:24.711895 2591 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:59:24.811585 kubelet[2591]: I0130 13:59:24.811453 2591 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:24.817919 kubelet[2591]: E0130 13:59:24.816355 2591 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:59:24.840928 kubelet[2591]: I0130 13:59:24.840203 2591 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:24.840928 kubelet[2591]: I0130 13:59:24.840334 2591 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-f-9c719b1623" Jan 30 13:59:24.871278 kubelet[2591]: I0130 13:59:24.871239 2591 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:59:24.871888 kubelet[2591]: I0130 13:59:24.871508 2591 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:59:24.871888 kubelet[2591]: I0130 13:59:24.871542 2591 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:59:24.871888 kubelet[2591]: I0130 13:59:24.871767 2591 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:59:24.871888 kubelet[2591]: I0130 13:59:24.871782 2591 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:59:24.871888 kubelet[2591]: I0130 13:59:24.871840 2591 policy_none.go:49] "None policy: Start" Jan 30 13:59:24.876563 kubelet[2591]: I0130 13:59:24.875560 2591 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:59:24.876563 kubelet[2591]: I0130 13:59:24.875606 2591 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:59:24.876563 kubelet[2591]: I0130 13:59:24.875911 2591 state_mem.go:75] "Updated machine memory state" Jan 30 13:59:24.896131 kubelet[2591]: I0130 13:59:24.894549 2591 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:59:24.896131 kubelet[2591]: I0130 13:59:24.894806 2591 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:59:24.906038 kubelet[2591]: I0130 13:59:24.905643 2591 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:59:25.017322 kubelet[2591]: I0130 13:59:25.017244 2591 topology_manager.go:215] "Topology Admit Handler" podUID="acedd71468e70b7f426e63a322a7f885" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.019600 kubelet[2591]: I0130 13:59:25.018164 2591 topology_manager.go:215] "Topology Admit Handler" podUID="fe1f09c852a227093eccf9b28b2dd433" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.021487 kubelet[2591]: I0130 13:59:25.021235 2591 topology_manager.go:215] "Topology Admit Handler" podUID="53540ae47a47b659733b2acda9f0aecf" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-f-9c719b1623" Jan 
30 13:59:25.060450 kubelet[2591]: W0130 13:59:25.058231 2591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:59:25.060450 kubelet[2591]: W0130 13:59:25.059022 2591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:59:25.064202 kubelet[2591]: W0130 13:59:25.063915 2591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:59:25.064202 kubelet[2591]: E0130 13:59:25.064035 2591 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-f-9c719b1623\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.092327 kubelet[2591]: I0130 13:59:25.092204 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acedd71468e70b7f426e63a322a7f885-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-9c719b1623\" (UID: \"acedd71468e70b7f426e63a322a7f885\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.092729 kubelet[2591]: I0130 13:59:25.092668 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/acedd71468e70b7f426e63a322a7f885-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-f-9c719b1623\" (UID: \"acedd71468e70b7f426e63a322a7f885\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.093047 kubelet[2591]: I0130 13:59:25.092879 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/acedd71468e70b7f426e63a322a7f885-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-f-9c719b1623\" (UID: \"acedd71468e70b7f426e63a322a7f885\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.093047 kubelet[2591]: I0130 13:59:25.092998 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acedd71468e70b7f426e63a322a7f885-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-f-9c719b1623\" (UID: \"acedd71468e70b7f426e63a322a7f885\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.093909 kubelet[2591]: I0130 13:59:25.093306 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53540ae47a47b659733b2acda9f0aecf-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-f-9c719b1623\" (UID: \"53540ae47a47b659733b2acda9f0aecf\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.093909 kubelet[2591]: I0130 13:59:25.093507 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acedd71468e70b7f426e63a322a7f885-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-f-9c719b1623\" (UID: \"acedd71468e70b7f426e63a322a7f885\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.093909 kubelet[2591]: 
I0130 13:59:25.093666 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe1f09c852a227093eccf9b28b2dd433-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-f-9c719b1623\" (UID: \"fe1f09c852a227093eccf9b28b2dd433\") " pod="kube-system/kube-scheduler-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.093909 kubelet[2591]: I0130 13:59:25.093699 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53540ae47a47b659733b2acda9f0aecf-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-f-9c719b1623\" (UID: \"53540ae47a47b659733b2acda9f0aecf\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.093909 kubelet[2591]: I0130 13:59:25.093735 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53540ae47a47b659733b2acda9f0aecf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-f-9c719b1623\" (UID: \"53540ae47a47b659733b2acda9f0aecf\") " pod="kube-system/kube-apiserver-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.361045 kubelet[2591]: E0130 13:59:25.360460 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:25.362809 kubelet[2591]: E0130 13:59:25.362278 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:25.365497 kubelet[2591]: E0130 13:59:25.365429 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:25.643914 kubelet[2591]: I0130 13:59:25.643365 2591 apiserver.go:52] "Watching apiserver" Jan 30 13:59:25.649433 sudo[2605]: pam_unix(sudo:session): session closed for user root Jan 30 13:59:25.690255 kubelet[2591]: I0130 13:59:25.690016 2591 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:59:25.765057 kubelet[2591]: E0130 13:59:25.764694 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:25.775531 kubelet[2591]: W0130 13:59:25.775477 2591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:59:25.776434 kubelet[2591]: E0130 13:59:25.775577 2591 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-f-9c719b1623\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.776434 kubelet[2591]: E0130 13:59:25.776298 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:25.780265 kubelet[2591]: W0130 13:59:25.780226 2591 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 
13:59:25.780602 kubelet[2591]: E0130 13:59:25.780575 2591 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-f-9c719b1623\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-f-9c719b1623" Jan 30 13:59:25.782522 kubelet[2591]: E0130 13:59:25.782458 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:25.825167 kubelet[2591]: I0130 13:59:25.825101 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-f-9c719b1623" podStartSLOduration=0.825077999 podStartE2EDuration="825.077999ms" podCreationTimestamp="2025-01-30 13:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:59:25.824280761 +0000 UTC m=+1.311531684" watchObservedRunningTime="2025-01-30 13:59:25.825077999 +0000 UTC m=+1.312328922" Jan 30 13:59:25.878408 kubelet[2591]: I0130 13:59:25.878347 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-f-9c719b1623" podStartSLOduration=0.878323847 podStartE2EDuration="878.323847ms" podCreationTimestamp="2025-01-30 13:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:59:25.842206615 +0000 UTC m=+1.329457537" watchObservedRunningTime="2025-01-30 13:59:25.878323847 +0000 UTC m=+1.365574769" Jan 30 13:59:26.439907 update_engine[1445]: I20250130 13:59:26.438010 1445 update_attempter.cc:509] Updating boot flags... Jan 30 13:59:26.494989 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2640) Jan 30 13:59:26.605985 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2638) Jan 30 13:59:26.697982 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2638) Jan 30 13:59:26.771979 kubelet[2591]: E0130 13:59:26.769620 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:26.775054 kubelet[2591]: E0130 13:59:26.772760 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:26.776649 kubelet[2591]: E0130 13:59:26.776597 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:27.704742 sudo[1664]: pam_unix(sudo:session): session closed for user root Jan 30 13:59:27.709744 sshd[1661]: pam_unix(sshd:session): session closed for user core Jan 30 13:59:27.715751 systemd[1]: sshd@8-143.198.62.166:22-147.75.109.163:34086.service: Deactivated successfully. Jan 30 13:59:27.719222 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:59:27.719679 systemd[1]: session-9.scope: Consumed 6.337s CPU time, 191.2M memory peak, 0B memory swap peak. Jan 30 13:59:27.721306 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:59:27.723197 systemd-logind[1444]: Removed session 9. 
Jan 30 13:59:27.809417 kubelet[2591]: E0130 13:59:27.808931 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:34.236923 kubelet[2591]: E0130 13:59:34.236818 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:34.263596 kubelet[2591]: I0130 13:59:34.263516 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-f-9c719b1623" podStartSLOduration=12.263458634 podStartE2EDuration="12.263458634s" podCreationTimestamp="2025-01-30 13:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:59:25.886528386 +0000 UTC m=+1.373779310" watchObservedRunningTime="2025-01-30 13:59:34.263458634 +0000 UTC m=+9.750709552" Jan 30 13:59:34.787149 kubelet[2591]: E0130 13:59:34.787007 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:35.087711 kubelet[2591]: E0130 13:59:35.087307 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:35.790773 kubelet[2591]: E0130 13:59:35.790723 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:37.815681 kubelet[2591]: E0130 13:59:37.815642 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:38.130661 kubelet[2591]: I0130 13:59:38.130179 2591 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:59:38.131839 containerd[1471]: time="2025-01-30T13:59:38.131756470Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:59:38.132825 kubelet[2591]: I0130 13:59:38.132283 2591 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:59:38.799020 kubelet[2591]: E0130 13:59:38.797748 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:38.983089 kubelet[2591]: I0130 13:59:38.982366 2591 topology_manager.go:215] "Topology Admit Handler" podUID="7bce4ef6-0324-494e-8553-f5f1b34610d2" podNamespace="kube-system" podName="kube-proxy-hhhtf" Jan 30 13:59:38.995680 kubelet[2591]: I0130 13:59:38.994193 2591 topology_manager.go:215] "Topology Admit Handler" podUID="7f0f1169-53d9-4dea-9562-11b25b7a019d" podNamespace="kube-system" podName="cilium-m7rgr" Jan 30 13:59:38.998015 systemd[1]: Created slice kubepods-besteffort-pod7bce4ef6_0324_494e_8553_f5f1b34610d2.slice - libcontainer container kubepods-besteffort-pod7bce4ef6_0324_494e_8553_f5f1b34610d2.slice. 
Jan 30 13:59:39.009926 kubelet[2591]: I0130 13:59:39.007861 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-host-proc-sys-net\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.009926 kubelet[2591]: I0130 13:59:39.007988 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-hostproc\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.009926 kubelet[2591]: I0130 13:59:39.008019 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-xtables-lock\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.009926 kubelet[2591]: I0130 13:59:39.008046 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bce4ef6-0324-494e-8553-f5f1b34610d2-lib-modules\") pod \"kube-proxy-hhhtf\" (UID: \"7bce4ef6-0324-494e-8553-f5f1b34610d2\") " pod="kube-system/kube-proxy-hhhtf" Jan 30 13:59:39.009926 kubelet[2591]: I0130 13:59:39.008082 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-bpf-maps\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.009926 kubelet[2591]: I0130 13:59:39.008127 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf69c\" (UniqueName: \"kubernetes.io/projected/7bce4ef6-0324-494e-8553-f5f1b34610d2-kube-api-access-gf69c\") pod \"kube-proxy-hhhtf\" (UID: \"7bce4ef6-0324-494e-8553-f5f1b34610d2\") " pod="kube-system/kube-proxy-hhhtf" Jan 30 13:59:39.010319 kubelet[2591]: I0130 13:59:39.008154 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-config-path\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.010319 kubelet[2591]: I0130 13:59:39.008177 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msvgk\" (UniqueName: \"kubernetes.io/projected/7f0f1169-53d9-4dea-9562-11b25b7a019d-kube-api-access-msvgk\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.010319 kubelet[2591]: I0130 13:59:39.008206 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-run\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.010319 kubelet[2591]: I0130 13:59:39.008232 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-etc-cni-netd\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.010319 kubelet[2591]: I0130 13:59:39.008255 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-lib-modules\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.010319 kubelet[2591]: I0130 13:59:39.008312 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-cgroup\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.010677 kubelet[2591]: I0130 13:59:39.008334 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bce4ef6-0324-494e-8553-f5f1b34610d2-xtables-lock\") pod \"kube-proxy-hhhtf\" (UID: \"7bce4ef6-0324-494e-8553-f5f1b34610d2\") " pod="kube-system/kube-proxy-hhhtf" Jan 30 13:59:39.010677 kubelet[2591]: I0130 13:59:39.008359 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f0f1169-53d9-4dea-9562-11b25b7a019d-clustermesh-secrets\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.010677 kubelet[2591]: I0130 13:59:39.008383 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-host-proc-sys-kernel\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.010677 kubelet[2591]: I0130 13:59:39.008417 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7bce4ef6-0324-494e-8553-f5f1b34610d2-kube-proxy\") pod \"kube-proxy-hhhtf\" (UID: \"7bce4ef6-0324-494e-8553-f5f1b34610d2\") " pod="kube-system/kube-proxy-hhhtf" Jan 30 13:59:39.010677 kubelet[2591]: I0130 13:59:39.008439 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cni-path\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.010677 kubelet[2591]: I0130 13:59:39.008467 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f0f1169-53d9-4dea-9562-11b25b7a019d-hubble-tls\") pod \"cilium-m7rgr\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " pod="kube-system/cilium-m7rgr" Jan 30 13:59:39.021031 systemd[1]: Created slice kubepods-burstable-pod7f0f1169_53d9_4dea_9562_11b25b7a019d.slice - libcontainer container kubepods-burstable-pod7f0f1169_53d9_4dea_9562_11b25b7a019d.slice. 
Jan 30 13:59:39.236295 kubelet[2591]: I0130 13:59:39.236231 2591 topology_manager.go:215] "Topology Admit Handler" podUID="d9bf71e6-f1ff-4e52-85ce-5684a6ee6828" podNamespace="kube-system" podName="cilium-operator-599987898-b59zn" Jan 30 13:59:39.251739 systemd[1]: Created slice kubepods-besteffort-podd9bf71e6_f1ff_4e52_85ce_5684a6ee6828.slice - libcontainer container kubepods-besteffort-podd9bf71e6_f1ff_4e52_85ce_5684a6ee6828.slice. Jan 30 13:59:39.310371 kubelet[2591]: E0130 13:59:39.310289 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:39.311220 kubelet[2591]: I0130 13:59:39.311185 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9bf71e6-f1ff-4e52-85ce-5684a6ee6828-cilium-config-path\") pod \"cilium-operator-599987898-b59zn\" (UID: \"d9bf71e6-f1ff-4e52-85ce-5684a6ee6828\") " pod="kube-system/cilium-operator-599987898-b59zn" Jan 30 13:59:39.311362 kubelet[2591]: I0130 13:59:39.311235 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lj5b\" (UniqueName: \"kubernetes.io/projected/d9bf71e6-f1ff-4e52-85ce-5684a6ee6828-kube-api-access-7lj5b\") pod \"cilium-operator-599987898-b59zn\" (UID: \"d9bf71e6-f1ff-4e52-85ce-5684a6ee6828\") " pod="kube-system/cilium-operator-599987898-b59zn" Jan 30 13:59:39.311634 containerd[1471]: time="2025-01-30T13:59:39.311586713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hhhtf,Uid:7bce4ef6-0324-494e-8553-f5f1b34610d2,Namespace:kube-system,Attempt:0,}" Jan 30 13:59:39.326587 kubelet[2591]: E0130 13:59:39.326537 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:39.327304 containerd[1471]: time="2025-01-30T13:59:39.327224596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m7rgr,Uid:7f0f1169-53d9-4dea-9562-11b25b7a019d,Namespace:kube-system,Attempt:0,}" Jan 30 13:59:39.404163 containerd[1471]: time="2025-01-30T13:59:39.403887628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:59:39.404163 containerd[1471]: time="2025-01-30T13:59:39.403986252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:59:39.404163 containerd[1471]: time="2025-01-30T13:59:39.404022666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:39.406354 containerd[1471]: time="2025-01-30T13:59:39.404153346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:39.436462 containerd[1471]: time="2025-01-30T13:59:39.436168420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:59:39.436462 containerd[1471]: time="2025-01-30T13:59:39.436292489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:59:39.436462 containerd[1471]: time="2025-01-30T13:59:39.436319467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:39.436861 containerd[1471]: time="2025-01-30T13:59:39.436461982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:39.438323 systemd[1]: Started cri-containerd-23005b339e90b0a0cee92fba160dc1b29adfce959ddcbc83e11a17e2102bcae4.scope - libcontainer container 23005b339e90b0a0cee92fba160dc1b29adfce959ddcbc83e11a17e2102bcae4. Jan 30 13:59:39.481496 systemd[1]: Started cri-containerd-3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de.scope - libcontainer container 3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de. Jan 30 13:59:39.488146 containerd[1471]: time="2025-01-30T13:59:39.486400813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hhhtf,Uid:7bce4ef6-0324-494e-8553-f5f1b34610d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"23005b339e90b0a0cee92fba160dc1b29adfce959ddcbc83e11a17e2102bcae4\"" Jan 30 13:59:39.488330 kubelet[2591]: E0130 13:59:39.487291 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:39.496753 containerd[1471]: time="2025-01-30T13:59:39.496341412Z" level=info msg="CreateContainer within sandbox \"23005b339e90b0a0cee92fba160dc1b29adfce959ddcbc83e11a17e2102bcae4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:59:39.538863 containerd[1471]: time="2025-01-30T13:59:39.538750090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m7rgr,Uid:7f0f1169-53d9-4dea-9562-11b25b7a019d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\"" Jan 30 13:59:39.540564 kubelet[2591]: E0130 13:59:39.540343 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:39.547331 containerd[1471]: time="2025-01-30T13:59:39.547269261Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:59:39.559430 kubelet[2591]: E0130 13:59:39.559047 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:39.560674 containerd[1471]: time="2025-01-30T13:59:39.560584240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b59zn,Uid:d9bf71e6-f1ff-4e52-85ce-5684a6ee6828,Namespace:kube-system,Attempt:0,}" Jan 30 13:59:39.562714 containerd[1471]: time="2025-01-30T13:59:39.562410919Z" level=info msg="CreateContainer within sandbox \"23005b339e90b0a0cee92fba160dc1b29adfce959ddcbc83e11a17e2102bcae4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a97cc4456c8cb26a82ff3bf01077517f929a50ace2d79f11d57441f7ca9e713\"" Jan 30 13:59:39.565634 containerd[1471]: time="2025-01-30T13:59:39.565162914Z" level=info msg="StartContainer for \"3a97cc4456c8cb26a82ff3bf01077517f929a50ace2d79f11d57441f7ca9e713\"" Jan 30 13:59:39.623284 
systemd[1]: Started cri-containerd-3a97cc4456c8cb26a82ff3bf01077517f929a50ace2d79f11d57441f7ca9e713.scope - libcontainer container 3a97cc4456c8cb26a82ff3bf01077517f929a50ace2d79f11d57441f7ca9e713. Jan 30 13:59:39.646049 containerd[1471]: time="2025-01-30T13:59:39.645337301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:59:39.646049 containerd[1471]: time="2025-01-30T13:59:39.645454271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:59:39.646049 containerd[1471]: time="2025-01-30T13:59:39.645476500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:39.646049 containerd[1471]: time="2025-01-30T13:59:39.645739423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:59:39.686266 systemd[1]: Started cri-containerd-36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713.scope - libcontainer container 36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713. Jan 30 13:59:39.696307 containerd[1471]: time="2025-01-30T13:59:39.694925669Z" level=info msg="StartContainer for \"3a97cc4456c8cb26a82ff3bf01077517f929a50ace2d79f11d57441f7ca9e713\" returns successfully" Jan 30 13:59:39.759672 containerd[1471]: time="2025-01-30T13:59:39.758711846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b59zn,Uid:d9bf71e6-f1ff-4e52-85ce-5684a6ee6828,Namespace:kube-system,Attempt:0,} returns sandbox id \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\"" Jan 30 13:59:39.761394 kubelet[2591]: E0130 13:59:39.761005 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:39.805652 kubelet[2591]: E0130 13:59:39.804979 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:44.765315 kubelet[2591]: I0130 13:59:44.765248 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hhhtf" podStartSLOduration=6.765224805 podStartE2EDuration="6.765224805s" podCreationTimestamp="2025-01-30 13:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:59:39.824064276 +0000 UTC m=+15.311315219" watchObservedRunningTime="2025-01-30 13:59:44.765224805 +0000 UTC m=+20.252475725" Jan 30 13:59:44.939080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3162095339.mount: Deactivated successfully. 
Jan 30 13:59:47.718737 containerd[1471]: time="2025-01-30T13:59:47.718648984Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:47.721849 containerd[1471]: time="2025-01-30T13:59:47.721673259Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:59:47.723325 containerd[1471]: time="2025-01-30T13:59:47.723231249Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:47.732716 containerd[1471]: time="2025-01-30T13:59:47.732644525Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.185311255s" Jan 30 13:59:47.732716 containerd[1471]: time="2025-01-30T13:59:47.732697348Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:59:47.748221 containerd[1471]: time="2025-01-30T13:59:47.748066945Z" level=info msg="CreateContainer within sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:59:47.749172 containerd[1471]: time="2025-01-30T13:59:47.748794911Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:59:47.896034 containerd[1471]: time="2025-01-30T13:59:47.895894597Z" level=info msg="CreateContainer within sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5\"" Jan 30 13:59:47.896632 containerd[1471]: time="2025-01-30T13:59:47.896594188Z" level=info msg="StartContainer for \"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5\"" Jan 30 13:59:48.020285 systemd[1]: Started cri-containerd-9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5.scope - libcontainer container 9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5. Jan 30 13:59:48.066972 containerd[1471]: time="2025-01-30T13:59:48.065719289Z" level=info msg="StartContainer for \"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5\" returns successfully" Jan 30 13:59:48.082352 systemd[1]: cri-containerd-9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5.scope: Deactivated successfully. 
Jan 30 13:59:48.300568 containerd[1471]: time="2025-01-30T13:59:48.282657874Z" level=info msg="shim disconnected" id=9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5 namespace=k8s.io Jan 30 13:59:48.300568 containerd[1471]: time="2025-01-30T13:59:48.300159527Z" level=warning msg="cleaning up after shim disconnected" id=9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5 namespace=k8s.io Jan 30 13:59:48.300568 containerd[1471]: time="2025-01-30T13:59:48.300186786Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:59:48.844961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5-rootfs.mount: Deactivated successfully. Jan 30 13:59:48.856294 kubelet[2591]: E0130 13:59:48.855968 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:48.865313 containerd[1471]: time="2025-01-30T13:59:48.863797161Z" level=info msg="CreateContainer within sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:59:48.906410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065148551.mount: Deactivated successfully. Jan 30 13:59:48.913827 containerd[1471]: time="2025-01-30T13:59:48.913644218Z" level=info msg="CreateContainer within sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd\"" Jan 30 13:59:48.915288 containerd[1471]: time="2025-01-30T13:59:48.915247312Z" level=info msg="StartContainer for \"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd\"" Jan 30 13:59:48.973235 systemd[1]: Started cri-containerd-eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd.scope - libcontainer container eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd. Jan 30 13:59:49.022726 containerd[1471]: time="2025-01-30T13:59:49.022565605Z" level=info msg="StartContainer for \"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd\" returns successfully" Jan 30 13:59:49.040546 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:59:49.041490 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:59:49.041588 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:59:49.049396 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:59:49.050073 systemd[1]: cri-containerd-eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd.scope: Deactivated successfully. Jan 30 13:59:49.086568 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:59:49.091168 containerd[1471]: time="2025-01-30T13:59:49.091006437Z" level=info msg="shim disconnected" id=eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd namespace=k8s.io Jan 30 13:59:49.091168 containerd[1471]: time="2025-01-30T13:59:49.091149056Z" level=warning msg="cleaning up after shim disconnected" id=eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd namespace=k8s.io Jan 30 13:59:49.091168 containerd[1471]: time="2025-01-30T13:59:49.091159280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:59:49.844526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd-rootfs.mount: Deactivated successfully. Jan 30 13:59:49.860518 kubelet[2591]: E0130 13:59:49.860449 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:49.864660 containerd[1471]: time="2025-01-30T13:59:49.863970497Z" level=info msg="CreateContainer within sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:59:49.925663 containerd[1471]: time="2025-01-30T13:59:49.925432501Z" level=info msg="CreateContainer within sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585\"" Jan 30 13:59:49.926997 containerd[1471]: time="2025-01-30T13:59:49.926930228Z" level=info msg="StartContainer for \"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585\"" Jan 30 13:59:49.984306 systemd[1]: Started cri-containerd-5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585.scope - libcontainer container 5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585. Jan 30 13:59:50.037923 containerd[1471]: time="2025-01-30T13:59:50.037852726Z" level=info msg="StartContainer for \"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585\" returns successfully" Jan 30 13:59:50.039557 systemd[1]: cri-containerd-5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585.scope: Deactivated successfully. Jan 30 13:59:50.084355 containerd[1471]: time="2025-01-30T13:59:50.083918053Z" level=info msg="shim disconnected" id=5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585 namespace=k8s.io Jan 30 13:59:50.084355 containerd[1471]: time="2025-01-30T13:59:50.084131134Z" level=warning msg="cleaning up after shim disconnected" id=5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585 namespace=k8s.io Jan 30 13:59:50.084355 containerd[1471]: time="2025-01-30T13:59:50.084152344Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:59:50.844893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585-rootfs.mount: Deactivated successfully. 
Jan 30 13:59:50.866244 kubelet[2591]: E0130 13:59:50.866203 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:50.872253 containerd[1471]: time="2025-01-30T13:59:50.871333256Z" level=info msg="CreateContainer within sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:59:50.916977 containerd[1471]: time="2025-01-30T13:59:50.916880234Z" level=info msg="CreateContainer within sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf\"" Jan 30 13:59:50.919286 containerd[1471]: time="2025-01-30T13:59:50.919207983Z" level=info msg="StartContainer for \"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf\"" Jan 30 13:59:50.977460 systemd[1]: Started cri-containerd-892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf.scope - libcontainer container 892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf. Jan 30 13:59:51.025142 systemd[1]: cri-containerd-892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf.scope: Deactivated successfully. Jan 30 13:59:51.030296 containerd[1471]: time="2025-01-30T13:59:51.030206025Z" level=info msg="StartContainer for \"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf\" returns successfully" Jan 30 13:59:51.069378 containerd[1471]: time="2025-01-30T13:59:51.069286690Z" level=info msg="shim disconnected" id=892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf namespace=k8s.io Jan 30 13:59:51.070075 containerd[1471]: time="2025-01-30T13:59:51.069840719Z" level=warning msg="cleaning up after shim disconnected" id=892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf namespace=k8s.io Jan 30 13:59:51.070075 containerd[1471]: time="2025-01-30T13:59:51.069892026Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:59:51.846481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf-rootfs.mount: Deactivated successfully. Jan 30 13:59:51.875014 kubelet[2591]: E0130 13:59:51.873558 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:51.883750 containerd[1471]: time="2025-01-30T13:59:51.883594130Z" level=info msg="CreateContainer within sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:59:51.933580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4276897080.mount: Deactivated successfully. 
Jan 30 13:59:51.940761 containerd[1471]: time="2025-01-30T13:59:51.940424969Z" level=info msg="CreateContainer within sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\"" Jan 30 13:59:51.943276 containerd[1471]: time="2025-01-30T13:59:51.941807333Z" level=info msg="StartContainer for \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\"" Jan 30 13:59:52.032573 systemd[1]: Started cri-containerd-25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be.scope - libcontainer container 25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be. Jan 30 13:59:52.119436 containerd[1471]: time="2025-01-30T13:59:52.119037968Z" level=info msg="StartContainer for \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\" returns successfully" Jan 30 13:59:52.500598 containerd[1471]: time="2025-01-30T13:59:52.500536670Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:52.506193 containerd[1471]: time="2025-01-30T13:59:52.506076276Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:59:52.506834 kubelet[2591]: I0130 13:59:52.506794 2591 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:59:52.526658 containerd[1471]: time="2025-01-30T13:59:52.525309453Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:59:52.531589 containerd[1471]: time="2025-01-30T13:59:52.531432469Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.78257336s" Jan 30 13:59:52.531589 containerd[1471]: time="2025-01-30T13:59:52.531482709Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:59:52.539987 containerd[1471]: time="2025-01-30T13:59:52.539707424Z" level=info msg="CreateContainer within sandbox \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:59:52.571099 kubelet[2591]: I0130 13:59:52.570565 2591 topology_manager.go:215] "Topology Admit Handler" podUID="d258499c-cc46-419a-918a-8af31ed1224a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ht5vs" Jan 30 13:59:52.573054 containerd[1471]: time="2025-01-30T13:59:52.572238312Z" level=info msg="CreateContainer within sandbox \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\"" Jan 30 13:59:52.574655 
kubelet[2591]: I0130 13:59:52.574599 2591 topology_manager.go:215] "Topology Admit Handler" podUID="c2aa5903-9664-47ed-a915-c33be7777ddf" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lf48f" Jan 30 13:59:52.582681 containerd[1471]: time="2025-01-30T13:59:52.582297456Z" level=info msg="StartContainer for \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\"" Jan 30 13:59:52.604041 systemd[1]: Created slice kubepods-burstable-podc2aa5903_9664_47ed_a915_c33be7777ddf.slice - libcontainer container kubepods-burstable-podc2aa5903_9664_47ed_a915_c33be7777ddf.slice. Jan 30 13:59:52.624411 systemd[1]: Created slice kubepods-burstable-podd258499c_cc46_419a_918a_8af31ed1224a.slice - libcontainer container kubepods-burstable-podd258499c_cc46_419a_918a_8af31ed1224a.slice. Jan 30 13:59:52.688405 systemd[1]: Started cri-containerd-7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674.scope - libcontainer container 7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674. Jan 30 13:59:52.712011 kubelet[2591]: I0130 13:59:52.711500 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvm2t\" (UniqueName: \"kubernetes.io/projected/d258499c-cc46-419a-918a-8af31ed1224a-kube-api-access-rvm2t\") pod \"coredns-7db6d8ff4d-ht5vs\" (UID: \"d258499c-cc46-419a-918a-8af31ed1224a\") " pod="kube-system/coredns-7db6d8ff4d-ht5vs" Jan 30 13:59:52.712011 kubelet[2591]: I0130 13:59:52.711571 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d258499c-cc46-419a-918a-8af31ed1224a-config-volume\") pod \"coredns-7db6d8ff4d-ht5vs\" (UID: \"d258499c-cc46-419a-918a-8af31ed1224a\") " pod="kube-system/coredns-7db6d8ff4d-ht5vs" Jan 30 13:59:52.712011 kubelet[2591]: I0130 13:59:52.711606 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slt9c\" (UniqueName: \"kubernetes.io/projected/c2aa5903-9664-47ed-a915-c33be7777ddf-kube-api-access-slt9c\") pod \"coredns-7db6d8ff4d-lf48f\" (UID: \"c2aa5903-9664-47ed-a915-c33be7777ddf\") " pod="kube-system/coredns-7db6d8ff4d-lf48f" Jan 30 13:59:52.712011 kubelet[2591]: I0130 13:59:52.711635 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c2aa5903-9664-47ed-a915-c33be7777ddf-config-volume\") pod \"coredns-7db6d8ff4d-lf48f\" (UID: \"c2aa5903-9664-47ed-a915-c33be7777ddf\") " pod="kube-system/coredns-7db6d8ff4d-lf48f" Jan 30 13:59:52.808062 containerd[1471]: time="2025-01-30T13:59:52.806621439Z" level=info msg="StartContainer for \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\" returns successfully" Jan 30 13:59:52.883001 kubelet[2591]: E0130 13:59:52.882742 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:52.896031 kubelet[2591]: E0130 13:59:52.895924 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:52.917306 kubelet[2591]: E0130 13:59:52.915235 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:52.917453 containerd[1471]: time="2025-01-30T13:59:52.916409550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lf48f,Uid:c2aa5903-9664-47ed-a915-c33be7777ddf,Namespace:kube-system,Attempt:0,}" Jan 30 13:59:52.920690 kubelet[2591]: I0130 13:59:52.920132 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-b59zn" podStartSLOduration=1.14635894 podStartE2EDuration="13.92010732s" podCreationTimestamp="2025-01-30 13:59:39 +0000 UTC" firstStartedPulling="2025-01-30 13:59:39.762419229 +0000 UTC m=+15.249670132" lastFinishedPulling="2025-01-30 13:59:52.536167589 +0000 UTC m=+28.023418512" observedRunningTime="2025-01-30 13:59:52.918802305 +0000 UTC m=+28.406053232" watchObservedRunningTime="2025-01-30 13:59:52.92010732 +0000 UTC m=+28.407358239" Jan 30 13:59:52.954304 kubelet[2591]: E0130 13:59:52.952666 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:52.958026 containerd[1471]: time="2025-01-30T13:59:52.955227683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ht5vs,Uid:d258499c-cc46-419a-918a-8af31ed1224a,Namespace:kube-system,Attempt:0,}" Jan 30 13:59:53.062394 kubelet[2591]: I0130 13:59:53.062237 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m7rgr" podStartSLOduration=6.863165399 podStartE2EDuration="15.062212956s" podCreationTimestamp="2025-01-30 13:59:38 +0000 UTC" firstStartedPulling="2025-01-30 13:59:39.542358687 +0000 UTC m=+15.029609602" lastFinishedPulling="2025-01-30 13:59:47.741406259 +0000 UTC m=+23.228657159" observedRunningTime="2025-01-30 13:59:53.060275903 +0000 UTC m=+28.547526825" watchObservedRunningTime="2025-01-30 13:59:53.062212956 +0000 UTC m=+28.549463883" Jan 30 13:59:53.898018 kubelet[2591]: E0130 13:59:53.897678 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:53.902802 kubelet[2591]: E0130 13:59:53.902650 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:54.901816 kubelet[2591]: E0130 13:59:54.901765 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:56.961719 systemd-networkd[1370]: cilium_host: Link UP Jan 30 13:59:56.963592 systemd-networkd[1370]: cilium_net: Link UP Jan 30 13:59:56.964863 systemd-networkd[1370]: cilium_net: Gained carrier Jan 30 13:59:56.965209 systemd-networkd[1370]: cilium_host: Gained carrier Jan 30 13:59:57.170121 systemd-networkd[1370]: cilium_vxlan: Link UP Jan 30 13:59:57.170137 systemd-networkd[1370]: cilium_vxlan: Gained carrier Jan 30 13:59:57.948234 systemd-networkd[1370]: cilium_net: Gained IPv6LL Jan 30 13:59:57.948659 systemd-networkd[1370]: cilium_host: Gained IPv6LL Jan 30 13:59:58.065484 kernel: NET: Registered PF_ALG protocol family Jan 30 13:59:58.460252 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL Jan 30 13:59:59.177030 systemd-networkd[1370]: lxc_health: Link UP Jan 30 13:59:59.183135 
systemd-networkd[1370]: lxc_health: Gained carrier Jan 30 13:59:59.331575 kubelet[2591]: E0130 13:59:59.331512 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 13:59:59.627652 systemd-networkd[1370]: lxcdd95789b17af: Link UP Jan 30 13:59:59.631501 kernel: eth0: renamed from tmp843c5 Jan 30 13:59:59.643025 systemd-networkd[1370]: lxcdd95789b17af: Gained carrier Jan 30 13:59:59.664317 systemd-networkd[1370]: lxc2e8254e597bf: Link UP Jan 30 13:59:59.669593 kernel: eth0: renamed from tmpd1129 Jan 30 13:59:59.682144 systemd-networkd[1370]: lxc2e8254e597bf: Gained carrier Jan 30 14:00:00.656716 kubelet[2591]: I0130 14:00:00.656609 2591 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:00:00.661773 kubelet[2591]: E0130 14:00:00.660756 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:00.925616 kubelet[2591]: E0130 14:00:00.925116 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:01.150149 systemd-networkd[1370]: lxc_health: Gained IPv6LL Jan 30 14:00:01.618817 systemd-networkd[1370]: lxcdd95789b17af: Gained IPv6LL Jan 30 14:00:01.670546 systemd-networkd[1370]: lxc2e8254e597bf: Gained IPv6LL Jan 30 14:00:04.786312 systemd[1]: Started sshd@10-143.198.62.166:22-147.75.109.163:53106.service - OpenSSH per-connection server daemon (147.75.109.163:53106). Jan 30 14:00:05.152327 sshd[3800]: Accepted publickey for core from 147.75.109.163 port 53106 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:05.157727 sshd[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:05.175182 systemd-logind[1444]: New session 10 of user core. Jan 30 14:00:05.190158 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:00:06.963039 sshd[3800]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:06.990341 systemd[1]: sshd@10-143.198.62.166:22-147.75.109.163:53106.service: Deactivated successfully. Jan 30 14:00:06.999698 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:00:07.005028 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:00:07.011133 systemd-logind[1444]: Removed session 10. Jan 30 14:00:09.976175 containerd[1471]: time="2025-01-30T14:00:09.973328539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:00:09.976175 containerd[1471]: time="2025-01-30T14:00:09.973670955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:00:09.976175 containerd[1471]: time="2025-01-30T14:00:09.973745367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:09.976175 containerd[1471]: time="2025-01-30T14:00:09.974564454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:10.023538 containerd[1471]: time="2025-01-30T14:00:10.019883618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:00:10.023538 containerd[1471]: time="2025-01-30T14:00:10.020017736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:00:10.023538 containerd[1471]: time="2025-01-30T14:00:10.020038046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:10.023538 containerd[1471]: time="2025-01-30T14:00:10.020178677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:00:10.074328 systemd[1]: Started cri-containerd-d11297399f034905b92cffed65ead1fa6a49af437443857eb03b9dbe2ee50b37.scope - libcontainer container d11297399f034905b92cffed65ead1fa6a49af437443857eb03b9dbe2ee50b37. Jan 30 14:00:10.103360 systemd[1]: Started cri-containerd-843c508bf412cf51212b755cc6c9e896562fdc187b656891994f7b9deb9c0c97.scope - libcontainer container 843c508bf412cf51212b755cc6c9e896562fdc187b656891994f7b9deb9c0c97. Jan 30 14:00:10.238718 containerd[1471]: time="2025-01-30T14:00:10.238455480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lf48f,Uid:c2aa5903-9664-47ed-a915-c33be7777ddf,Namespace:kube-system,Attempt:0,} returns sandbox id \"843c508bf412cf51212b755cc6c9e896562fdc187b656891994f7b9deb9c0c97\"" Jan 30 14:00:10.245397 kubelet[2591]: E0130 14:00:10.242112 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:10.260944 containerd[1471]: time="2025-01-30T14:00:10.260690949Z" level=info msg="CreateContainer within sandbox \"843c508bf412cf51212b755cc6c9e896562fdc187b656891994f7b9deb9c0c97\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:00:10.349070 containerd[1471]: time="2025-01-30T14:00:10.348998759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ht5vs,Uid:d258499c-cc46-419a-918a-8af31ed1224a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d11297399f034905b92cffed65ead1fa6a49af437443857eb03b9dbe2ee50b37\"" Jan 30 14:00:10.352834 kubelet[2591]: E0130 14:00:10.352762 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:10.370630 containerd[1471]: time="2025-01-30T14:00:10.370392242Z" level=info msg="CreateContainer within sandbox \"d11297399f034905b92cffed65ead1fa6a49af437443857eb03b9dbe2ee50b37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:00:10.426520 containerd[1471]: time="2025-01-30T14:00:10.424816525Z" level=info msg="CreateContainer within sandbox \"843c508bf412cf51212b755cc6c9e896562fdc187b656891994f7b9deb9c0c97\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c707d46b01fe3523a5347b6561c0bd05fe576ca12f0a7816f2379e950ebdf4e9\"" Jan 30 14:00:10.430251 containerd[1471]: time="2025-01-30T14:00:10.429555195Z" level=info msg="StartContainer for \"c707d46b01fe3523a5347b6561c0bd05fe576ca12f0a7816f2379e950ebdf4e9\"" Jan 30 14:00:10.483532 containerd[1471]: 
time="2025-01-30T14:00:10.482955638Z" level=info msg="CreateContainer within sandbox \"d11297399f034905b92cffed65ead1fa6a49af437443857eb03b9dbe2ee50b37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f237ea60c2baf674f1df4dd84dd54f30b90293680a8ea0bce8687948a85c4e22\"" Jan 30 14:00:10.484823 containerd[1471]: time="2025-01-30T14:00:10.484321366Z" level=info msg="StartContainer for \"f237ea60c2baf674f1df4dd84dd54f30b90293680a8ea0bce8687948a85c4e22\"" Jan 30 14:00:10.512591 systemd[1]: Started cri-containerd-c707d46b01fe3523a5347b6561c0bd05fe576ca12f0a7816f2379e950ebdf4e9.scope - libcontainer container c707d46b01fe3523a5347b6561c0bd05fe576ca12f0a7816f2379e950ebdf4e9. Jan 30 14:00:10.580280 systemd[1]: Started cri-containerd-f237ea60c2baf674f1df4dd84dd54f30b90293680a8ea0bce8687948a85c4e22.scope - libcontainer container f237ea60c2baf674f1df4dd84dd54f30b90293680a8ea0bce8687948a85c4e22. Jan 30 14:00:10.668831 containerd[1471]: time="2025-01-30T14:00:10.668763443Z" level=info msg="StartContainer for \"c707d46b01fe3523a5347b6561c0bd05fe576ca12f0a7816f2379e950ebdf4e9\" returns successfully" Jan 30 14:00:10.670845 containerd[1471]: time="2025-01-30T14:00:10.670274111Z" level=info msg="StartContainer for \"f237ea60c2baf674f1df4dd84dd54f30b90293680a8ea0bce8687948a85c4e22\" returns successfully" Jan 30 14:00:11.003704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1686088652.mount: Deactivated successfully. Jan 30 14:00:11.018126 kubelet[2591]: E0130 14:00:11.016766 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:11.027244 kubelet[2591]: E0130 14:00:11.027182 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:11.055033 kubelet[2591]: I0130 14:00:11.054380 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ht5vs" podStartSLOduration=32.054350678 podStartE2EDuration="32.054350678s" podCreationTimestamp="2025-01-30 13:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:00:11.050408875 +0000 UTC m=+46.537659801" watchObservedRunningTime="2025-01-30 14:00:11.054350678 +0000 UTC m=+46.541601611" Jan 30 14:00:11.990534 systemd[1]: Started sshd@11-143.198.62.166:22-147.75.109.163:33958.service - OpenSSH per-connection server daemon (147.75.109.163:33958). 
Jan 30 14:00:12.030322 kubelet[2591]: E0130 14:00:12.030255 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:12.035515 kubelet[2591]: E0130 14:00:12.034243 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:12.074982 kubelet[2591]: I0130 14:00:12.072620 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lf48f" podStartSLOduration=33.072597266 podStartE2EDuration="33.072597266s" podCreationTimestamp="2025-01-30 13:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:00:11.09492623 +0000 UTC m=+46.582177154" watchObservedRunningTime="2025-01-30 14:00:12.072597266 +0000 UTC m=+47.559848257" Jan 30 14:00:12.150460 sshd[3990]: Accepted publickey for core from 147.75.109.163 port 33958 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:12.158307 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:12.177453 systemd-logind[1444]: New session 11 of user core. Jan 30 14:00:12.185627 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:00:12.521293 sshd[3990]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:12.546086 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:00:12.546892 systemd[1]: sshd@11-143.198.62.166:22-147.75.109.163:33958.service: Deactivated successfully. Jan 30 14:00:12.552079 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:00:12.564655 systemd-logind[1444]: Removed session 11. Jan 30 14:00:13.032567 kubelet[2591]: E0130 14:00:13.032484 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:13.034248 kubelet[2591]: E0130 14:00:13.033686 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:17.544505 systemd[1]: Started sshd@12-143.198.62.166:22-147.75.109.163:52442.service - OpenSSH per-connection server daemon (147.75.109.163:52442). Jan 30 14:00:17.597989 sshd[4013]: Accepted publickey for core from 147.75.109.163 port 52442 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:17.599810 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:17.607284 systemd-logind[1444]: New session 12 of user core. Jan 30 14:00:17.615765 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:00:17.842026 sshd[4013]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:17.852375 systemd[1]: sshd@12-143.198.62.166:22-147.75.109.163:52442.service: Deactivated successfully. Jan 30 14:00:17.856477 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:00:17.858287 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:00:17.860489 systemd-logind[1444]: Removed session 12. 
Jan 30 14:00:22.862532 systemd[1]: Started sshd@13-143.198.62.166:22-147.75.109.163:52458.service - OpenSSH per-connection server daemon (147.75.109.163:52458). Jan 30 14:00:22.928568 sshd[4026]: Accepted publickey for core from 147.75.109.163 port 52458 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:22.932148 sshd[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:22.943046 systemd-logind[1444]: New session 13 of user core. Jan 30 14:00:22.947350 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:00:23.138560 sshd[4026]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:23.150476 systemd[1]: sshd@13-143.198.62.166:22-147.75.109.163:52458.service: Deactivated successfully. Jan 30 14:00:23.155748 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:00:23.159779 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:00:23.168845 systemd[1]: Started sshd@14-143.198.62.166:22-147.75.109.163:52474.service - OpenSSH per-connection server daemon (147.75.109.163:52474). Jan 30 14:00:23.172272 systemd-logind[1444]: Removed session 13. Jan 30 14:00:23.235055 sshd[4040]: Accepted publickey for core from 147.75.109.163 port 52474 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:23.236097 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:23.245390 systemd-logind[1444]: New session 14 of user core. Jan 30 14:00:23.251316 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:00:23.501893 sshd[4040]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:23.516850 systemd[1]: sshd@14-143.198.62.166:22-147.75.109.163:52474.service: Deactivated successfully. Jan 30 14:00:23.522593 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:00:23.529715 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:00:23.540919 systemd[1]: Started sshd@15-143.198.62.166:22-147.75.109.163:52488.service - OpenSSH per-connection server daemon (147.75.109.163:52488). Jan 30 14:00:23.545166 systemd-logind[1444]: Removed session 14. Jan 30 14:00:23.612869 sshd[4050]: Accepted publickey for core from 147.75.109.163 port 52488 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:23.615568 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:23.624490 systemd-logind[1444]: New session 15 of user core. Jan 30 14:00:23.630221 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:00:23.824983 sshd[4050]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:23.831719 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:00:23.832463 systemd[1]: sshd@15-143.198.62.166:22-147.75.109.163:52488.service: Deactivated successfully. Jan 30 14:00:23.836542 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:00:23.838718 systemd-logind[1444]: Removed session 15. Jan 30 14:00:28.845588 systemd[1]: Started sshd@16-143.198.62.166:22-147.75.109.163:49356.service - OpenSSH per-connection server daemon (147.75.109.163:49356). 
Jan 30 14:00:28.908023 sshd[4065]: Accepted publickey for core from 147.75.109.163 port 49356 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:28.912017 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:28.921210 systemd-logind[1444]: New session 16 of user core. Jan 30 14:00:28.926927 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:00:29.120885 sshd[4065]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:29.129741 systemd[1]: sshd@16-143.198.62.166:22-147.75.109.163:49356.service: Deactivated successfully. Jan 30 14:00:29.136120 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:00:29.141763 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:00:29.144154 systemd-logind[1444]: Removed session 16. Jan 30 14:00:34.147472 systemd[1]: Started sshd@17-143.198.62.166:22-147.75.109.163:49358.service - OpenSSH per-connection server daemon (147.75.109.163:49358). Jan 30 14:00:34.214905 sshd[4078]: Accepted publickey for core from 147.75.109.163 port 49358 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:34.218307 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:34.227329 systemd-logind[1444]: New session 17 of user core. Jan 30 14:00:34.234577 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:00:34.446718 sshd[4078]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:34.454287 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:00:34.454604 systemd[1]: sshd@17-143.198.62.166:22-147.75.109.163:49358.service: Deactivated successfully. Jan 30 14:00:34.458443 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:00:34.461389 systemd-logind[1444]: Removed session 17. Jan 30 14:00:36.710140 kubelet[2591]: E0130 14:00:36.709905 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:37.710006 kubelet[2591]: E0130 14:00:37.709641 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:39.473347 systemd[1]: Started sshd@18-143.198.62.166:22-147.75.109.163:56604.service - OpenSSH per-connection server daemon (147.75.109.163:56604). Jan 30 14:00:39.529449 sshd[4091]: Accepted publickey for core from 147.75.109.163 port 56604 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:39.530613 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:39.542153 systemd-logind[1444]: New session 18 of user core. Jan 30 14:00:39.549341 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:00:39.709247 sshd[4091]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:39.722874 systemd[1]: sshd@18-143.198.62.166:22-147.75.109.163:56604.service: Deactivated successfully. Jan 30 14:00:39.726716 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:00:39.731547 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Jan 30 14:00:39.737505 systemd[1]: Started sshd@19-143.198.62.166:22-147.75.109.163:56606.service - OpenSSH per-connection server daemon (147.75.109.163:56606). 
Jan 30 14:00:39.739828 systemd-logind[1444]: Removed session 18. Jan 30 14:00:39.795004 sshd[4103]: Accepted publickey for core from 147.75.109.163 port 56606 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:39.797691 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:39.806885 systemd-logind[1444]: New session 19 of user core. Jan 30 14:00:39.812359 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 14:00:40.427065 sshd[4103]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:40.444428 systemd[1]: sshd@19-143.198.62.166:22-147.75.109.163:56606.service: Deactivated successfully. Jan 30 14:00:40.447934 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:00:40.451593 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:00:40.457481 systemd[1]: Started sshd@20-143.198.62.166:22-147.75.109.163:56608.service - OpenSSH per-connection server daemon (147.75.109.163:56608). Jan 30 14:00:40.460095 systemd-logind[1444]: Removed session 19. Jan 30 14:00:40.555307 sshd[4117]: Accepted publickey for core from 147.75.109.163 port 56608 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:40.556340 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:40.564131 systemd-logind[1444]: New session 20 of user core. Jan 30 14:00:40.574274 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 14:00:43.038687 sshd[4117]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:43.060802 systemd[1]: sshd@20-143.198.62.166:22-147.75.109.163:56608.service: Deactivated successfully. Jan 30 14:00:43.068922 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 14:00:43.073377 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:00:43.082434 systemd[1]: Started sshd@21-143.198.62.166:22-147.75.109.163:56614.service - OpenSSH per-connection server daemon (147.75.109.163:56614). Jan 30 14:00:43.088572 systemd-logind[1444]: Removed session 20. Jan 30 14:00:43.156896 sshd[4134]: Accepted publickey for core from 147.75.109.163 port 56614 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:43.159641 sshd[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:43.167366 systemd-logind[1444]: New session 21 of user core. Jan 30 14:00:43.176307 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 14:00:43.792984 sshd[4134]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:43.807493 systemd[1]: sshd@21-143.198.62.166:22-147.75.109.163:56614.service: Deactivated successfully. Jan 30 14:00:43.812692 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 14:00:43.816920 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Jan 30 14:00:43.825682 systemd[1]: Started sshd@22-143.198.62.166:22-147.75.109.163:56626.service - OpenSSH per-connection server daemon (147.75.109.163:56626). Jan 30 14:00:43.831564 systemd-logind[1444]: Removed session 21. Jan 30 14:00:43.880058 sshd[4145]: Accepted publickey for core from 147.75.109.163 port 56626 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:43.881803 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:43.890156 systemd-logind[1444]: New session 22 of user core. 
Jan 30 14:00:43.898253 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 14:00:44.087331 sshd[4145]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:44.095601 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Jan 30 14:00:44.097186 systemd[1]: sshd@22-143.198.62.166:22-147.75.109.163:56626.service: Deactivated successfully. Jan 30 14:00:44.102308 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 14:00:44.104472 systemd-logind[1444]: Removed session 22. Jan 30 14:00:49.110415 systemd[1]: Started sshd@23-143.198.62.166:22-147.75.109.163:47960.service - OpenSSH per-connection server daemon (147.75.109.163:47960). Jan 30 14:00:49.158905 sshd[4161]: Accepted publickey for core from 147.75.109.163 port 47960 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:49.162241 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:49.171386 systemd-logind[1444]: New session 23 of user core. Jan 30 14:00:49.176323 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 14:00:49.343189 sshd[4161]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:49.347994 systemd[1]: sshd@23-143.198.62.166:22-147.75.109.163:47960.service: Deactivated successfully. Jan 30 14:00:49.351671 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 14:00:49.355439 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit. Jan 30 14:00:49.358249 systemd-logind[1444]: Removed session 23. Jan 30 14:00:49.709436 kubelet[2591]: E0130 14:00:49.709265 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:54.364546 systemd[1]: Started sshd@24-143.198.62.166:22-147.75.109.163:47962.service - OpenSSH per-connection server daemon (147.75.109.163:47962). Jan 30 14:00:54.413026 sshd[4173]: Accepted publickey for core from 147.75.109.163 port 47962 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:54.417035 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:54.424375 systemd-logind[1444]: New session 24 of user core. Jan 30 14:00:54.433274 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 14:00:54.615109 sshd[4173]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:54.624717 systemd[1]: sshd@24-143.198.62.166:22-147.75.109.163:47962.service: Deactivated successfully. Jan 30 14:00:54.628159 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 14:00:54.629931 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit. Jan 30 14:00:54.632401 systemd-logind[1444]: Removed session 24. Jan 30 14:00:55.709204 kubelet[2591]: E0130 14:00:55.709004 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Jan 30 14:00:59.637535 systemd[1]: Started sshd@25-143.198.62.166:22-147.75.109.163:41436.service - OpenSSH per-connection server daemon (147.75.109.163:41436). 
Jan 30 14:00:59.696000 sshd[4186]: Accepted publickey for core from 147.75.109.163 port 41436 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:00:59.698587 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:00:59.706178 systemd-logind[1444]: New session 25 of user core. Jan 30 14:00:59.712331 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 14:00:59.890588 sshd[4186]: pam_unix(sshd:session): session closed for user core Jan 30 14:00:59.899232 systemd[1]: sshd@25-143.198.62.166:22-147.75.109.163:41436.service: Deactivated successfully. Jan 30 14:00:59.903480 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 14:00:59.905009 systemd-logind[1444]: Session 25 logged out. Waiting for processes to exit. Jan 30 14:00:59.907021 systemd-logind[1444]: Removed session 25. Jan 30 14:01:04.915557 systemd[1]: Started sshd@26-143.198.62.166:22-147.75.109.163:41442.service - OpenSSH per-connection server daemon (147.75.109.163:41442). Jan 30 14:01:04.967075 sshd[4199]: Accepted publickey for core from 147.75.109.163 port 41442 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:04.973976 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:04.988722 systemd-logind[1444]: New session 26 of user core. Jan 30 14:01:04.995408 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 14:01:05.194437 sshd[4199]: pam_unix(sshd:session): session closed for user core Jan 30 14:01:05.209697 systemd[1]: sshd@26-143.198.62.166:22-147.75.109.163:41442.service: Deactivated successfully. Jan 30 14:01:05.213875 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 14:01:05.217249 systemd-logind[1444]: Session 26 logged out. Waiting for processes to exit. Jan 30 14:01:05.225551 systemd[1]: Started sshd@27-143.198.62.166:22-147.75.109.163:41444.service - OpenSSH per-connection server daemon (147.75.109.163:41444). Jan 30 14:01:05.228258 systemd-logind[1444]: Removed session 26. Jan 30 14:01:05.333393 sshd[4211]: Accepted publickey for core from 147.75.109.163 port 41444 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ Jan 30 14:01:05.336088 sshd[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:01:05.346750 systemd-logind[1444]: New session 27 of user core. Jan 30 14:01:05.352344 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 30 14:01:07.187282 containerd[1471]: time="2025-01-30T14:01:07.187203533Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:01:07.320127 containerd[1471]: time="2025-01-30T14:01:07.320062031Z" level=info msg="StopContainer for \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\" with timeout 30 (s)" Jan 30 14:01:07.320343 containerd[1471]: time="2025-01-30T14:01:07.320229150Z" level=info msg="StopContainer for \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\" with timeout 2 (s)" Jan 30 14:01:07.320968 containerd[1471]: time="2025-01-30T14:01:07.320693049Z" level=info msg="Stop container \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\" with signal terminated" Jan 30 14:01:07.324101 containerd[1471]: time="2025-01-30T14:01:07.324045752Z" level=info msg="Stop container \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\" with signal terminated" Jan 30 14:01:07.336259 systemd-networkd[1370]: lxc_health: Link DOWN Jan 30 14:01:07.336280 systemd-networkd[1370]: lxc_health: Lost carrier Jan 30 14:01:07.356957 systemd[1]: cri-containerd-7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674.scope: Deactivated successfully. Jan 30 14:01:07.376694 systemd[1]: cri-containerd-25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be.scope: Deactivated successfully. Jan 30 14:01:07.378441 systemd[1]: cri-containerd-25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be.scope: Consumed 12.354s CPU time. Jan 30 14:01:07.436645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674-rootfs.mount: Deactivated successfully. Jan 30 14:01:07.450340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be-rootfs.mount: Deactivated successfully. 
Jan 30 14:01:07.454975 containerd[1471]: time="2025-01-30T14:01:07.454132303Z" level=info msg="shim disconnected" id=7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674 namespace=k8s.io Jan 30 14:01:07.454975 containerd[1471]: time="2025-01-30T14:01:07.454244430Z" level=warning msg="cleaning up after shim disconnected" id=7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674 namespace=k8s.io Jan 30 14:01:07.454975 containerd[1471]: time="2025-01-30T14:01:07.454257866Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:07.466879 containerd[1471]: time="2025-01-30T14:01:07.466232407Z" level=info msg="shim disconnected" id=25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be namespace=k8s.io Jan 30 14:01:07.466879 containerd[1471]: time="2025-01-30T14:01:07.466315313Z" level=warning msg="cleaning up after shim disconnected" id=25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be namespace=k8s.io Jan 30 14:01:07.466879 containerd[1471]: time="2025-01-30T14:01:07.466328178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:07.506527 containerd[1471]: time="2025-01-30T14:01:07.506447215Z" level=info msg="StopContainer for \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\" returns successfully" Jan 30 14:01:07.507850 containerd[1471]: time="2025-01-30T14:01:07.507804239Z" level=info msg="StopPodSandbox for \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\"" Jan 30 14:01:07.508057 containerd[1471]: time="2025-01-30T14:01:07.507866725Z" level=info msg="Container to stop \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:07.510364 containerd[1471]: time="2025-01-30T14:01:07.509572126Z" level=info msg="StopContainer for \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\" returns successfully" Jan 30 14:01:07.514127 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713-shm.mount: Deactivated successfully. 
Jan 30 14:01:07.516522 containerd[1471]: time="2025-01-30T14:01:07.515346072Z" level=info msg="StopPodSandbox for \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\"" Jan 30 14:01:07.516522 containerd[1471]: time="2025-01-30T14:01:07.515431355Z" level=info msg="Container to stop \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:07.516522 containerd[1471]: time="2025-01-30T14:01:07.515451813Z" level=info msg="Container to stop \"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:07.516522 containerd[1471]: time="2025-01-30T14:01:07.515471642Z" level=info msg="Container to stop \"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:07.516522 containerd[1471]: time="2025-01-30T14:01:07.515485238Z" level=info msg="Container to stop \"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:07.516522 containerd[1471]: time="2025-01-30T14:01:07.515498757Z" level=info msg="Container to stop \"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:01:07.528322 systemd[1]: cri-containerd-36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713.scope: Deactivated successfully. Jan 30 14:01:07.530580 systemd[1]: cri-containerd-3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de.scope: Deactivated successfully. Jan 30 14:01:07.582393 containerd[1471]: time="2025-01-30T14:01:07.582117854Z" level=info msg="shim disconnected" id=3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de namespace=k8s.io Jan 30 14:01:07.582393 containerd[1471]: time="2025-01-30T14:01:07.582193415Z" level=warning msg="cleaning up after shim disconnected" id=3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de namespace=k8s.io Jan 30 14:01:07.582393 containerd[1471]: time="2025-01-30T14:01:07.582207342Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:07.582804 containerd[1471]: time="2025-01-30T14:01:07.582548749Z" level=info msg="shim disconnected" id=36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713 namespace=k8s.io Jan 30 14:01:07.582804 containerd[1471]: time="2025-01-30T14:01:07.582596973Z" level=warning msg="cleaning up after shim disconnected" id=36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713 namespace=k8s.io Jan 30 14:01:07.582804 containerd[1471]: time="2025-01-30T14:01:07.582609095Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:01:07.627114 containerd[1471]: time="2025-01-30T14:01:07.627005867Z" level=info msg="TearDown network for sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" successfully" Jan 30 14:01:07.627114 containerd[1471]: time="2025-01-30T14:01:07.627059670Z" level=info msg="StopPodSandbox for \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" returns successfully" Jan 30 14:01:07.630173 containerd[1471]: time="2025-01-30T14:01:07.630101844Z" level=info msg="TearDown network for sandbox \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\" successfully" Jan 30 14:01:07.630173 containerd[1471]: 
time="2025-01-30T14:01:07.630168350Z" level=info msg="StopPodSandbox for \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\" returns successfully" Jan 30 14:01:07.792652 kubelet[2591]: I0130 14:01:07.791756 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-host-proc-sys-kernel\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.792652 kubelet[2591]: I0130 14:01:07.791865 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-run\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.792652 kubelet[2591]: I0130 14:01:07.791892 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-host-proc-sys-net\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.792652 kubelet[2591]: I0130 14:01:07.791909 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.792652 kubelet[2591]: I0130 14:01:07.791963 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9bf71e6-f1ff-4e52-85ce-5684a6ee6828-cilium-config-path\") pod \"d9bf71e6-f1ff-4e52-85ce-5684a6ee6828\" (UID: \"d9bf71e6-f1ff-4e52-85ce-5684a6ee6828\") " Jan 30 14:01:07.792652 kubelet[2591]: I0130 14:01:07.791992 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-bpf-maps\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.795242 kubelet[2591]: I0130 14:01:07.792012 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.795242 kubelet[2591]: I0130 14:01:07.792016 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cni-path\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.795242 kubelet[2591]: I0130 14:01:07.792040 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cni-path" (OuterVolumeSpecName: "cni-path") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.795242 kubelet[2591]: I0130 14:01:07.792103 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msvgk\" (UniqueName: \"kubernetes.io/projected/7f0f1169-53d9-4dea-9562-11b25b7a019d-kube-api-access-msvgk\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.795242 kubelet[2591]: I0130 14:01:07.792132 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-etc-cni-netd\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.795242 kubelet[2591]: I0130 14:01:07.792169 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-lib-modules\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.795671 kubelet[2591]: I0130 14:01:07.792207 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f0f1169-53d9-4dea-9562-11b25b7a019d-hubble-tls\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.795671 kubelet[2591]: I0130 14:01:07.792231 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-hostproc\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.795671 kubelet[2591]: I0130 14:01:07.792255 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-xtables-lock\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.795671 kubelet[2591]: I0130 14:01:07.792286 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-config-path\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.795671 kubelet[2591]: I0130 14:01:07.792309 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-cgroup\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: \"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.795671 kubelet[2591]: I0130 14:01:07.792336 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lj5b\" (UniqueName: \"kubernetes.io/projected/d9bf71e6-f1ff-4e52-85ce-5684a6ee6828-kube-api-access-7lj5b\") pod \"d9bf71e6-f1ff-4e52-85ce-5684a6ee6828\" (UID: \"d9bf71e6-f1ff-4e52-85ce-5684a6ee6828\") " Jan 30 14:01:07.797439 kubelet[2591]: I0130 14:01:07.792372 2591 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f0f1169-53d9-4dea-9562-11b25b7a019d-clustermesh-secrets\") pod \"7f0f1169-53d9-4dea-9562-11b25b7a019d\" (UID: 
\"7f0f1169-53d9-4dea-9562-11b25b7a019d\") " Jan 30 14:01:07.797439 kubelet[2591]: I0130 14:01:07.796236 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9bf71e6-f1ff-4e52-85ce-5684a6ee6828-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d9bf71e6-f1ff-4e52-85ce-5684a6ee6828" (UID: "d9bf71e6-f1ff-4e52-85ce-5684a6ee6828"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:01:07.797439 kubelet[2591]: I0130 14:01:07.796344 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.800214 kubelet[2591]: I0130 14:01:07.800149 2591 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-host-proc-sys-kernel\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.800578 kubelet[2591]: I0130 14:01:07.800544 2591 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-host-proc-sys-net\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.806435 kubelet[2591]: I0130 14:01:07.805421 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f0f1169-53d9-4dea-9562-11b25b7a019d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:01:07.806435 kubelet[2591]: I0130 14:01:07.806192 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-hostproc" (OuterVolumeSpecName: "hostproc") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.806435 kubelet[2591]: I0130 14:01:07.806235 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.809837 kubelet[2591]: I0130 14:01:07.809756 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:01:07.810082 kubelet[2591]: I0130 14:01:07.809880 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.813484 kubelet[2591]: I0130 14:01:07.811262 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0f1169-53d9-4dea-9562-11b25b7a019d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:01:07.813484 kubelet[2591]: I0130 14:01:07.811362 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.814575 kubelet[2591]: I0130 14:01:07.814509 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9bf71e6-f1ff-4e52-85ce-5684a6ee6828-kube-api-access-7lj5b" (OuterVolumeSpecName: "kube-api-access-7lj5b") pod "d9bf71e6-f1ff-4e52-85ce-5684a6ee6828" (UID: "d9bf71e6-f1ff-4e52-85ce-5684a6ee6828"). InnerVolumeSpecName "kube-api-access-7lj5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:01:07.814741 kubelet[2591]: I0130 14:01:07.814667 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.814741 kubelet[2591]: I0130 14:01:07.814707 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:01:07.819401 kubelet[2591]: I0130 14:01:07.819333 2591 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0f1169-53d9-4dea-9562-11b25b7a019d-kube-api-access-msvgk" (OuterVolumeSpecName: "kube-api-access-msvgk") pod "7f0f1169-53d9-4dea-9562-11b25b7a019d" (UID: "7f0f1169-53d9-4dea-9562-11b25b7a019d"). InnerVolumeSpecName "kube-api-access-msvgk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:01:07.901964 kubelet[2591]: I0130 14:01:07.901871 2591 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-bpf-maps\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.902396 kubelet[2591]: I0130 14:01:07.902179 2591 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cni-path\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.902396 kubelet[2591]: I0130 14:01:07.902208 2591 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-etc-cni-netd\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.902396 kubelet[2591]: I0130 14:01:07.902226 2591 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-lib-modules\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.902396 kubelet[2591]: I0130 14:01:07.902240 2591 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7f0f1169-53d9-4dea-9562-11b25b7a019d-hubble-tls\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.902396 kubelet[2591]: I0130 14:01:07.902253 2591 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-hostproc\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.902396 kubelet[2591]: I0130 14:01:07.902268 2591 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-msvgk\" (UniqueName: \"kubernetes.io/projected/7f0f1169-53d9-4dea-9562-11b25b7a019d-kube-api-access-msvgk\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.902396 kubelet[2591]: I0130 14:01:07.902281 2591 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-xtables-lock\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.902396 kubelet[2591]: I0130 14:01:07.902302 2591 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-config-path\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.903101 kubelet[2591]: I0130 14:01:07.902316 2591 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-cgroup\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.903101 kubelet[2591]: I0130 14:01:07.902329 2591 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7lj5b\" (UniqueName: \"kubernetes.io/projected/d9bf71e6-f1ff-4e52-85ce-5684a6ee6828-kube-api-access-7lj5b\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.903101 kubelet[2591]: I0130 14:01:07.902342 2591 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7f0f1169-53d9-4dea-9562-11b25b7a019d-clustermesh-secrets\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.903101 kubelet[2591]: I0130 14:01:07.902354 2591 
reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7f0f1169-53d9-4dea-9562-11b25b7a019d-cilium-run\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:07.903101 kubelet[2591]: I0130 14:01:07.902368 2591 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9bf71e6-f1ff-4e52-85ce-5684a6ee6828-cilium-config-path\") on node \"ci-4081.3.0-f-9c719b1623\" DevicePath \"\"" Jan 30 14:01:08.155371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713-rootfs.mount: Deactivated successfully. Jan 30 14:01:08.155921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de-rootfs.mount: Deactivated successfully. Jan 30 14:01:08.156286 systemd[1]: var-lib-kubelet-pods-d9bf71e6\x2df1ff\x2d4e52\x2d85ce\x2d5684a6ee6828-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7lj5b.mount: Deactivated successfully. Jan 30 14:01:08.156591 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de-shm.mount: Deactivated successfully. Jan 30 14:01:08.156853 systemd[1]: var-lib-kubelet-pods-7f0f1169\x2d53d9\x2d4dea\x2d9562\x2d11b25b7a019d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmsvgk.mount: Deactivated successfully. Jan 30 14:01:08.157274 systemd[1]: var-lib-kubelet-pods-7f0f1169\x2d53d9\x2d4dea\x2d9562\x2d11b25b7a019d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 14:01:08.157611 systemd[1]: var-lib-kubelet-pods-7f0f1169\x2d53d9\x2d4dea\x2d9562\x2d11b25b7a019d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 14:01:08.208186 kubelet[2591]: I0130 14:01:08.208149 2591 scope.go:117] "RemoveContainer" containerID="7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674" Jan 30 14:01:08.214312 containerd[1471]: time="2025-01-30T14:01:08.213759522Z" level=info msg="RemoveContainer for \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\"" Jan 30 14:01:08.222677 systemd[1]: Removed slice kubepods-besteffort-podd9bf71e6_f1ff_4e52_85ce_5684a6ee6828.slice - libcontainer container kubepods-besteffort-podd9bf71e6_f1ff_4e52_85ce_5684a6ee6828.slice. Jan 30 14:01:08.233626 containerd[1471]: time="2025-01-30T14:01:08.232603830Z" level=info msg="RemoveContainer for \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\" returns successfully" Jan 30 14:01:08.238739 kubelet[2591]: I0130 14:01:08.238391 2591 scope.go:117] "RemoveContainer" containerID="7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674" Jan 30 14:01:08.244387 systemd[1]: Removed slice kubepods-burstable-pod7f0f1169_53d9_4dea_9562_11b25b7a019d.slice - libcontainer container kubepods-burstable-pod7f0f1169_53d9_4dea_9562_11b25b7a019d.slice. Jan 30 14:01:08.244685 systemd[1]: kubepods-burstable-pod7f0f1169_53d9_4dea_9562_11b25b7a019d.slice: Consumed 12.469s CPU time. 
Jan 30 14:01:08.263156 containerd[1471]: time="2025-01-30T14:01:08.244267650Z" level=error msg="ContainerStatus for \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\": not found"
Jan 30 14:01:08.263783 kubelet[2591]: E0130 14:01:08.263687 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\": not found" containerID="7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674"
Jan 30 14:01:08.268654 kubelet[2591]: I0130 14:01:08.263768 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674"} err="failed to get container status \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\": rpc error: code = NotFound desc = an error occurred when try to find container \"7963c7bf2fde9460c51923246321d76c977bcec25a28809a1764bb389a6c8674\": not found"
Jan 30 14:01:08.268654 kubelet[2591]: I0130 14:01:08.268035 2591 scope.go:117] "RemoveContainer" containerID="25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be"
Jan 30 14:01:08.275610 containerd[1471]: time="2025-01-30T14:01:08.275564289Z" level=info msg="RemoveContainer for \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\""
Jan 30 14:01:08.287804 containerd[1471]: time="2025-01-30T14:01:08.287753462Z" level=info msg="RemoveContainer for \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\" returns successfully"
Jan 30 14:01:08.288863 kubelet[2591]: I0130 14:01:08.288509 2591 scope.go:117] "RemoveContainer" containerID="892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf"
Jan 30 14:01:08.292902 containerd[1471]: time="2025-01-30T14:01:08.292418971Z" level=info msg="RemoveContainer for \"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf\""
Jan 30 14:01:08.303228 containerd[1471]: time="2025-01-30T14:01:08.302444644Z" level=info msg="RemoveContainer for \"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf\" returns successfully"
Jan 30 14:01:08.304208 kubelet[2591]: I0130 14:01:08.304000 2591 scope.go:117] "RemoveContainer" containerID="5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585"
Jan 30 14:01:08.308840 containerd[1471]: time="2025-01-30T14:01:08.308264255Z" level=info msg="RemoveContainer for \"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585\""
Jan 30 14:01:08.316650 containerd[1471]: time="2025-01-30T14:01:08.316590271Z" level=info msg="RemoveContainer for \"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585\" returns successfully"
Jan 30 14:01:08.317521 kubelet[2591]: I0130 14:01:08.317478 2591 scope.go:117] "RemoveContainer" containerID="eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd"
Jan 30 14:01:08.321471 containerd[1471]: time="2025-01-30T14:01:08.320567936Z" level=info msg="RemoveContainer for \"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd\""
Jan 30 14:01:08.342527 containerd[1471]: time="2025-01-30T14:01:08.342440742Z" level=info msg="RemoveContainer for \"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd\" returns successfully"
Jan 30 14:01:08.343221 kubelet[2591]: I0130 14:01:08.343160 2591 scope.go:117] "RemoveContainer" containerID="9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5"
Jan 30 14:01:08.345486 containerd[1471]: time="2025-01-30T14:01:08.345430704Z" level=info msg="RemoveContainer for \"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5\""
Jan 30 14:01:08.354117 containerd[1471]: time="2025-01-30T14:01:08.354049376Z" level=info msg="RemoveContainer for \"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5\" returns successfully"
Jan 30 14:01:08.354478 kubelet[2591]: I0130 14:01:08.354444 2591 scope.go:117] "RemoveContainer" containerID="25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be"
Jan 30 14:01:08.355107 containerd[1471]: time="2025-01-30T14:01:08.354838806Z" level=error msg="ContainerStatus for \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\": not found"
Jan 30 14:01:08.355232 kubelet[2591]: E0130 14:01:08.355198 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\": not found" containerID="25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be"
Jan 30 14:01:08.355293 kubelet[2591]: I0130 14:01:08.355270 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be"} err="failed to get container status \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\": rpc error: code = NotFound desc = an error occurred when try to find container \"25a05d2fac597423b18f7c4a3b601a386c9d1c686fac9edc95773ef7b72a74be\": not found"
Jan 30 14:01:08.355339 kubelet[2591]: I0130 14:01:08.355306 2591 scope.go:117] "RemoveContainer" containerID="892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf"
Jan 30 14:01:08.355812 containerd[1471]: time="2025-01-30T14:01:08.355765168Z" level=error msg="ContainerStatus for \"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf\": not found"
Jan 30 14:01:08.356001 kubelet[2591]: E0130 14:01:08.355920 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf\": not found" containerID="892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf"
Jan 30 14:01:08.356001 kubelet[2591]: I0130 14:01:08.355988 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf"} err="failed to get container status \"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf\": rpc error: code = NotFound desc = an error occurred when try to find container \"892576c6f41515a0005dad1581c98fa1846f450f80e6e095d0f2d556edd9ceaf\": not found"
Jan 30 14:01:08.356105 kubelet[2591]: I0130 14:01:08.356013 2591 scope.go:117] "RemoveContainer" containerID="5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585"
Jan 30 14:01:08.356464 containerd[1471]: time="2025-01-30T14:01:08.356414921Z" level=error msg="ContainerStatus for \"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585\": not found"
Jan 30 14:01:08.356567 kubelet[2591]: E0130 14:01:08.356538 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585\": not found" containerID="5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585"
Jan 30 14:01:08.356651 kubelet[2591]: I0130 14:01:08.356571 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585"} err="failed to get container status \"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585\": rpc error: code = NotFound desc = an error occurred when try to find container \"5fa0c94756fe762f26c455efb9097fecbe521a7842f7371b93c47cb272464585\": not found"
Jan 30 14:01:08.356651 kubelet[2591]: I0130 14:01:08.356593 2591 scope.go:117] "RemoveContainer" containerID="eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd"
Jan 30 14:01:08.357140 containerd[1471]: time="2025-01-30T14:01:08.357090486Z" level=error msg="ContainerStatus for \"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd\": not found"
Jan 30 14:01:08.357273 kubelet[2591]: E0130 14:01:08.357239 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd\": not found" containerID="eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd"
Jan 30 14:01:08.357364 kubelet[2591]: I0130 14:01:08.357269 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd"} err="failed to get container status \"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"eee13043282434d03bd6127f0753bfd44a6619b18cff75e2469ccd4c02aea3bd\": not found"
Jan 30 14:01:08.357364 kubelet[2591]: I0130 14:01:08.357290 2591 scope.go:117] "RemoveContainer" containerID="9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5"
Jan 30 14:01:08.357486 containerd[1471]: time="2025-01-30T14:01:08.357448904Z" level=error msg="ContainerStatus for \"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5\": not found"
Jan 30 14:01:08.357756 kubelet[2591]: E0130 14:01:08.357707 2591 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5\": not found" containerID="9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5"
Jan 30 14:01:08.357756 kubelet[2591]: I0130 14:01:08.357744 2591 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5"} err="failed to get container status \"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c106acb3151864a6c655f58258485cfd162ea62dadf35d8bb664392afb8c5a5\": not found"
Jan 30 14:01:08.713002 kubelet[2591]: I0130 14:01:08.712912 2591 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f0f1169-53d9-4dea-9562-11b25b7a019d" path="/var/lib/kubelet/pods/7f0f1169-53d9-4dea-9562-11b25b7a019d/volumes"
Jan 30 14:01:08.713872 kubelet[2591]: I0130 14:01:08.713801 2591 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9bf71e6-f1ff-4e52-85ce-5684a6ee6828" path="/var/lib/kubelet/pods/d9bf71e6-f1ff-4e52-85ce-5684a6ee6828/volumes"
Jan 30 14:01:09.025392 sshd[4211]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:09.037366 systemd[1]: sshd@27-143.198.62.166:22-147.75.109.163:41444.service: Deactivated successfully.
Jan 30 14:01:09.042311 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 14:01:09.046309 systemd-logind[1444]: Session 27 logged out. Waiting for processes to exit.
Jan 30 14:01:09.057144 systemd[1]: Started sshd@28-143.198.62.166:22-147.75.109.163:58272.service - OpenSSH per-connection server daemon (147.75.109.163:58272).
Jan 30 14:01:09.059526 systemd-logind[1444]: Removed session 27.
Jan 30 14:01:09.113984 sshd[4371]: Accepted publickey for core from 147.75.109.163 port 58272 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:09.117288 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:09.126812 systemd-logind[1444]: New session 28 of user core.
Jan 30 14:01:09.130271 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 14:01:09.761630 sshd[4371]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:09.778442 systemd[1]: sshd@28-143.198.62.166:22-147.75.109.163:58272.service: Deactivated successfully.
Jan 30 14:01:09.786642 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 14:01:09.789127 systemd-logind[1444]: Session 28 logged out. Waiting for processes to exit.
Jan 30 14:01:09.806295 systemd[1]: Started sshd@29-143.198.62.166:22-147.75.109.163:58274.service - OpenSSH per-connection server daemon (147.75.109.163:58274).
Jan 30 14:01:09.809099 systemd-logind[1444]: Removed session 28.
Jan 30 14:01:09.829652 kubelet[2591]: I0130 14:01:09.829508 2591 topology_manager.go:215] "Topology Admit Handler" podUID="56658211-7b0e-4196-9cfa-f6d5066cf3d8" podNamespace="kube-system" podName="cilium-t9cpc"
Jan 30 14:01:09.837758 kubelet[2591]: E0130 14:01:09.837487 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f0f1169-53d9-4dea-9562-11b25b7a019d" containerName="apply-sysctl-overwrites"
Jan 30 14:01:09.837758 kubelet[2591]: E0130 14:01:09.837537 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f0f1169-53d9-4dea-9562-11b25b7a019d" containerName="mount-bpf-fs"
Jan 30 14:01:09.837758 kubelet[2591]: E0130 14:01:09.837548 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f0f1169-53d9-4dea-9562-11b25b7a019d" containerName="clean-cilium-state"
Jan 30 14:01:09.837758 kubelet[2591]: E0130 14:01:09.837556 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f0f1169-53d9-4dea-9562-11b25b7a019d" containerName="cilium-agent"
Jan 30 14:01:09.837758 kubelet[2591]: E0130 14:01:09.837565 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9bf71e6-f1ff-4e52-85ce-5684a6ee6828" containerName="cilium-operator"
Jan 30 14:01:09.837758 kubelet[2591]: E0130 14:01:09.837609 2591 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f0f1169-53d9-4dea-9562-11b25b7a019d" containerName="mount-cgroup"
Jan 30 14:01:09.857853 kubelet[2591]: I0130 14:01:09.837660 2591 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f0f1169-53d9-4dea-9562-11b25b7a019d" containerName="cilium-agent"
Jan 30 14:01:09.857853 kubelet[2591]: I0130 14:01:09.856078 2591 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9bf71e6-f1ff-4e52-85ce-5684a6ee6828" containerName="cilium-operator"
Jan 30 14:01:09.909909 sshd[4383]: Accepted publickey for core from 147.75.109.163 port 58274 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:09.913753 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:09.929985 kubelet[2591]: I0130 14:01:09.928775 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56658211-7b0e-4196-9cfa-f6d5066cf3d8-bpf-maps\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.929985 kubelet[2591]: I0130 14:01:09.928840 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56658211-7b0e-4196-9cfa-f6d5066cf3d8-host-proc-sys-net\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.929985 kubelet[2591]: I0130 14:01:09.928873 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56658211-7b0e-4196-9cfa-f6d5066cf3d8-host-proc-sys-kernel\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.929985 kubelet[2591]: I0130 14:01:09.928905 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56658211-7b0e-4196-9cfa-f6d5066cf3d8-cilium-cgroup\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.929985 kubelet[2591]: I0130 14:01:09.928930 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56658211-7b0e-4196-9cfa-f6d5066cf3d8-cni-path\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.929985 kubelet[2591]: I0130 14:01:09.928987 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56658211-7b0e-4196-9cfa-f6d5066cf3d8-xtables-lock\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.930471 kubelet[2591]: I0130 14:01:09.929013 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56658211-7b0e-4196-9cfa-f6d5066cf3d8-lib-modules\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.930471 kubelet[2591]: I0130 14:01:09.929039 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/56658211-7b0e-4196-9cfa-f6d5066cf3d8-cilium-ipsec-secrets\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.930471 kubelet[2591]: I0130 14:01:09.929061 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56658211-7b0e-4196-9cfa-f6d5066cf3d8-etc-cni-netd\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.930471 kubelet[2591]: I0130 14:01:09.929089 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56658211-7b0e-4196-9cfa-f6d5066cf3d8-hubble-tls\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.930471 kubelet[2591]: I0130 14:01:09.929117 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56658211-7b0e-4196-9cfa-f6d5066cf3d8-cilium-config-path\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.930471 kubelet[2591]: I0130 14:01:09.929140 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67d65\" (UniqueName: \"kubernetes.io/projected/56658211-7b0e-4196-9cfa-f6d5066cf3d8-kube-api-access-67d65\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.930753 kubelet[2591]: I0130 14:01:09.929163 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56658211-7b0e-4196-9cfa-f6d5066cf3d8-cilium-run\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.930753 kubelet[2591]: I0130 14:01:09.929186 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56658211-7b0e-4196-9cfa-f6d5066cf3d8-hostproc\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.930753 kubelet[2591]: I0130 14:01:09.929210 2591 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56658211-7b0e-4196-9cfa-f6d5066cf3d8-clustermesh-secrets\") pod \"cilium-t9cpc\" (UID: \"56658211-7b0e-4196-9cfa-f6d5066cf3d8\") " pod="kube-system/cilium-t9cpc"
Jan 30 14:01:09.941149 systemd-logind[1444]: New session 29 of user core.
Jan 30 14:01:09.948433 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 30 14:01:09.968520 systemd[1]: Created slice kubepods-burstable-pod56658211_7b0e_4196_9cfa_f6d5066cf3d8.slice - libcontainer container kubepods-burstable-pod56658211_7b0e_4196_9cfa_f6d5066cf3d8.slice.
Jan 30 14:01:10.046847 sshd[4383]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:10.066135 systemd[1]: sshd@29-143.198.62.166:22-147.75.109.163:58274.service: Deactivated successfully.
Jan 30 14:01:10.073310 systemd[1]: session-29.scope: Deactivated successfully.
Jan 30 14:01:10.076278 systemd-logind[1444]: Session 29 logged out. Waiting for processes to exit.
Jan 30 14:01:10.089547 systemd[1]: Started sshd@30-143.198.62.166:22-147.75.109.163:58280.service - OpenSSH per-connection server daemon (147.75.109.163:58280).
Jan 30 14:01:10.091688 systemd-logind[1444]: Removed session 29.
Jan 30 14:01:10.111177 kubelet[2591]: E0130 14:01:10.039332 2591 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 14:01:10.187081 sshd[4391]: Accepted publickey for core from 147.75.109.163 port 58280 ssh2: RSA SHA256:nniWhUXb7YSTcabVY3ysk3m2XR3g3yvp0Y+YACmZFTQ
Jan 30 14:01:10.189634 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:01:10.198875 systemd-logind[1444]: New session 30 of user core.
Jan 30 14:01:10.205291 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 30 14:01:10.285295 kubelet[2591]: E0130 14:01:10.285246 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:10.287029 containerd[1471]: time="2025-01-30T14:01:10.286095935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t9cpc,Uid:56658211-7b0e-4196-9cfa-f6d5066cf3d8,Namespace:kube-system,Attempt:0,}"
Jan 30 14:01:10.354087 containerd[1471]: time="2025-01-30T14:01:10.350697796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:01:10.354087 containerd[1471]: time="2025-01-30T14:01:10.353020143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:01:10.354087 containerd[1471]: time="2025-01-30T14:01:10.353048530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:10.354087 containerd[1471]: time="2025-01-30T14:01:10.353226526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:01:10.400272 systemd[1]: Started cri-containerd-6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50.scope - libcontainer container 6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50.
Jan 30 14:01:10.484009 containerd[1471]: time="2025-01-30T14:01:10.482207251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t9cpc,Uid:56658211-7b0e-4196-9cfa-f6d5066cf3d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50\""
Jan 30 14:01:10.484248 kubelet[2591]: E0130 14:01:10.483825 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:10.499078 containerd[1471]: time="2025-01-30T14:01:10.498919492Z" level=info msg="CreateContainer within sandbox \"6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 14:01:10.535373 containerd[1471]: time="2025-01-30T14:01:10.535268884Z" level=info msg="CreateContainer within sandbox \"6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"48a222842c0e689ef0b220f3e1443c21ad6d150a0fef47ad29b31f23aebbc4f5\""
Jan 30 14:01:10.536509 containerd[1471]: time="2025-01-30T14:01:10.536456555Z" level=info msg="StartContainer for \"48a222842c0e689ef0b220f3e1443c21ad6d150a0fef47ad29b31f23aebbc4f5\""
Jan 30 14:01:10.580304 systemd[1]: Started cri-containerd-48a222842c0e689ef0b220f3e1443c21ad6d150a0fef47ad29b31f23aebbc4f5.scope - libcontainer container 48a222842c0e689ef0b220f3e1443c21ad6d150a0fef47ad29b31f23aebbc4f5.
Jan 30 14:01:10.643542 containerd[1471]: time="2025-01-30T14:01:10.643396923Z" level=info msg="StartContainer for \"48a222842c0e689ef0b220f3e1443c21ad6d150a0fef47ad29b31f23aebbc4f5\" returns successfully"
Jan 30 14:01:10.671905 systemd[1]: cri-containerd-48a222842c0e689ef0b220f3e1443c21ad6d150a0fef47ad29b31f23aebbc4f5.scope: Deactivated successfully.
Jan 30 14:01:10.721698 containerd[1471]: time="2025-01-30T14:01:10.721533377Z" level=info msg="shim disconnected" id=48a222842c0e689ef0b220f3e1443c21ad6d150a0fef47ad29b31f23aebbc4f5 namespace=k8s.io
Jan 30 14:01:10.721698 containerd[1471]: time="2025-01-30T14:01:10.721650083Z" level=warning msg="cleaning up after shim disconnected" id=48a222842c0e689ef0b220f3e1443c21ad6d150a0fef47ad29b31f23aebbc4f5 namespace=k8s.io
Jan 30 14:01:10.721698 containerd[1471]: time="2025-01-30T14:01:10.721663141Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:01:11.235303 kubelet[2591]: E0130 14:01:11.235259 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:11.243002 containerd[1471]: time="2025-01-30T14:01:11.242089098Z" level=info msg="CreateContainer within sandbox \"6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 14:01:11.284905 containerd[1471]: time="2025-01-30T14:01:11.284828327Z" level=info msg="CreateContainer within sandbox \"6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09984304721ceb6166c72aaf740a436259e5bc068fc2270fd22e78261872ee34\""
Jan 30 14:01:11.288785 containerd[1471]: time="2025-01-30T14:01:11.288628964Z" level=info msg="StartContainer for \"09984304721ceb6166c72aaf740a436259e5bc068fc2270fd22e78261872ee34\""
Jan 30 14:01:11.343206 systemd[1]: run-containerd-runc-k8s.io-09984304721ceb6166c72aaf740a436259e5bc068fc2270fd22e78261872ee34-runc.a2KKEC.mount: Deactivated successfully.
Jan 30 14:01:11.353266 systemd[1]: Started cri-containerd-09984304721ceb6166c72aaf740a436259e5bc068fc2270fd22e78261872ee34.scope - libcontainer container 09984304721ceb6166c72aaf740a436259e5bc068fc2270fd22e78261872ee34.
Jan 30 14:01:11.395404 containerd[1471]: time="2025-01-30T14:01:11.395123533Z" level=info msg="StartContainer for \"09984304721ceb6166c72aaf740a436259e5bc068fc2270fd22e78261872ee34\" returns successfully"
Jan 30 14:01:11.417458 systemd[1]: cri-containerd-09984304721ceb6166c72aaf740a436259e5bc068fc2270fd22e78261872ee34.scope: Deactivated successfully.
Jan 30 14:01:11.456360 containerd[1471]: time="2025-01-30T14:01:11.456263798Z" level=info msg="shim disconnected" id=09984304721ceb6166c72aaf740a436259e5bc068fc2270fd22e78261872ee34 namespace=k8s.io
Jan 30 14:01:11.456360 containerd[1471]: time="2025-01-30T14:01:11.456351434Z" level=warning msg="cleaning up after shim disconnected" id=09984304721ceb6166c72aaf740a436259e5bc068fc2270fd22e78261872ee34 namespace=k8s.io
Jan 30 14:01:11.456360 containerd[1471]: time="2025-01-30T14:01:11.456365093Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:01:12.125492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09984304721ceb6166c72aaf740a436259e5bc068fc2270fd22e78261872ee34-rootfs.mount: Deactivated successfully.
Jan 30 14:01:12.241963 kubelet[2591]: E0130 14:01:12.241778 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:12.251405 containerd[1471]: time="2025-01-30T14:01:12.251340721Z" level=info msg="CreateContainer within sandbox \"6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 14:01:12.313409 containerd[1471]: time="2025-01-30T14:01:12.313328680Z" level=info msg="CreateContainer within sandbox \"6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3970fdf9451fc4c6c381a4057642c500a17888f2981dd58769a5880278c77869\""
Jan 30 14:01:12.315091 containerd[1471]: time="2025-01-30T14:01:12.315033677Z" level=info msg="StartContainer for \"3970fdf9451fc4c6c381a4057642c500a17888f2981dd58769a5880278c77869\""
Jan 30 14:01:12.378360 systemd[1]: Started cri-containerd-3970fdf9451fc4c6c381a4057642c500a17888f2981dd58769a5880278c77869.scope - libcontainer container 3970fdf9451fc4c6c381a4057642c500a17888f2981dd58769a5880278c77869.
Jan 30 14:01:12.464112 containerd[1471]: time="2025-01-30T14:01:12.464022062Z" level=info msg="StartContainer for \"3970fdf9451fc4c6c381a4057642c500a17888f2981dd58769a5880278c77869\" returns successfully"
Jan 30 14:01:12.469102 systemd[1]: cri-containerd-3970fdf9451fc4c6c381a4057642c500a17888f2981dd58769a5880278c77869.scope: Deactivated successfully.
Jan 30 14:01:12.521840 containerd[1471]: time="2025-01-30T14:01:12.521744564Z" level=info msg="shim disconnected" id=3970fdf9451fc4c6c381a4057642c500a17888f2981dd58769a5880278c77869 namespace=k8s.io
Jan 30 14:01:12.523004 containerd[1471]: time="2025-01-30T14:01:12.522886344Z" level=warning msg="cleaning up after shim disconnected" id=3970fdf9451fc4c6c381a4057642c500a17888f2981dd58769a5880278c77869 namespace=k8s.io
Jan 30 14:01:12.523004 containerd[1471]: time="2025-01-30T14:01:12.522952671Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:01:13.125326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3970fdf9451fc4c6c381a4057642c500a17888f2981dd58769a5880278c77869-rootfs.mount: Deactivated successfully.
Jan 30 14:01:13.249019 kubelet[2591]: E0130 14:01:13.248371 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:13.260533 containerd[1471]: time="2025-01-30T14:01:13.260257959Z" level=info msg="CreateContainer within sandbox \"6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 14:01:13.299546 containerd[1471]: time="2025-01-30T14:01:13.299455303Z" level=info msg="CreateContainer within sandbox \"6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8c13f5fbb80a271ebe249526fd0da94d22270d09cea6cc738294141a85eca09c\""
Jan 30 14:01:13.302478 containerd[1471]: time="2025-01-30T14:01:13.302410890Z" level=info msg="StartContainer for \"8c13f5fbb80a271ebe249526fd0da94d22270d09cea6cc738294141a85eca09c\""
Jan 30 14:01:13.367381 systemd[1]: Started cri-containerd-8c13f5fbb80a271ebe249526fd0da94d22270d09cea6cc738294141a85eca09c.scope - libcontainer container 8c13f5fbb80a271ebe249526fd0da94d22270d09cea6cc738294141a85eca09c.
Jan 30 14:01:13.407463 systemd[1]: cri-containerd-8c13f5fbb80a271ebe249526fd0da94d22270d09cea6cc738294141a85eca09c.scope: Deactivated successfully.
Jan 30 14:01:13.417109 containerd[1471]: time="2025-01-30T14:01:13.414686857Z" level=info msg="StartContainer for \"8c13f5fbb80a271ebe249526fd0da94d22270d09cea6cc738294141a85eca09c\" returns successfully"
Jan 30 14:01:13.457676 containerd[1471]: time="2025-01-30T14:01:13.457591062Z" level=info msg="shim disconnected" id=8c13f5fbb80a271ebe249526fd0da94d22270d09cea6cc738294141a85eca09c namespace=k8s.io
Jan 30 14:01:13.458229 containerd[1471]: time="2025-01-30T14:01:13.458195508Z" level=warning msg="cleaning up after shim disconnected" id=8c13f5fbb80a271ebe249526fd0da94d22270d09cea6cc738294141a85eca09c namespace=k8s.io
Jan 30 14:01:13.458358 containerd[1471]: time="2025-01-30T14:01:13.458335438Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:01:14.125699 systemd[1]: run-containerd-runc-k8s.io-8c13f5fbb80a271ebe249526fd0da94d22270d09cea6cc738294141a85eca09c-runc.xumqYJ.mount: Deactivated successfully.
Jan 30 14:01:14.125870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c13f5fbb80a271ebe249526fd0da94d22270d09cea6cc738294141a85eca09c-rootfs.mount: Deactivated successfully.
Jan 30 14:01:14.258736 kubelet[2591]: E0130 14:01:14.258663 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:14.266761 containerd[1471]: time="2025-01-30T14:01:14.266630118Z" level=info msg="CreateContainer within sandbox \"6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 14:01:14.306861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833936217.mount: Deactivated successfully.
Jan 30 14:01:14.311016 containerd[1471]: time="2025-01-30T14:01:14.310793576Z" level=info msg="CreateContainer within sandbox \"6187b2577c3b81684721298b31cab8bc390a13549a449b599ec2bfda47f90e50\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"100cd0d3a88cba022923fed760f94d616ce7038e6b1c3cdfbb36d29d2aff06c8\""
Jan 30 14:01:14.313801 containerd[1471]: time="2025-01-30T14:01:14.313754678Z" level=info msg="StartContainer for \"100cd0d3a88cba022923fed760f94d616ce7038e6b1c3cdfbb36d29d2aff06c8\""
Jan 30 14:01:14.365236 systemd[1]: Started cri-containerd-100cd0d3a88cba022923fed760f94d616ce7038e6b1c3cdfbb36d29d2aff06c8.scope - libcontainer container 100cd0d3a88cba022923fed760f94d616ce7038e6b1c3cdfbb36d29d2aff06c8.
Jan 30 14:01:14.424360 containerd[1471]: time="2025-01-30T14:01:14.424293726Z" level=info msg="StartContainer for \"100cd0d3a88cba022923fed760f94d616ce7038e6b1c3cdfbb36d29d2aff06c8\" returns successfully"
Jan 30 14:01:15.272260 kubelet[2591]: E0130 14:01:15.270526 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:15.288769 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 14:01:16.288402 kubelet[2591]: E0130 14:01:16.287796 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:17.021387 systemd[1]: run-containerd-runc-k8s.io-100cd0d3a88cba022923fed760f94d616ce7038e6b1c3cdfbb36d29d2aff06c8-runc.S5Uloi.mount: Deactivated successfully.
Jan 30 14:01:19.371757 systemd-networkd[1370]: lxc_health: Link UP
Jan 30 14:01:19.386961 systemd-networkd[1370]: lxc_health: Gained carrier
Jan 30 14:01:20.290541 kubelet[2591]: E0130 14:01:20.290486 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:20.322845 kubelet[2591]: I0130 14:01:20.321869 2591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t9cpc" podStartSLOduration=11.321839884 podStartE2EDuration="11.321839884s" podCreationTimestamp="2025-01-30 14:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:01:15.312417599 +0000 UTC m=+110.799668523" watchObservedRunningTime="2025-01-30 14:01:20.321839884 +0000 UTC m=+115.809090802"
Jan 30 14:01:21.279049 systemd-networkd[1370]: lxc_health: Gained IPv6LL
Jan 30 14:01:21.297168 kubelet[2591]: E0130 14:01:21.297113 2591 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Jan 30 14:01:21.600363 systemd[1]: run-containerd-runc-k8s.io-100cd0d3a88cba022923fed760f94d616ce7038e6b1c3cdfbb36d29d2aff06c8-runc.VAMooQ.mount: Deactivated successfully.
Jan 30 14:01:24.709131 containerd[1471]: time="2025-01-30T14:01:24.707250409Z" level=info msg="StopPodSandbox for \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\""
Jan 30 14:01:24.721266 containerd[1471]: time="2025-01-30T14:01:24.721084147Z" level=info msg="TearDown network for sandbox \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\" successfully"
Jan 30 14:01:24.721266 containerd[1471]: time="2025-01-30T14:01:24.721130210Z" level=info msg="StopPodSandbox for \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\" returns successfully"
Jan 30 14:01:24.722034 containerd[1471]: time="2025-01-30T14:01:24.721992422Z" level=info msg="RemovePodSandbox for \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\""
Jan 30 14:01:24.722125 containerd[1471]: time="2025-01-30T14:01:24.722057524Z" level=info msg="Forcibly stopping sandbox \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\""
Jan 30 14:01:24.722168 containerd[1471]: time="2025-01-30T14:01:24.722149026Z" level=info msg="TearDown network for sandbox \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\" successfully"
Jan 30 14:01:24.730273 containerd[1471]: time="2025-01-30T14:01:24.730195948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 14:01:24.730463 containerd[1471]: time="2025-01-30T14:01:24.730302411Z" level=info msg="RemovePodSandbox \"36c6fe9b04195d36645e8488756ef6f4b63769e517b4904a8b9f10428f592713\" returns successfully"
Jan 30 14:01:24.731982 containerd[1471]: time="2025-01-30T14:01:24.731105806Z" level=info msg="StopPodSandbox for \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\""
Jan 30 14:01:24.731982 containerd[1471]: time="2025-01-30T14:01:24.731242262Z" level=info msg="TearDown network for sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" successfully"
Jan 30 14:01:24.731982 containerd[1471]: time="2025-01-30T14:01:24.731266299Z" level=info msg="StopPodSandbox for \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" returns successfully"
Jan 30 14:01:24.734993 containerd[1471]: time="2025-01-30T14:01:24.733241620Z" level=info msg="RemovePodSandbox for \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\""
Jan 30 14:01:24.734993 containerd[1471]: time="2025-01-30T14:01:24.733279802Z" level=info msg="Forcibly stopping sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\""
Jan 30 14:01:24.734993 containerd[1471]: time="2025-01-30T14:01:24.733350065Z" level=info msg="TearDown network for sandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" successfully"
Jan 30 14:01:24.741206 containerd[1471]: time="2025-01-30T14:01:24.741051606Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 14:01:24.741206 containerd[1471]: time="2025-01-30T14:01:24.741181133Z" level=info msg="RemovePodSandbox \"3539974cdbcb03e5a1f106be68be6088cff6e5ed3e96064568b876c154ca46de\" returns successfully"
Jan 30 14:01:26.182254 sshd[4391]: pam_unix(sshd:session): session closed for user core
Jan 30 14:01:26.189172 systemd[1]: sshd@30-143.198.62.166:22-147.75.109.163:58280.service: Deactivated successfully.
Jan 30 14:01:26.193554 systemd[1]: session-30.scope: Deactivated successfully.
Jan 30 14:01:26.202428 systemd-logind[1444]: Session 30 logged out. Waiting for processes to exit.
Jan 30 14:01:26.205811 systemd-logind[1444]: Removed session 30.