Feb 13 20:29:42.968979 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:29:42.969018 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:29:42.969039 kernel: BIOS-provided physical RAM map:
Feb 13 20:29:42.969050 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 20:29:42.969061 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 20:29:42.969072 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 20:29:42.969086 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Feb 13 20:29:42.969097 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Feb 13 20:29:42.969107 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 20:29:42.969123 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 20:29:42.969136 kernel: NX (Execute Disable) protection: active
Feb 13 20:29:42.969147 kernel: APIC: Static calls initialized
Feb 13 20:29:42.969166 kernel: SMBIOS 2.8 present.
Feb 13 20:29:42.969179 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 13 20:29:42.969195 kernel: Hypervisor detected: KVM
Feb 13 20:29:42.969214 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:29:42.969231 kernel: kvm-clock: using sched offset of 3385225779 cycles
Feb 13 20:29:42.969245 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:29:42.969259 kernel: tsc: Detected 2494.140 MHz processor
Feb 13 20:29:42.969273 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:29:42.969287 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:29:42.969301 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Feb 13 20:29:42.969314 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 20:29:42.969328 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:29:42.969346 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:29:42.969360 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Feb 13 20:29:42.969371 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:29:42.969382 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:29:42.969394 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:29:42.969405 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 13 20:29:42.969457 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:29:42.969470 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:29:42.969484 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:29:42.969499 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:29:42.969507 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 13 20:29:42.969514 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 13 20:29:42.969522 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 13 20:29:42.969530 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 13 20:29:42.969538 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 13 20:29:42.969546 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 13 20:29:42.969560 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 13 20:29:42.969568 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:29:42.969576 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:29:42.969585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:29:42.969593 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 20:29:42.969605 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Feb 13 20:29:42.969614 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Feb 13 20:29:42.969626 kernel: Zone ranges:
Feb 13 20:29:42.969634 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:29:42.969642 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Feb 13 20:29:42.969651 kernel: Normal empty
Feb 13 20:29:42.969659 kernel: Movable zone start for each node
Feb 13 20:29:42.969667 kernel: Early memory node ranges
Feb 13 20:29:42.969675 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 20:29:42.969683 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Feb 13 20:29:42.969692 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Feb 13 20:29:42.969703 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:29:42.969711 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 20:29:42.969723 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Feb 13 20:29:42.969735 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 20:29:42.969748 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:29:42.969762 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:29:42.969776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 20:29:42.969790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:29:42.969805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:29:42.969823 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:29:42.969837 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:29:42.969852 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:29:42.969867 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 20:29:42.969882 kernel: TSC deadline timer available
Feb 13 20:29:42.969895 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:29:42.969907 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 20:29:42.969921 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 13 20:29:42.969939 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:29:42.969951 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:29:42.969969 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:29:42.969981 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:29:42.969992 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:29:42.970004 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:29:42.970016 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 13 20:29:42.970031 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:29:42.970045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:29:42.970058 kernel: random: crng init done
Feb 13 20:29:42.970076 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:29:42.970088 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:29:42.970103 kernel: Fallback order for Node 0: 0
Feb 13 20:29:42.970117 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Feb 13 20:29:42.970130 kernel: Policy zone: DMA32
Feb 13 20:29:42.970144 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:29:42.970159 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 125148K reserved, 0K cma-reserved)
Feb 13 20:29:42.970174 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:29:42.970192 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:29:42.970207 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:29:42.970221 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:29:42.970236 kernel: Dynamic Preempt: voluntary
Feb 13 20:29:42.970251 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:29:42.970267 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:29:42.970283 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:29:42.970297 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:29:42.970310 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:29:42.970325 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:29:42.970343 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:29:42.970356 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:29:42.970368 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 20:29:42.970381 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:29:42.970401 kernel: Console: colour VGA+ 80x25
Feb 13 20:29:42.971477 kernel: printk: console [tty0] enabled
Feb 13 20:29:42.971494 kernel: printk: console [ttyS0] enabled
Feb 13 20:29:42.971504 kernel: ACPI: Core revision 20230628
Feb 13 20:29:42.971513 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 20:29:42.971527 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:29:42.971535 kernel: x2apic enabled
Feb 13 20:29:42.971544 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:29:42.971553 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 20:29:42.971561 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Feb 13 20:29:42.971570 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Feb 13 20:29:42.971578 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 20:29:42.971586 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 20:29:42.971606 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:29:42.971615 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 20:29:42.971623 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:29:42.971635 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:29:42.971644 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 20:29:42.971652 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:29:42.971661 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:29:42.971670 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 20:29:42.971679 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:29:42.971694 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:29:42.971703 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:29:42.971711 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:29:42.971720 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:29:42.971729 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 20:29:42.971738 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:29:42.971747 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:29:42.971755 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:29:42.971767 kernel: landlock: Up and running.
Feb 13 20:29:42.971776 kernel: SELinux: Initializing.
Feb 13 20:29:42.971785 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:29:42.971794 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:29:42.971803 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Feb 13 20:29:42.971812 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:29:42.971821 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:29:42.971829 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:29:42.971844 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Feb 13 20:29:42.971858 kernel: signal: max sigframe size: 1776
Feb 13 20:29:42.971866 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:29:42.971876 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:29:42.971885 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:29:42.971894 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:29:42.971902 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:29:42.971911 kernel: .... node #0, CPUs: #1
Feb 13 20:29:42.971920 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:29:42.971936 kernel: smpboot: Max logical packages: 1
Feb 13 20:29:42.971952 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Feb 13 20:29:42.971966 kernel: devtmpfs: initialized
Feb 13 20:29:42.971978 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:29:42.971990 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:29:42.971999 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:29:42.972008 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:29:42.972017 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:29:42.972026 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:29:42.972035 kernel: audit: type=2000 audit(1739478581.203:1): state=initialized audit_enabled=0 res=1
Feb 13 20:29:42.972046 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:29:42.972055 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:29:42.972063 kernel: cpuidle: using governor menu
Feb 13 20:29:42.972072 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:29:42.972081 kernel: dca service started, version 1.12.1
Feb 13 20:29:42.972090 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:29:42.972099 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:29:42.972108 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:29:42.972116 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:29:42.972128 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:29:42.972137 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:29:42.972146 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:29:42.972154 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:29:42.972163 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:29:42.972172 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:29:42.972181 kernel: ACPI: Interpreter enabled
Feb 13 20:29:42.972190 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:29:42.972199 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:29:42.972211 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:29:42.972219 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 20:29:42.972228 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 13 20:29:42.972237 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:29:42.972461 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:29:42.972574 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 20:29:42.972669 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 20:29:42.972685 kernel: acpiphp: Slot [3] registered
Feb 13 20:29:42.972694 kernel: acpiphp: Slot [4] registered
Feb 13 20:29:42.972703 kernel: acpiphp: Slot [5] registered
Feb 13 20:29:42.972712 kernel: acpiphp: Slot [6] registered
Feb 13 20:29:42.972720 kernel: acpiphp: Slot [7] registered
Feb 13 20:29:42.972729 kernel: acpiphp: Slot [8] registered
Feb 13 20:29:42.972737 kernel: acpiphp: Slot [9] registered
Feb 13 20:29:42.972746 kernel: acpiphp: Slot [10] registered
Feb 13 20:29:42.972754 kernel: acpiphp: Slot [11] registered
Feb 13 20:29:42.972763 kernel: acpiphp: Slot [12] registered
Feb 13 20:29:42.972771 kernel: acpiphp: Slot [13] registered
Feb 13 20:29:42.972782 kernel: acpiphp: Slot [14] registered
Feb 13 20:29:42.972791 kernel: acpiphp: Slot [15] registered
Feb 13 20:29:42.972800 kernel: acpiphp: Slot [16] registered
Feb 13 20:29:42.972808 kernel: acpiphp: Slot [17] registered
Feb 13 20:29:42.972817 kernel: acpiphp: Slot [18] registered
Feb 13 20:29:42.972826 kernel: acpiphp: Slot [19] registered
Feb 13 20:29:42.972834 kernel: acpiphp: Slot [20] registered
Feb 13 20:29:42.972843 kernel: acpiphp: Slot [21] registered
Feb 13 20:29:42.972852 kernel: acpiphp: Slot [22] registered
Feb 13 20:29:42.972861 kernel: acpiphp: Slot [23] registered
Feb 13 20:29:42.972872 kernel: acpiphp: Slot [24] registered
Feb 13 20:29:42.972881 kernel: acpiphp: Slot [25] registered
Feb 13 20:29:42.972890 kernel: acpiphp: Slot [26] registered
Feb 13 20:29:42.972898 kernel: acpiphp: Slot [27] registered
Feb 13 20:29:42.972907 kernel: acpiphp: Slot [28] registered
Feb 13 20:29:42.972916 kernel: acpiphp: Slot [29] registered
Feb 13 20:29:42.972924 kernel: acpiphp: Slot [30] registered
Feb 13 20:29:42.972933 kernel: acpiphp: Slot [31] registered
Feb 13 20:29:42.972942 kernel: PCI host bridge to bus 0000:00
Feb 13 20:29:42.973051 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:29:42.973139 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:29:42.973223 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:29:42.973306 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 20:29:42.973388 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 13 20:29:42.973495 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:29:42.973616 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 20:29:42.973745 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 20:29:42.973853 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 13 20:29:42.973948 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 13 20:29:42.974043 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 13 20:29:42.974141 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 13 20:29:42.974251 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 13 20:29:42.974354 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 13 20:29:42.974506 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 13 20:29:42.974756 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 13 20:29:42.974876 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 20:29:42.974999 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 13 20:29:42.975101 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 13 20:29:42.975269 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 13 20:29:42.975406 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 13 20:29:42.975596 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 13 20:29:42.975694 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 13 20:29:42.975789 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 20:29:42.975885 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 20:29:42.975995 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:29:42.976098 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 13 20:29:42.976195 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 13 20:29:42.976292 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 13 20:29:42.976398 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:29:42.976546 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 13 20:29:42.976643 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 13 20:29:42.976741 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 13 20:29:42.976894 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 13 20:29:42.977005 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 13 20:29:42.977100 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 13 20:29:42.977196 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 13 20:29:42.977317 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:29:42.977441 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 20:29:42.977539 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 13 20:29:42.977639 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 13 20:29:42.977791 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:29:42.977911 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 13 20:29:42.978026 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 13 20:29:42.978128 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 13 20:29:42.978249 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 13 20:29:42.978355 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 13 20:29:42.978531 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 13 20:29:42.978550 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:29:42.978564 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:29:42.978578 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:29:42.978591 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:29:42.978604 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 20:29:42.978618 kernel: iommu: Default domain type: Translated
Feb 13 20:29:42.978632 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:29:42.978642 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:29:42.978651 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:29:42.978660 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 20:29:42.978669 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Feb 13 20:29:42.978784 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 13 20:29:42.978881 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 13 20:29:42.978973 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 20:29:42.978985 kernel: vgaarb: loaded
Feb 13 20:29:42.978999 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 20:29:42.979008 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 20:29:42.979017 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:29:42.979026 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:29:42.979035 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:29:42.979044 kernel: pnp: PnP ACPI init
Feb 13 20:29:42.979056 kernel: pnp: PnP ACPI: found 4 devices
Feb 13 20:29:42.979070 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:29:42.979082 kernel: NET: Registered PF_INET protocol family
Feb 13 20:29:42.979099 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:29:42.979113 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 20:29:42.979127 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:29:42.979142 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:29:42.979156 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 20:29:42.979171 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 20:29:42.979185 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:29:42.979198 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:29:42.979207 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:29:42.979220 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:29:42.979337 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:29:42.979456 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:29:42.979545 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:29:42.979642 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 20:29:42.979731 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 13 20:29:42.979838 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 13 20:29:42.979942 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 20:29:42.979961 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 20:29:42.980066 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 31658 usecs
Feb 13 20:29:42.980079 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:29:42.980088 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:29:42.980098 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Feb 13 20:29:42.980107 kernel: Initialise system trusted keyrings
Feb 13 20:29:42.980116 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 20:29:42.980125 kernel: Key type asymmetric registered
Feb 13 20:29:42.980137 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:29:42.980147 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:29:42.980157 kernel: io scheduler mq-deadline registered
Feb 13 20:29:42.980166 kernel: io scheduler kyber registered
Feb 13 20:29:42.980175 kernel: io scheduler bfq registered
Feb 13 20:29:42.980184 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:29:42.980193 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 13 20:29:42.980202 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 20:29:42.980211 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 20:29:42.980224 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:29:42.980238 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:29:42.980247 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:29:42.980256 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:29:42.980265 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:29:42.980274 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 20:29:42.980406 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 20:29:42.980600 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 20:29:42.980697 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T20:29:42 UTC (1739478582)
Feb 13 20:29:42.980798 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 20:29:42.980810 kernel: intel_pstate: CPU model not supported
Feb 13 20:29:42.980819 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:29:42.980828 kernel: Segment Routing with IPv6
Feb 13 20:29:42.980837 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:29:42.980846 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:29:42.980860 kernel: Key type dns_resolver registered
Feb 13 20:29:42.980871 kernel: IPI shorthand broadcast: enabled
Feb 13 20:29:42.980886 kernel: sched_clock: Marking stable (1091005897, 87801965)->(1206521872, -27714010)
Feb 13 20:29:42.980895 kernel: registered taskstats version 1
Feb 13 20:29:42.980904 kernel: Loading compiled-in X.509 certificates
Feb 13 20:29:42.980914 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:29:42.980923 kernel: Key type .fscrypt registered
Feb 13 20:29:42.980932 kernel: Key type fscrypt-provisioning registered
Feb 13 20:29:42.980941 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:29:42.980950 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:29:42.980958 kernel: ima: No architecture policies found
Feb 13 20:29:42.980970 kernel: clk: Disabling unused clocks
Feb 13 20:29:42.980979 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:29:42.980996 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:29:42.981024 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:29:42.981037 kernel: Run /init as init process
Feb 13 20:29:42.981047 kernel: with arguments:
Feb 13 20:29:42.981056 kernel: /init
Feb 13 20:29:42.981066 kernel: with environment:
Feb 13 20:29:42.981076 kernel: HOME=/
Feb 13 20:29:42.981087 kernel: TERM=linux
Feb 13 20:29:42.981097 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:29:42.981109 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:29:42.981122 systemd[1]: Detected virtualization kvm.
Feb 13 20:29:42.981132 systemd[1]: Detected architecture x86-64.
Feb 13 20:29:42.981141 systemd[1]: Running in initrd.
Feb 13 20:29:42.981151 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:29:42.981163 systemd[1]: Hostname set to .
Feb 13 20:29:42.981173 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:29:42.981182 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:29:42.981192 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:29:42.981202 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:29:42.981213 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:29:42.981223 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:29:42.981233 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:29:42.981245 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:29:42.981257 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:29:42.981267 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:29:42.981277 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:29:42.981287 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:29:42.981296 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:29:42.981306 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:29:42.981332 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:29:42.981345 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:29:42.981354 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:29:42.981364 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:29:42.981374 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:29:42.981385 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:29:42.981397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:29:42.981407 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:29:42.981471 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:29:42.981482 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:29:42.981491 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:29:42.981505 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:29:42.981515 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:29:42.981525 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:29:42.981537 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:29:42.981547 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:29:42.981557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:29:42.981567 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:29:42.981609 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 20:29:42.981638 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:29:42.981648 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:29:42.981659 systemd-journald[184]: Journal started
Feb 13 20:29:42.981683 systemd-journald[184]: Runtime Journal (/run/log/journal/269d4cfeed534a8a912d7fac116d4522) is 4.9M, max 39.3M, 34.4M free.
Feb 13 20:29:42.983434 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:29:42.992724 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 20:29:42.996669 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:29:43.032482 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:29:43.037448 kernel: Bridge firewalling registered
Feb 13 20:29:43.035701 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 20:29:43.039681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:29:43.041323 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:29:43.061796 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:29:43.062770 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:29:43.070733 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:29:43.073891 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:29:43.076697 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:29:43.078004 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:29:43.098279 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:29:43.100704 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:29:43.106752 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:29:43.107647 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:29:43.111640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:29:43.122475 dracut-cmdline[216]: dracut-dracut-053
Feb 13 20:29:43.126330 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:29:43.148924 systemd-resolved[220]: Positive Trust Anchors:
Feb 13 20:29:43.148941 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:29:43.148976 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:29:43.152128 systemd-resolved[220]: Defaulting to hostname 'linux'.
Feb 13 20:29:43.153847 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:29:43.154607 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:29:43.225454 kernel: SCSI subsystem initialized
Feb 13 20:29:43.236471 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:29:43.248445 kernel: iscsi: registered transport (tcp)
Feb 13 20:29:43.270577 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:29:43.270670 kernel: QLogic iSCSI HBA Driver
Feb 13 20:29:43.325586 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:29:43.330828 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:29:43.370599 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:29:43.370696 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:29:43.371870 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:29:43.417507 kernel: raid6: avx2x4 gen() 17011 MB/s
Feb 13 20:29:43.434479 kernel: raid6: avx2x2 gen() 15221 MB/s
Feb 13 20:29:43.451898 kernel: raid6: avx2x1 gen() 12347 MB/s
Feb 13 20:29:43.451978 kernel: raid6: using algorithm avx2x4 gen() 17011 MB/s
Feb 13 20:29:43.469850 kernel: raid6: .... xor() 6703 MB/s, rmw enabled
Feb 13 20:29:43.469937 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 20:29:43.492466 kernel: xor: automatically using best checksumming function avx
Feb 13 20:29:43.671453 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:29:43.686149 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:29:43.692711 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:29:43.725803 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Feb 13 20:29:43.734362 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:29:43.741112 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:29:43.764025 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Feb 13 20:29:43.803164 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:29:43.811805 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:29:43.885134 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:29:43.892615 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:29:43.918846 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:29:43.921044 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:29:43.922783 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:29:43.923134 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:29:43.928724 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:29:43.957881 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:29:43.963559 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Feb 13 20:29:43.973975 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 20:29:43.975203 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:29:43.975222 kernel: GPT:9289727 != 125829119
Feb 13 20:29:43.975234 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:29:43.975246 kernel: GPT:9289727 != 125829119
Feb 13 20:29:43.975257 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:29:43.975268 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:29:43.975279 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Feb 13 20:29:43.991523 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB)
Feb 13 20:29:43.999690 kernel: scsi host0: Virtio SCSI HBA
Feb 13 20:29:44.010475 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:29:44.059793 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:29:44.064586 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459)
Feb 13 20:29:44.059952 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:29:44.062046 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:29:44.062539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:29:44.062715 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:29:44.063133 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:29:44.080445 kernel: ACPI: bus type USB registered
Feb 13 20:29:44.081774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:29:44.084028 kernel: usbcore: registered new interface driver usbfs
Feb 13 20:29:44.084078 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:29:44.086496 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:29:44.086556 kernel: usbcore: registered new interface driver hub
Feb 13 20:29:44.086569 kernel: usbcore: registered new device driver usb
Feb 13 20:29:44.121000 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (448)
Feb 13 20:29:44.139812 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:29:44.189077 kernel: libata version 3.00 loaded.
Feb 13 20:29:44.189126 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 13 20:29:44.189367 kernel: scsi host1: ata_piix
Feb 13 20:29:44.189578 kernel: scsi host2: ata_piix
Feb 13 20:29:44.189693 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Feb 13 20:29:44.189706 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Feb 13 20:29:44.196514 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 13 20:29:44.197397 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 13 20:29:44.197647 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 13 20:29:44.197787 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Feb 13 20:29:44.197922 kernel: hub 1-0:1.0: USB hub found
Feb 13 20:29:44.198115 kernel: hub 1-0:1.0: 2 ports detected
Feb 13 20:29:44.192660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:29:44.208772 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:29:44.214008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:29:44.218485 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:29:44.218965 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:29:44.228739 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:29:44.231641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:29:44.240394 disk-uuid[532]: Primary Header is updated.
Feb 13 20:29:44.240394 disk-uuid[532]: Secondary Entries is updated.
Feb 13 20:29:44.240394 disk-uuid[532]: Secondary Header is updated.
Feb 13 20:29:44.246660 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:29:44.250859 kernel: GPT:disk_guids don't match.
Feb 13 20:29:44.250945 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:29:44.250963 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:29:44.260471 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:29:44.276475 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:29:45.259602 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:29:45.260758 disk-uuid[533]: The operation has completed successfully.
Feb 13 20:29:45.303789 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:29:45.303947 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:29:45.321697 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:29:45.328111 sh[563]: Success
Feb 13 20:29:45.344453 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:29:45.416638 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:29:45.418393 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:29:45.420708 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:29:45.446514 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:29:45.446593 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:29:45.446610 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:29:45.446623 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:29:45.446649 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:29:45.454229 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:29:45.455631 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:29:45.463703 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:29:45.466768 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:29:45.480022 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:29:45.480093 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:29:45.480108 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:29:45.484438 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:29:45.495574 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:29:45.496832 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:29:45.503530 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:29:45.511649 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:29:45.640465 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:29:45.649707 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:29:45.652436 ignition[649]: Ignition 2.19.0
Feb 13 20:29:45.652445 ignition[649]: Stage: fetch-offline
Feb 13 20:29:45.652491 ignition[649]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:29:45.652501 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:29:45.655049 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:29:45.652644 ignition[649]: parsed url from cmdline: ""
Feb 13 20:29:45.652650 ignition[649]: no config URL provided
Feb 13 20:29:45.652658 ignition[649]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:29:45.652670 ignition[649]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:29:45.652678 ignition[649]: failed to fetch config: resource requires networking
Feb 13 20:29:45.652919 ignition[649]: Ignition finished successfully
Feb 13 20:29:45.675848 systemd-networkd[753]: lo: Link UP
Feb 13 20:29:45.675860 systemd-networkd[753]: lo: Gained carrier
Feb 13 20:29:45.678305 systemd-networkd[753]: Enumeration completed
Feb 13 20:29:45.678768 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:29:45.678772 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Feb 13 20:29:45.679051 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:29:45.680101 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:29:45.680105 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:29:45.680869 systemd-networkd[753]: eth0: Link UP
Feb 13 20:29:45.680873 systemd-networkd[753]: eth0: Gained carrier
Feb 13 20:29:45.680882 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:29:45.681046 systemd[1]: Reached target network.target - Network.
Feb 13 20:29:45.685753 systemd-networkd[753]: eth1: Link UP
Feb 13 20:29:45.685758 systemd-networkd[753]: eth1: Gained carrier
Feb 13 20:29:45.685770 systemd-networkd[753]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:29:45.691087 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:29:45.698645 systemd-networkd[753]: eth0: DHCPv4 address 64.23.133.101/20, gateway 64.23.128.1 acquired from 169.254.169.253
Feb 13 20:29:45.703542 systemd-networkd[753]: eth1: DHCPv4 address 10.124.0.14/20 acquired from 169.254.169.253
Feb 13 20:29:45.713613 ignition[756]: Ignition 2.19.0
Feb 13 20:29:45.713626 ignition[756]: Stage: fetch
Feb 13 20:29:45.713856 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:29:45.713869 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:29:45.714002 ignition[756]: parsed url from cmdline: ""
Feb 13 20:29:45.714007 ignition[756]: no config URL provided
Feb 13 20:29:45.714013 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:29:45.714022 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:29:45.714043 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Feb 13 20:29:45.753991 ignition[756]: GET result: OK
Feb 13 20:29:45.754883 ignition[756]: parsing config with SHA512: ab717689614e12f0c0e5a7abf45f070551fe546eb11230c61f26a9cb0eca9a6fe4602329df97381bf522e915792a55791f4b2269aa7659dbf106b8bf62f246cd
Feb 13 20:29:45.760766 unknown[756]: fetched base config from "system"
Feb 13 20:29:45.760787 unknown[756]: fetched base config from "system"
Feb 13 20:29:45.761596 ignition[756]: fetch: fetch complete
Feb 13 20:29:45.760800 unknown[756]: fetched user config from "digitalocean"
Feb 13 20:29:45.761605 ignition[756]: fetch: fetch passed
Feb 13 20:29:45.763647 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:29:45.761701 ignition[756]: Ignition finished successfully
Feb 13 20:29:45.771878 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:29:45.805307 ignition[763]: Ignition 2.19.0
Feb 13 20:29:45.805317 ignition[763]: Stage: kargs
Feb 13 20:29:45.805687 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:29:45.805702 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:29:45.806802 ignition[763]: kargs: kargs passed
Feb 13 20:29:45.808433 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:29:45.806875 ignition[763]: Ignition finished successfully
Feb 13 20:29:45.815115 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:29:45.860477 ignition[769]: Ignition 2.19.0
Feb 13 20:29:45.860494 ignition[769]: Stage: disks
Feb 13 20:29:45.860776 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:29:45.860790 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:29:45.863689 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:29:45.862144 ignition[769]: disks: disks passed
Feb 13 20:29:45.868210 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:29:45.862219 ignition[769]: Ignition finished successfully
Feb 13 20:29:45.869240 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:29:45.870041 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:29:45.871129 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:29:45.871966 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:29:45.888433 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:29:45.909116 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:29:45.912167 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:29:45.918908 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:29:46.059453 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 20:29:46.061156 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:29:46.062679 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:29:46.068664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:29:46.082079 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:29:46.086540 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Feb 13 20:29:46.095809 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 20:29:46.096861 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (785)
Feb 13 20:29:46.096988 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:29:46.104887 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:29:46.104924 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:29:46.104938 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:29:46.097653 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:29:46.108435 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:29:46.121627 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:29:46.123635 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:29:46.134939 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:29:46.210788 coreos-metadata[787]: Feb 13 20:29:46.210 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 20:29:46.216593 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:29:46.225471 coreos-metadata[787]: Feb 13 20:29:46.225 INFO Fetch successful
Feb 13 20:29:46.226596 coreos-metadata[788]: Feb 13 20:29:46.226 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 20:29:46.231072 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:29:46.231285 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Feb 13 20:29:46.231811 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Feb 13 20:29:46.238203 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:29:46.239150 coreos-metadata[788]: Feb 13 20:29:46.238 INFO Fetch successful
Feb 13 20:29:46.245952 coreos-metadata[788]: Feb 13 20:29:46.245 INFO wrote hostname ci-4081.3.1-6-72a75d9253 to /sysroot/etc/hostname
Feb 13 20:29:46.247023 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:29:46.251535 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:29:46.369625 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:29:46.380649 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:29:46.384722 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:29:46.396490 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:29:46.430023 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:29:46.437212 ignition[907]: INFO : Ignition 2.19.0
Feb 13 20:29:46.437212 ignition[907]: INFO : Stage: mount
Feb 13 20:29:46.438885 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:29:46.438885 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:29:46.440198 ignition[907]: INFO : mount: mount passed
Feb 13 20:29:46.440198 ignition[907]: INFO : Ignition finished successfully
Feb 13 20:29:46.440293 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:29:46.443092 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:29:46.449698 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:29:46.477762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:29:46.488470 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (919)
Feb 13 20:29:46.488561 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:29:46.489452 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:29:46.490603 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:29:46.494505 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:29:46.497430 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:29:46.539649 ignition[936]: INFO : Ignition 2.19.0
Feb 13 20:29:46.541756 ignition[936]: INFO : Stage: files
Feb 13 20:29:46.541756 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:29:46.541756 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:29:46.543328 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:29:46.544923 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:29:46.544923 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:29:46.550057 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:29:46.550829 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:29:46.551379 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:29:46.550972 unknown[936]: wrote ssh authorized keys file for user: core
Feb 13 20:29:46.553638 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:29:46.554625 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 20:29:46.610488 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:29:46.745832 systemd-networkd[753]: eth0: Gained IPv6LL
Feb 13 20:29:46.790624 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:29:46.792041 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 20:29:46.792041 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 20:29:47.257691 systemd-networkd[753]: eth1: Gained IPv6LL
Feb 13 20:29:47.298293 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 20:29:47.371350 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 20:29:47.371350 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:29:47.371350 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:29:47.371350 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:29:47.377074 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:29:47.377074 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:29:47.377074 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:29:47.377074 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:29:47.377074 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:29:47.377074 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:29:47.377074 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:29:47.377074 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:29:47.377074 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:29:47.377074 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:29:47.377074 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 20:29:47.827146 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 20:29:48.083035 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:29:48.083035 ignition[936]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 20:29:48.085613 ignition[936]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:29:48.085613 ignition[936]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:29:48.085613 ignition[936]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 20:29:48.085613 ignition[936]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:29:48.085613 ignition[936]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:29:48.085613 ignition[936]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:29:48.085613 ignition[936]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:29:48.085613 ignition[936]: INFO : files: files passed
Feb 13 20:29:48.085613 ignition[936]: INFO : Ignition finished successfully
Feb 13 20:29:48.087747 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:29:48.099744 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:29:48.103519 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:29:48.106605 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:29:48.107191 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:29:48.132447 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:29:48.132447 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:29:48.134725 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:29:48.137156 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:29:48.137832 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:29:48.142757 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:29:48.193336 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:29:48.193544 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:29:48.194719 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:29:48.195387 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:29:48.196601 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:29:48.201808 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:29:48.220033 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:29:48.224713 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:29:48.241318 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:29:48.242456 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:29:48.243587 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:29:48.244122 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:29:48.244270 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:29:48.245217 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:29:48.245710 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:29:48.246579 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:29:48.247315 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:29:48.248062 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:29:48.248695 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:29:48.249474 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:29:48.250188 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:29:48.250920 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:29:48.251708 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:29:48.252270 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:29:48.252404 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:29:48.253222 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:29:48.254142 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:29:48.254883 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:29:48.256068 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:29:48.256936 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:29:48.257077 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:29:48.258299 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:29:48.258620 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:29:48.259469 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:29:48.259574 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:29:48.260318 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 20:29:48.260459 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:29:48.276919 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:29:48.281756 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:29:48.282858 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:29:48.283642 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:29:48.284750 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:29:48.284869 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:29:48.293971 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:29:48.294734 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:29:48.306189 ignition[988]: INFO : Ignition 2.19.0
Feb 13 20:29:48.306189 ignition[988]: INFO : Stage: umount
Feb 13 20:29:48.308142 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:29:48.308142 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:29:48.314532 ignition[988]: INFO : umount: umount passed
Feb 13 20:29:48.314532 ignition[988]: INFO : Ignition finished successfully
Feb 13 20:29:48.312035 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:29:48.312175 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:29:48.315055 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:29:48.315231 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:29:48.316565 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:29:48.316632 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:29:48.317469 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:29:48.317529 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:29:48.318577 systemd[1]: Stopped target network.target - Network.
Feb 13 20:29:48.320941 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:29:48.321049 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:29:48.321449 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:29:48.321753 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:29:48.333617 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:29:48.334229 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:29:48.334606 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:29:48.334943 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:29:48.335016 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:29:48.335528 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:29:48.335590 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:29:48.336272 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:29:48.336337 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:29:48.339637 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:29:48.339704 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:29:48.348720 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:29:48.349501 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:29:48.354586 systemd-networkd[753]: eth1: DHCPv6 lease lost
Feb 13 20:29:48.356245 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:29:48.359543 systemd-networkd[753]: eth0: DHCPv6 lease lost
Feb 13 20:29:48.362227 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:29:48.363748 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:29:48.367054 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:29:48.367280 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:29:48.368460 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:29:48.368601 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:29:48.371224 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:29:48.371298 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:29:48.372196 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:29:48.372286 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:29:48.378622 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:29:48.379183 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:29:48.379305 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:29:48.379808 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:29:48.379889 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:29:48.380255 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:29:48.380302 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:29:48.382897 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:29:48.382974 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:29:48.383687 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:29:48.401077 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:29:48.401342 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:29:48.403137 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:29:48.403250 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:29:48.405149 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:29:48.405204 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:29:48.405885 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:29:48.405953 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:29:48.407181 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:29:48.407245 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:29:48.408088 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:29:48.408146 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:29:48.414925 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:29:48.417511 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:29:48.417664 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:29:48.419786 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:29:48.419865 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:29:48.422686 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:29:48.422807 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:29:48.423429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:29:48.423514 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:29:48.424748 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:29:48.425376 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:29:48.432135 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:29:48.432287 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:29:48.433982 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:29:48.440859 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:29:48.455175 systemd[1]: Switching root.
Feb 13 20:29:48.485227 systemd-journald[184]: Journal stopped
Feb 13 20:29:49.623690 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:29:49.623778 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 20:29:49.623801 kernel: SELinux: policy capability open_perms=1
Feb 13 20:29:49.623814 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 20:29:49.623832 kernel: SELinux: policy capability always_check_network=0
Feb 13 20:29:49.623847 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 20:29:49.623860 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 20:29:49.623872 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 20:29:49.623888 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 20:29:49.623900 kernel: audit: type=1403 audit(1739478588.640:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:29:49.623913 systemd[1]: Successfully loaded SELinux policy in 39.210ms.
Feb 13 20:29:49.623930 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.259ms.
Feb 13 20:29:49.623943 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:29:49.623957 systemd[1]: Detected virtualization kvm.
Feb 13 20:29:49.623972 systemd[1]: Detected architecture x86-64.
Feb 13 20:29:49.623984 systemd[1]: Detected first boot.
Feb 13 20:29:49.624001 systemd[1]: Hostname set to .
Feb 13 20:29:49.624022 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:29:49.624035 zram_generator::config[1031]: No configuration found.
Feb 13 20:29:49.624050 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:29:49.624062 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 20:29:49.624075 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 20:29:49.624087 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:29:49.624100 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:29:49.624112 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:29:49.624128 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:29:49.624141 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:29:49.624157 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:29:49.624170 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:29:49.624182 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:29:49.624196 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:29:49.624208 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:29:49.624221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:29:49.624234 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:29:49.624249 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:29:49.624264 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:29:49.624278 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:29:49.624291 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 20:29:49.624304 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:29:49.624317 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 20:29:49.624329 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 20:29:49.624345 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:29:49.624358 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:29:49.624370 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:29:49.624383 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:29:49.624396 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:29:49.626486 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:29:49.626547 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:29:49.626561 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:29:49.626584 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:29:49.626597 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:29:49.626610 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:29:49.626623 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:29:49.626636 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:29:49.626649 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:29:49.626661 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:29:49.626674 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:29:49.626686 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:29:49.626703 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:29:49.626715 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:29:49.626730 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:29:49.626743 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:29:49.626755 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:29:49.626768 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:29:49.626780 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:29:49.626793 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:29:49.626809 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:29:49.626821 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:29:49.626833 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:29:49.626846 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:29:49.626859 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:29:49.626873 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:29:49.626885 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:29:49.626898 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:29:49.626914 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:29:49.626927 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:29:49.626939 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:29:49.626952 kernel: fuse: init (API version 7.39)
Feb 13 20:29:49.626966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:29:49.626978 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:29:49.626991 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:29:49.627003 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:29:49.627016 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:29:49.627031 systemd[1]: Stopped verity-setup.service.
Feb 13 20:29:49.627044 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:29:49.627100 systemd-journald[1107]: Collecting audit messages is disabled.
Feb 13 20:29:49.627128 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:29:49.627143 systemd-journald[1107]: Journal started
Feb 13 20:29:49.627168 systemd-journald[1107]: Runtime Journal (/run/log/journal/269d4cfeed534a8a912d7fac116d4522) is 4.9M, max 39.3M, 34.4M free.
Feb 13 20:29:49.634490 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:29:49.305185 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:29:49.326241 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 20:29:49.326975 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:29:49.638543 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:29:49.640145 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:29:49.641837 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:29:49.643245 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:29:49.644024 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:29:49.646035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:29:49.647380 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:29:49.654382 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:29:49.656215 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:29:49.656591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:29:49.658193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:29:49.659525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:29:49.661625 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:29:49.661880 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:29:49.662963 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:29:49.695536 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:29:49.705846 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:29:49.711285 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:29:49.713429 kernel: loop: module loaded
Feb 13 20:29:49.734907 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:29:49.746599 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:29:49.747218 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:29:49.747277 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:29:49.751205 kernel: ACPI: bus type drm_connector registered
Feb 13 20:29:49.750646 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:29:49.768687 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:29:49.775773 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:29:49.777227 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:29:49.793985 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:29:49.796474 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:29:49.797015 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:29:49.812000 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:29:49.821830 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:29:49.828947 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 20:29:49.839730 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:29:49.844619 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:29:49.845549 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:29:49.845717 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:29:49.847308 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:29:49.847508 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:29:49.849512 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:29:49.850325 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 20:29:49.851938 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 20:29:49.854348 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 20:29:49.874789 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:29:49.884761 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 20:29:49.899835 systemd-journald[1107]: Time spent on flushing to /var/log/journal/269d4cfeed534a8a912d7fac116d4522 is 63.878ms for 997 entries.
Feb 13 20:29:49.899835 systemd-journald[1107]: System Journal (/var/log/journal/269d4cfeed534a8a912d7fac116d4522) is 8.0M, max 195.6M, 187.6M free.
Feb 13 20:29:49.992953 systemd-journald[1107]: Received client request to flush runtime journal.
Feb 13 20:29:49.993054 kernel: loop0: detected capacity change from 0 to 140768
Feb 13 20:29:49.912201 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 20:29:49.913204 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 20:29:49.923790 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 20:29:49.987666 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:29:50.003763 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 20:29:50.008054 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 20:29:50.010008 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 20:29:50.027080 udevadm[1157]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 20:29:50.046096 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 20:29:50.056849 systemd-tmpfiles[1149]: ACLs are not supported, ignoring.
Feb 13 20:29:50.058010 systemd-tmpfiles[1149]: ACLs are not supported, ignoring.
Feb 13 20:29:50.075073 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:29:50.076742 kernel: loop1: detected capacity change from 0 to 142488
Feb 13 20:29:50.088844 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 20:29:50.125125 kernel: loop2: detected capacity change from 0 to 205544
Feb 13 20:29:50.172467 kernel: loop3: detected capacity change from 0 to 8
Feb 13 20:29:50.176739 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 20:29:50.193119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:29:50.215458 kernel: loop4: detected capacity change from 0 to 140768
Feb 13 20:29:50.247796 kernel: loop5: detected capacity change from 0 to 142488
Feb 13 20:29:50.256424 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Feb 13 20:29:50.257037 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Feb 13 20:29:50.280526 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:29:50.285582 kernel: loop6: detected capacity change from 0 to 205544
Feb 13 20:29:50.320459 kernel: loop7: detected capacity change from 0 to 8
Feb 13 20:29:50.321061 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Feb 13 20:29:50.322016 (sd-merge)[1177]: Merged extensions into '/usr'.
Feb 13 20:29:50.333711 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 20:29:50.333752 systemd[1]: Reloading...
Feb 13 20:29:50.508482 zram_generator::config[1205]: No configuration found.
Feb 13 20:29:50.667440 ldconfig[1143]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 20:29:50.766272 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:29:50.834096 systemd[1]: Reloading finished in 497 ms.
Feb 13 20:29:50.882040 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 20:29:50.886965 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 20:29:50.894929 systemd[1]: Starting ensure-sysext.service...
Feb 13 20:29:50.903820 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:29:50.919520 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
Feb 13 20:29:50.919836 systemd[1]: Reloading...
Feb 13 20:29:50.964042 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 20:29:50.964708 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 20:29:50.966344 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 20:29:50.966861 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Feb 13 20:29:50.966956 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Feb 13 20:29:50.972381 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:29:50.972401 systemd-tmpfiles[1249]: Skipping /boot
Feb 13 20:29:50.992375 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:29:50.992660 systemd-tmpfiles[1249]: Skipping /boot
Feb 13 20:29:51.099448 zram_generator::config[1278]: No configuration found.
Feb 13 20:29:51.257879 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:29:51.327813 systemd[1]: Reloading finished in 407 ms.
Feb 13 20:29:51.347439 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 20:29:51.353175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:29:51.364668 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:29:51.373840 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 20:29:51.379494 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 20:29:51.388308 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:29:51.395641 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:29:51.407649 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 20:29:51.415975 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:29:51.416283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:29:51.428621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:29:51.431890 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:29:51.446832 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:29:51.449765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:29:51.449960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:29:51.456534 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 20:29:51.458079 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 20:29:51.460846 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:29:51.461011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:29:51.469698 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:29:51.470068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:29:51.470390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:29:51.481807 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 20:29:51.485524 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 20:29:51.486047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:29:51.486894 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:29:51.487750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:29:51.493313 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:29:51.495742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:29:51.501604 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:29:51.504485 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:29:51.508777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:29:51.509379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:29:51.509572 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:29:51.519342 systemd[1]: Finished ensure-sysext.service.
Feb 13 20:29:51.531768 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 20:29:51.536524 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 20:29:51.538019 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 20:29:51.546511 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:29:51.546718 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:29:51.557810 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:29:51.561873 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:29:51.562556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:29:51.564074 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:29:51.564227 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:29:51.574912 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Feb 13 20:29:51.587073 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 20:29:51.588100 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:29:51.588896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:29:51.591200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:29:51.596812 augenrules[1361]: No rules
Feb 13 20:29:51.601374 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 20:29:51.610793 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 20:29:51.627713 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:29:51.643721 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:29:51.768618 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 20:29:51.769196 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 20:29:51.779877 systemd-resolved[1325]: Positive Trust Anchors:
Feb 13 20:29:51.779908 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:29:51.779960 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:29:51.785240 systemd-networkd[1373]: lo: Link UP
Feb 13 20:29:51.787896 systemd-networkd[1373]: lo: Gained carrier
Feb 13 20:29:51.789237 systemd-networkd[1373]: Enumeration completed
Feb 13 20:29:51.789481 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:29:51.793363 systemd-resolved[1325]: Using system hostname 'ci-4081.3.1-6-72a75d9253'.
Feb 13 20:29:51.799745 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 20:29:51.801319 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:29:51.801916 systemd[1]: Reached target network.target - Network.
Feb 13 20:29:51.803539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:29:51.836960 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 20:29:51.854652 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Feb 13 20:29:51.855156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:29:51.855352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:29:51.857894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:29:51.862487 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:29:51.872123 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:29:51.872710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:29:51.872773 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 20:29:51.872801 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:29:51.908495 kernel: ISO 9660 Extensions: RRIP_1991A
Feb 13 20:29:51.912742 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Feb 13 20:29:51.915479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:29:51.917084 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:29:51.918234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:29:51.919152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:29:51.920100 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:29:51.920358 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:29:51.930555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:29:51.930728 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:29:51.942269 systemd-networkd[1373]: eth1: Configuring with /run/systemd/network/10-c2:99:54:d4:62:68.network.
Feb 13 20:29:51.944601 systemd-networkd[1373]: eth1: Link UP
Feb 13 20:29:51.944611 systemd-networkd[1373]: eth1: Gained carrier
Feb 13 20:29:51.949460 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1378)
Feb 13 20:29:51.951701 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection.
Feb 13 20:29:52.000178 systemd-networkd[1373]: eth0: Configuring with /run/systemd/network/10-86:6b:5e:97:9b:d6.network.
Feb 13 20:29:52.003622 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection.
Feb 13 20:29:52.003841 systemd-networkd[1373]: eth0: Link UP
Feb 13 20:29:52.003848 systemd-networkd[1373]: eth0: Gained carrier
Feb 13 20:29:52.008499 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection.
Feb 13 20:29:52.010370 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection.
Feb 13 20:29:52.031449 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 20:29:52.040488 kernel: ACPI: button: Power Button [PWRF]
Feb 13 20:29:52.058451 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 13 20:29:52.059005 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:29:52.066797 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 20:29:52.083447 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 20:29:52.107548 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 20:29:52.147441 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 20:29:52.166036 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:29:52.188437 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 13 20:29:52.189429 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 13 20:29:52.197504 kernel: Console: switching to colour dummy device 80x25
Feb 13 20:29:52.197605 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 13 20:29:52.197766 kernel: [drm] features: -context_init
Feb 13 20:29:52.202551 kernel: [drm] number of scanouts: 1
Feb 13 20:29:52.202645 kernel: [drm] number of cap sets: 0
Feb 13 20:29:52.207470 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Feb 13 20:29:52.219899 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 13 20:29:52.220007 kernel: Console: switching to colour frame buffer device 128x48
Feb 13 20:29:52.234471 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 13 20:29:52.235951 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:29:52.236288 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:29:52.256870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:29:52.262332 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:29:52.262870 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:29:52.272719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:29:52.401539 kernel: EDAC MC: Ver: 3.0.0
Feb 13 20:29:52.416343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:29:52.432742 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 20:29:52.443897 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 20:29:52.460699 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:29:52.496848 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 20:29:52.498452 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:29:52.498656 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:29:52.498909 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 20:29:52.499072 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 20:29:52.499747 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 20:29:52.501633 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 20:29:52.501798 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 20:29:52.501910 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 20:29:52.501948 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:29:52.502032 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:29:52.505665 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 20:29:52.508890 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 20:29:52.516505 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 20:29:52.520760 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 20:29:52.523146 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:29:52.525030 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:29:52.526727 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:29:52.527264 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:29:52.527297 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:29:52.538668 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:29:52.542792 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:29:52.549716 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:29:52.561655 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:29:52.572706 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:29:52.581939 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:29:52.585522 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:29:52.595493 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:29:52.607576 coreos-metadata[1439]: Feb 13 20:29:52.607 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:29:52.612735 jq[1443]: false Feb 13 20:29:52.609406 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:29:52.621467 coreos-metadata[1439]: Feb 13 20:29:52.620 INFO Fetch successful Feb 13 20:29:52.622257 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:29:52.632715 extend-filesystems[1444]: Found loop4 Feb 13 20:29:52.632715 extend-filesystems[1444]: Found loop5 Feb 13 20:29:52.632715 extend-filesystems[1444]: Found loop6 Feb 13 20:29:52.632715 extend-filesystems[1444]: Found loop7 Feb 13 20:29:52.632715 extend-filesystems[1444]: Found vda Feb 13 20:29:52.632715 extend-filesystems[1444]: Found vda1 Feb 13 20:29:52.649508 extend-filesystems[1444]: Found vda2 Feb 13 20:29:52.649508 extend-filesystems[1444]: Found vda3 Feb 13 20:29:52.649508 extend-filesystems[1444]: Found usr Feb 13 20:29:52.649508 extend-filesystems[1444]: Found vda4 Feb 13 20:29:52.649508 extend-filesystems[1444]: Found vda6 Feb 13 20:29:52.649508 extend-filesystems[1444]: Found vda7 Feb 13 20:29:52.649508 extend-filesystems[1444]: Found vda9 Feb 13 20:29:52.649508 extend-filesystems[1444]: Checking size of /dev/vda9 Feb 13 20:29:52.635828 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:29:52.652434 dbus-daemon[1440]: [system] SELinux support is enabled Feb 13 20:29:52.658204 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:29:52.659314 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:29:52.663802 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:29:52.669747 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:29:52.679666 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
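The coreos-metadata fetch above goes to DigitalOcean's link-local metadata service; the same JSON document the agent consumed can be retrieved by hand from inside the droplet:

    curl -s http://169.254.169.254/metadata/v1.json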
Feb 13 20:29:52.683135 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:29:52.689044 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:29:52.701892 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:29:52.704523 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:29:52.706914 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:29:52.707568 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:29:52.712081 extend-filesystems[1444]: Resized partition /dev/vda9 Feb 13 20:29:52.730451 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:29:52.735648 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:29:52.735698 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:29:52.741230 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:29:52.741372 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 20:29:52.741403 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:29:52.748103 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 20:29:52.774022 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1381) Feb 13 20:29:52.774225 jq[1456]: true Feb 13 20:29:52.791352 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:29:52.793291 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:29:52.797652 update_engine[1453]: I20250213 20:29:52.796123 1453 main.cc:92] Flatcar Update Engine starting Feb 13 20:29:52.811194 tar[1465]: linux-amd64/helm Feb 13 20:29:52.808113 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:29:52.821016 update_engine[1453]: I20250213 20:29:52.808441 1453 update_check_scheduler.cc:74] Next update check in 6m6s Feb 13 20:29:52.818498 systemd-logind[1452]: New seat seat0. Feb 13 20:29:52.818675 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:29:52.826024 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:29:52.826047 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:29:52.826793 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:29:52.857324 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 20:29:52.854066 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:29:52.869402 jq[1475]: true Feb 13 20:29:52.874303 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
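extend-filesystems is growing /dev/vda9 from 553472 to 15121403 blocks while it is mounted as the root filesystem. The on-line step it performs is equivalent to running resize2fs against the mounted ext4 device, which grows the filesystem to fill the partition without unmounting (sketch; run as root):

    # on-line grow of the mounted ext4 root to fill its partition
    resize2fs /dev/vda9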
Feb 13 20:29:52.883086 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:29:52.883086 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 20:29:52.883086 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 20:29:52.882313 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:29:52.899301 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Feb 13 20:29:52.899301 extend-filesystems[1444]: Found vdb Feb 13 20:29:52.884221 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:29:52.918720 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:29:53.036896 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:29:53.033617 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:29:53.048739 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:29:53.050817 systemd[1]: Starting sshkeys.service... Feb 13 20:29:53.093709 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:29:53.109212 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:29:53.146909 systemd-networkd[1373]: eth0: Gained IPv6LL Feb 13 20:29:53.148179 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Feb 13 20:29:53.169675 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:29:53.176091 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:29:53.187995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:29:53.200025 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:29:53.254728 coreos-metadata[1512]: Feb 13 20:29:53.254 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:29:53.271378 coreos-metadata[1512]: Feb 13 20:29:53.271 INFO Fetch successful Feb 13 20:29:53.295830 unknown[1512]: wrote ssh authorized keys file for user: core Feb 13 20:29:53.309507 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:29:53.357462 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:29:53.362694 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:29:53.375534 systemd[1]: Finished sshkeys.service. Feb 13 20:29:53.509067 containerd[1477]: time="2025-02-13T20:29:53.508894498Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:29:53.582527 containerd[1477]: time="2025-02-13T20:29:53.582448803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:29:53.593117 containerd[1477]: time="2025-02-13T20:29:53.593030999Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:29:53.593117 containerd[1477]: time="2025-02-13T20:29:53.593096872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 13 20:29:53.593117 containerd[1477]: time="2025-02-13T20:29:53.593118293Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:29:53.593851 containerd[1477]: time="2025-02-13T20:29:53.593429697Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:29:53.593851 containerd[1477]: time="2025-02-13T20:29:53.593457545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:29:53.593851 containerd[1477]: time="2025-02-13T20:29:53.593533105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:29:53.593851 containerd[1477]: time="2025-02-13T20:29:53.593545925Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:29:53.593851 containerd[1477]: time="2025-02-13T20:29:53.593773435Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:29:53.593851 containerd[1477]: time="2025-02-13T20:29:53.593790606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:29:53.593851 containerd[1477]: time="2025-02-13T20:29:53.593803581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:29:53.593851 containerd[1477]: time="2025-02-13T20:29:53.593823735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:29:53.594160 containerd[1477]: time="2025-02-13T20:29:53.593909142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:29:53.594206 containerd[1477]: time="2025-02-13T20:29:53.594158840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:29:53.594664 containerd[1477]: time="2025-02-13T20:29:53.594290331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:29:53.594664 containerd[1477]: time="2025-02-13T20:29:53.594310873Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:29:53.596519 containerd[1477]: time="2025-02-13T20:29:53.594401853Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:29:53.597007 containerd[1477]: time="2025-02-13T20:29:53.596606099Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:29:53.605513 containerd[1477]: time="2025-02-13T20:29:53.605449763Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:29:53.606109 containerd[1477]: time="2025-02-13T20:29:53.606070103Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 20:29:53.606419 containerd[1477]: time="2025-02-13T20:29:53.606166659Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:29:53.606419 containerd[1477]: time="2025-02-13T20:29:53.606242737Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:29:53.606419 containerd[1477]: time="2025-02-13T20:29:53.606267038Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:29:53.607702 containerd[1477]: time="2025-02-13T20:29:53.607490717Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:29:53.607841 containerd[1477]: time="2025-02-13T20:29:53.607821416Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:29:53.607977 containerd[1477]: time="2025-02-13T20:29:53.607959970Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:29:53.608005 containerd[1477]: time="2025-02-13T20:29:53.607981148Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:29:53.608005 containerd[1477]: time="2025-02-13T20:29:53.607997890Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:29:53.608048 containerd[1477]: time="2025-02-13T20:29:53.608013880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:29:53.608048 containerd[1477]: time="2025-02-13T20:29:53.608031088Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:29:53.608048 containerd[1477]: time="2025-02-13T20:29:53.608044131Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:29:53.608125 containerd[1477]: time="2025-02-13T20:29:53.608059723Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:29:53.608125 containerd[1477]: time="2025-02-13T20:29:53.608075751Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:29:53.608125 containerd[1477]: time="2025-02-13T20:29:53.608090249Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:29:53.608125 containerd[1477]: time="2025-02-13T20:29:53.608103185Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:29:53.608125 containerd[1477]: time="2025-02-13T20:29:53.608115294Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:29:53.608230 containerd[1477]: time="2025-02-13T20:29:53.608136865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608230 containerd[1477]: time="2025-02-13T20:29:53.608179074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608230 containerd[1477]: time="2025-02-13T20:29:53.608195211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 13 20:29:53.608230 containerd[1477]: time="2025-02-13T20:29:53.608210268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608230 containerd[1477]: time="2025-02-13T20:29:53.608224223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608342 containerd[1477]: time="2025-02-13T20:29:53.608239102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608342 containerd[1477]: time="2025-02-13T20:29:53.608252224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608342 containerd[1477]: time="2025-02-13T20:29:53.608265661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608342 containerd[1477]: time="2025-02-13T20:29:53.608279920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608342 containerd[1477]: time="2025-02-13T20:29:53.608295205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608342 containerd[1477]: time="2025-02-13T20:29:53.608307327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608342 containerd[1477]: time="2025-02-13T20:29:53.608319816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608342 containerd[1477]: time="2025-02-13T20:29:53.608333079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.608609 containerd[1477]: time="2025-02-13T20:29:53.608360445Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:29:53.608609 containerd[1477]: time="2025-02-13T20:29:53.608386626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.611983 containerd[1477]: time="2025-02-13T20:29:53.608398825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.611983 containerd[1477]: time="2025-02-13T20:29:53.611110767Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:29:53.614009 containerd[1477]: time="2025-02-13T20:29:53.613960553Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:29:53.614009 containerd[1477]: time="2025-02-13T20:29:53.614013019Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:29:53.614176 containerd[1477]: time="2025-02-13T20:29:53.614031848Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:29:53.614176 containerd[1477]: time="2025-02-13T20:29:53.614046349Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:29:53.614176 containerd[1477]: time="2025-02-13T20:29:53.614057490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 13 20:29:53.614176 containerd[1477]: time="2025-02-13T20:29:53.614074522Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:29:53.614176 containerd[1477]: time="2025-02-13T20:29:53.614087508Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:29:53.614176 containerd[1477]: time="2025-02-13T20:29:53.614157576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:29:53.614963 containerd[1477]: time="2025-02-13T20:29:53.614499080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:29:53.614963 containerd[1477]: time="2025-02-13T20:29:53.614569684Z" level=info msg="Connect containerd service" Feb 13 20:29:53.614963 containerd[1477]: time="2025-02-13T20:29:53.614627578Z" level=info msg="using legacy CRI server" Feb 13 20:29:53.614963 containerd[1477]: time="2025-02-13T20:29:53.614635618Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:29:53.614963 containerd[1477]: 
time="2025-02-13T20:29:53.614798482Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:29:53.620730 containerd[1477]: time="2025-02-13T20:29:53.620673930Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:29:53.622611 containerd[1477]: time="2025-02-13T20:29:53.622547403Z" level=info msg="Start subscribing containerd event" Feb 13 20:29:53.622611 containerd[1477]: time="2025-02-13T20:29:53.622625032Z" level=info msg="Start recovering state" Feb 13 20:29:53.622800 containerd[1477]: time="2025-02-13T20:29:53.622719461Z" level=info msg="Start event monitor" Feb 13 20:29:53.622800 containerd[1477]: time="2025-02-13T20:29:53.622740358Z" level=info msg="Start snapshots syncer" Feb 13 20:29:53.622800 containerd[1477]: time="2025-02-13T20:29:53.622750787Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:29:53.622800 containerd[1477]: time="2025-02-13T20:29:53.622761794Z" level=info msg="Start streaming server" Feb 13 20:29:53.628863 containerd[1477]: time="2025-02-13T20:29:53.628562988Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:29:53.628863 containerd[1477]: time="2025-02-13T20:29:53.628748301Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:29:53.640367 containerd[1477]: time="2025-02-13T20:29:53.636669558Z" level=info msg="containerd successfully booted in 0.129743s" Feb 13 20:29:53.636896 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:29:53.659530 systemd-networkd[1373]: eth1: Gained IPv6LL Feb 13 20:29:53.659919 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Feb 13 20:29:53.751578 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:29:53.801988 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:29:53.813059 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:29:53.843078 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:29:53.843330 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:29:53.853870 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:29:53.896293 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:29:53.909208 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:29:53.923892 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:29:53.925597 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:29:54.011298 tar[1465]: linux-amd64/LICENSE Feb 13 20:29:54.011875 tar[1465]: linux-amd64/README.md Feb 13 20:29:54.027521 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:29:54.587720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:29:54.590775 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:29:54.592110 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:29:54.593799 systemd[1]: Startup finished in 1.230s (kernel) + 5.939s (initrd) + 5.991s (userspace) = 13.161s. 
Feb 13 20:29:54.696452 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:29:54.704803 systemd[1]: Started sshd@0-64.23.133.101:22-147.75.109.163:33942.service - OpenSSH per-connection server daemon (147.75.109.163:33942). Feb 13 20:29:54.776181 sshd[1569]: Accepted publickey for core from 147.75.109.163 port 33942 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg Feb 13 20:29:54.778955 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:54.796274 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:29:54.806994 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:29:54.816741 systemd-logind[1452]: New session 1 of user core. Feb 13 20:29:54.833088 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:29:54.844013 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:29:54.851669 (systemd)[1577]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:29:54.973430 systemd[1577]: Queued start job for default target default.target. Feb 13 20:29:54.978223 systemd[1577]: Created slice app.slice - User Application Slice. Feb 13 20:29:54.978272 systemd[1577]: Reached target paths.target - Paths. Feb 13 20:29:54.978293 systemd[1577]: Reached target timers.target - Timers. Feb 13 20:29:54.981052 systemd[1577]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:29:55.019737 systemd[1577]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:29:55.019811 systemd[1577]: Reached target sockets.target - Sockets. Feb 13 20:29:55.019826 systemd[1577]: Reached target basic.target - Basic System. Feb 13 20:29:55.019877 systemd[1577]: Reached target default.target - Main User Target. Feb 13 20:29:55.019910 systemd[1577]: Startup finished in 158ms. Feb 13 20:29:55.020260 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:29:55.025694 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:29:55.106895 systemd[1]: Started sshd@1-64.23.133.101:22-147.75.109.163:33954.service - OpenSSH per-connection server daemon (147.75.109.163:33954). Feb 13 20:29:55.163562 sshd[1588]: Accepted publickey for core from 147.75.109.163 port 33954 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg Feb 13 20:29:55.165487 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:55.170936 systemd-logind[1452]: New session 2 of user core. Feb 13 20:29:55.179818 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:29:55.249938 sshd[1588]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:55.261402 systemd[1]: sshd@1-64.23.133.101:22-147.75.109.163:33954.service: Deactivated successfully. Feb 13 20:29:55.265204 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:29:55.268031 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:29:55.276670 systemd[1]: Started sshd@2-64.23.133.101:22-147.75.109.163:33970.service - OpenSSH per-connection server daemon (147.75.109.163:33970). Feb 13 20:29:55.279534 systemd-logind[1452]: Removed session 2. 
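Each inbound SSH connection above gets its own templated per-connection unit (sshd@N-LOCAL:22-REMOTE:PORT.service) under the system-sshd.slice created just before it, so a single session can be inspected or stopped without touching the listener. For the first session in the log:

    systemctl status 'sshd@0-64.23.133.101:22-147.75.109.163:33942.service'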
Feb 13 20:29:55.318800 sshd[1596]: Accepted publickey for core from 147.75.109.163 port 33970 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg Feb 13 20:29:55.321267 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:55.329400 systemd-logind[1452]: New session 3 of user core. Feb 13 20:29:55.335656 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:29:55.336305 kubelet[1563]: E0213 20:29:55.336007 1563 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:29:55.339671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:29:55.339881 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:29:55.340905 systemd[1]: kubelet.service: Consumed 1.170s CPU time. Feb 13 20:29:55.398691 sshd[1596]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:55.412395 systemd[1]: sshd@2-64.23.133.101:22-147.75.109.163:33970.service: Deactivated successfully. Feb 13 20:29:55.414374 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:29:55.415114 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:29:55.425863 systemd[1]: Started sshd@3-64.23.133.101:22-147.75.109.163:33982.service - OpenSSH per-connection server daemon (147.75.109.163:33982). Feb 13 20:29:55.427977 systemd-logind[1452]: Removed session 3. Feb 13 20:29:55.467112 sshd[1604]: Accepted publickey for core from 147.75.109.163 port 33982 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg Feb 13 20:29:55.469009 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:55.475219 systemd-logind[1452]: New session 4 of user core. Feb 13 20:29:55.483732 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:29:55.548076 sshd[1604]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:55.567726 systemd[1]: sshd@3-64.23.133.101:22-147.75.109.163:33982.service: Deactivated successfully. Feb 13 20:29:55.570258 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:29:55.571526 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:29:55.579823 systemd[1]: Started sshd@4-64.23.133.101:22-147.75.109.163:33996.service - OpenSSH per-connection server daemon (147.75.109.163:33996). Feb 13 20:29:55.582430 systemd-logind[1452]: Removed session 4. Feb 13 20:29:55.618307 sshd[1611]: Accepted publickey for core from 147.75.109.163 port 33996 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg Feb 13 20:29:55.620021 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:55.628432 systemd-logind[1452]: New session 5 of user core. Feb 13 20:29:55.635730 systemd[1]: Started session-5.scope - Session 5 of User core. 
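The kubelet failure above is the expected state before cluster bootstrap: /var/lib/kubelet/config.yaml is normally written by kubeadm during init or join, and systemd keeps restarting the unit until it appears. A minimal hand-written stand-in would begin like this (illustrative; only the systemd cgroup driver is taken from this host's later logs):

    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF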
Feb 13 20:29:55.705925 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:29:55.706312 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:29:55.721384 sudo[1614]: pam_unix(sudo:session): session closed for user root Feb 13 20:29:55.725821 sshd[1611]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:55.739768 systemd[1]: sshd@4-64.23.133.101:22-147.75.109.163:33996.service: Deactivated successfully. Feb 13 20:29:55.742139 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:29:55.744659 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:29:55.752841 systemd[1]: Started sshd@5-64.23.133.101:22-147.75.109.163:33998.service - OpenSSH per-connection server daemon (147.75.109.163:33998). Feb 13 20:29:55.754475 systemd-logind[1452]: Removed session 5. Feb 13 20:29:55.812454 sshd[1619]: Accepted publickey for core from 147.75.109.163 port 33998 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg Feb 13 20:29:55.814269 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:55.820150 systemd-logind[1452]: New session 6 of user core. Feb 13 20:29:55.829725 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:29:55.892011 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:29:55.892356 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:29:55.896699 sudo[1623]: pam_unix(sudo:session): session closed for user root Feb 13 20:29:55.904098 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:29:55.904550 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:29:55.919774 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:29:55.924282 auditctl[1626]: No rules Feb 13 20:29:55.924766 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:29:55.925006 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:29:55.931894 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:29:55.973711 augenrules[1644]: No rules Feb 13 20:29:55.974848 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:29:55.976751 sudo[1622]: pam_unix(sudo:session): session closed for user root Feb 13 20:29:55.981769 sshd[1619]: pam_unix(sshd:session): session closed for user core Feb 13 20:29:55.990557 systemd[1]: sshd@5-64.23.133.101:22-147.75.109.163:33998.service: Deactivated successfully. Feb 13 20:29:55.992839 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:29:55.994508 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:29:55.998944 systemd[1]: Started sshd@6-64.23.133.101:22-147.75.109.163:34002.service - OpenSSH per-connection server daemon (147.75.109.163:34002). Feb 13 20:29:56.000820 systemd-logind[1452]: Removed session 6. Feb 13 20:29:56.058255 sshd[1652]: Accepted publickey for core from 147.75.109.163 port 34002 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg Feb 13 20:29:56.060023 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:29:56.065996 systemd-logind[1452]: New session 7 of user core. 
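The sudo entries above amount to deleting the shipped audit rules and reloading an empty set, which is why auditctl reports "No rules" twice. The interactive equivalent, using the exact commands recorded in the log:

    sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    sudo systemctl restart audit-rules
    sudo auditctl -l    # prints: No rules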
Feb 13 20:29:56.072762 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:29:56.134929 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:29:56.135472 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:29:56.646774 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:29:56.655924 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:29:57.098475 dockerd[1670]: time="2025-02-13T20:29:57.098385448Z" level=info msg="Starting up" Feb 13 20:29:57.222667 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2577056675-merged.mount: Deactivated successfully. Feb 13 20:29:57.298347 dockerd[1670]: time="2025-02-13T20:29:57.298269747Z" level=info msg="Loading containers: start." Feb 13 20:29:57.442477 kernel: Initializing XFRM netlink socket Feb 13 20:29:57.478522 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Feb 13 20:29:57.481342 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Feb 13 20:29:57.491516 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Feb 13 20:29:57.550910 systemd-networkd[1373]: docker0: Link UP Feb 13 20:29:57.552201 systemd-timesyncd[1350]: Network configuration changed, trying to establish connection. Feb 13 20:29:57.568750 dockerd[1670]: time="2025-02-13T20:29:57.568384910Z" level=info msg="Loading containers: done." Feb 13 20:29:57.589687 dockerd[1670]: time="2025-02-13T20:29:57.589086332Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:29:57.589687 dockerd[1670]: time="2025-02-13T20:29:57.589240009Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:29:57.589687 dockerd[1670]: time="2025-02-13T20:29:57.589403864Z" level=info msg="Daemon has completed initialization" Feb 13 20:29:57.590304 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1493680975-merged.mount: Deactivated successfully. Feb 13 20:29:57.626522 dockerd[1670]: time="2025-02-13T20:29:57.626298127Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:29:57.626956 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:29:58.568914 containerd[1477]: time="2025-02-13T20:29:58.568824110Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 20:29:59.134286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922334548.mount: Deactivated successfully. 
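dockerd's overlay2 warning above keys off a single kernel option. Whether the running kernel sets it, and which storage driver the daemon settled on, can be checked with the following (assuming this kernel exposes /proc/config.gz, which not every build does):

    zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz
    docker info --format '{{.Driver}}'    # expect: overlay2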
Feb 13 20:30:00.304642 containerd[1477]: time="2025-02-13T20:30:00.304553316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:00.307842 containerd[1477]: time="2025-02-13T20:30:00.307743910Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976588" Feb 13 20:30:00.309224 containerd[1477]: time="2025-02-13T20:30:00.309092344Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:00.314613 containerd[1477]: time="2025-02-13T20:30:00.314502146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:00.317014 containerd[1477]: time="2025-02-13T20:30:00.316709100Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 1.747792205s" Feb 13 20:30:00.317014 containerd[1477]: time="2025-02-13T20:30:00.316786573Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 20:30:00.320565 containerd[1477]: time="2025-02-13T20:30:00.320166489Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 20:30:03.104501 containerd[1477]: time="2025-02-13T20:30:03.104395862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:03.112330 containerd[1477]: time="2025-02-13T20:30:03.112229218Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708193" Feb 13 20:30:03.117379 containerd[1477]: time="2025-02-13T20:30:03.114209274Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:03.120500 containerd[1477]: time="2025-02-13T20:30:03.119922078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:03.124051 containerd[1477]: time="2025-02-13T20:30:03.122651763Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 2.802417099s" Feb 13 20:30:03.124051 containerd[1477]: time="2025-02-13T20:30:03.122720281Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 20:30:03.124645 
containerd[1477]: time="2025-02-13T20:30:03.124601604Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 20:30:05.087050 containerd[1477]: time="2025-02-13T20:30:05.086957935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:05.089991 containerd[1477]: time="2025-02-13T20:30:05.089884036Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652425" Feb 13 20:30:05.091281 containerd[1477]: time="2025-02-13T20:30:05.091219078Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:05.099032 containerd[1477]: time="2025-02-13T20:30:05.098959266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:05.101337 containerd[1477]: time="2025-02-13T20:30:05.101085210Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.976269528s" Feb 13 20:30:05.101337 containerd[1477]: time="2025-02-13T20:30:05.101170556Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 20:30:05.103424 containerd[1477]: time="2025-02-13T20:30:05.103268178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 20:30:05.105431 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Feb 13 20:30:05.563699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:30:05.579466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:30:05.815836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:30:05.834587 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:30:05.984458 kubelet[1884]: E0213 20:30:05.984314 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:30:05.989802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:30:05.990512 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:30:06.591686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677790839.mount: Deactivated successfully. 
Feb 13 20:30:07.538334 containerd[1477]: time="2025-02-13T20:30:07.536931771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:07.538334 containerd[1477]: time="2025-02-13T20:30:07.538253930Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108" Feb 13 20:30:07.539266 containerd[1477]: time="2025-02-13T20:30:07.539215636Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:07.543024 containerd[1477]: time="2025-02-13T20:30:07.542961226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:07.544771 containerd[1477]: time="2025-02-13T20:30:07.544679594Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.441339592s" Feb 13 20:30:07.545025 containerd[1477]: time="2025-02-13T20:30:07.544993910Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 20:30:07.546467 containerd[1477]: time="2025-02-13T20:30:07.546373144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:30:08.053548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826856606.mount: Deactivated successfully. Feb 13 20:30:08.188068 systemd-resolved[1325]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
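systemd-resolved dropping to plain UDP for 67.207.67.3 and 67.207.67.2 typically indicates the upstream servers did not answer its EDNS0 probes cleanly, so it falls back to a smaller DNS feature set per server. The negotiated per-link state survives for inspection via:

    resolvectl status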
Feb 13 20:30:09.304758 containerd[1477]: time="2025-02-13T20:30:09.304692892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:09.307191 containerd[1477]: time="2025-02-13T20:30:09.307118649Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 20:30:09.308373 containerd[1477]: time="2025-02-13T20:30:09.308300042Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:09.320660 containerd[1477]: time="2025-02-13T20:30:09.320576617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:09.323578 containerd[1477]: time="2025-02-13T20:30:09.322940348Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.776491444s" Feb 13 20:30:09.323578 containerd[1477]: time="2025-02-13T20:30:09.323006607Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 20:30:09.324105 containerd[1477]: time="2025-02-13T20:30:09.324045089Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:30:09.833893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627464651.mount: Deactivated successfully. 
Feb 13 20:30:09.840461 containerd[1477]: time="2025-02-13T20:30:09.839632223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:09.841650 containerd[1477]: time="2025-02-13T20:30:09.841352453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 20:30:09.843054 containerd[1477]: time="2025-02-13T20:30:09.842999786Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:09.846525 containerd[1477]: time="2025-02-13T20:30:09.846469959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:09.847497 containerd[1477]: time="2025-02-13T20:30:09.847247606Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 523.01585ms" Feb 13 20:30:09.847497 containerd[1477]: time="2025-02-13T20:30:09.847315961Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 20:30:09.848759 containerd[1477]: time="2025-02-13T20:30:09.848697037Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 20:30:10.424964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754688351.mount: Deactivated successfully. Feb 13 20:30:12.477770 containerd[1477]: time="2025-02-13T20:30:12.477693977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:12.480221 containerd[1477]: time="2025-02-13T20:30:12.480136783Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Feb 13 20:30:12.481089 containerd[1477]: time="2025-02-13T20:30:12.480996538Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:12.489014 containerd[1477]: time="2025-02-13T20:30:12.488117939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:12.490243 containerd[1477]: time="2025-02-13T20:30:12.490174939Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.641407407s" Feb 13 20:30:12.490449 containerd[1477]: time="2025-02-13T20:30:12.490243511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 20:30:15.361507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
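The control-plane images pulled above went through containerd rather than docker, so they land in containerd's Kubernetes image namespace. ctr, the debugging CLI that ships alongside containerd, can list them (the k8s.io namespace is the conventional one for CRI-managed images):

    ctr -n k8s.io images ls -q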
Feb 13 20:30:15.375883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:30:15.424462 systemd[1]: Reloading requested from client PID 2027 ('systemctl') (unit session-7.scope)... Feb 13 20:30:15.424485 systemd[1]: Reloading... Feb 13 20:30:15.585515 zram_generator::config[2072]: No configuration found. Feb 13 20:30:15.744439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:30:15.854647 systemd[1]: Reloading finished in 429 ms. Feb 13 20:30:15.921577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:30:15.929824 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:30:15.930812 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:30:15.931120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:30:15.936879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:30:16.089837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:30:16.090299 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:30:16.163858 kubelet[2122]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:30:16.164469 kubelet[2122]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:30:16.164532 kubelet[2122]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
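The three deprecation warnings above all point the same way: those kubelet flags now belong in the config file named by --config. As a sketch, the flag-to-field mapping for the first warning, with the socket path taken from the containerd "serving..." line earlier in this log:

    # /var/lib/kubelet/config.yaml (fragment, illustrative)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock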
Feb 13 20:30:16.166287 kubelet[2122]: I0213 20:30:16.166086 2122 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:30:16.576467 kubelet[2122]: I0213 20:30:16.576395 2122 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:30:16.577463 kubelet[2122]: I0213 20:30:16.576736 2122 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:30:16.577463 kubelet[2122]: I0213 20:30:16.577120 2122 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:30:16.604765 kubelet[2122]: I0213 20:30:16.604687 2122 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:30:16.605167 kubelet[2122]: E0213 20:30:16.605131 2122 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.23.133.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.133.101:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:30:16.615904 kubelet[2122]: E0213 20:30:16.615853 2122 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:30:16.616158 kubelet[2122]: I0213 20:30:16.616143 2122 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:30:16.623828 kubelet[2122]: I0213 20:30:16.623792 2122 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:30:16.625458 kubelet[2122]: I0213 20:30:16.625419 2122 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:30:16.626447 kubelet[2122]: I0213 20:30:16.625883 2122 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:30:16.626447 kubelet[2122]: I0213 20:30:16.625933 2122 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.1-6-72a75d9253","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:30:16.626447 kubelet[2122]: I0213 20:30:16.626180 2122 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:30:16.626447 kubelet[2122]: I0213 20:30:16.626194 2122 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:30:16.626730 kubelet[2122]: I0213 20:30:16.626347 2122 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:30:16.628781 kubelet[2122]: I0213 20:30:16.628749 2122 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:30:16.628971 kubelet[2122]: I0213 20:30:16.628957 2122 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:30:16.629075 kubelet[2122]: I0213 20:30:16.629067 2122 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:30:16.629165 kubelet[2122]: I0213 20:30:16.629154 2122 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:30:16.632622 kubelet[2122]: W0213 20:30:16.632535 2122 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.133.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-6-72a75d9253&limit=500&resourceVersion=0": dial tcp 64.23.133.101:6443: connect: connection refused Feb 13 20:30:16.632771 kubelet[2122]: E0213 20:30:16.632652 2122 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://64.23.133.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-6-72a75d9253&limit=500&resourceVersion=0\": dial tcp 64.23.133.101:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:30:16.634443 kubelet[2122]: W0213 20:30:16.634341 2122 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.133.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.133.101:6443: connect: connection refused Feb 13 20:30:16.634741 kubelet[2122]: E0213 20:30:16.634586 2122 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.133.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.133.101:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:30:16.636204 kubelet[2122]: I0213 20:30:16.636024 2122 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:30:16.638148 kubelet[2122]: I0213 20:30:16.638120 2122 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:30:16.639456 kubelet[2122]: W0213 20:30:16.639165 2122 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:30:16.641439 kubelet[2122]: I0213 20:30:16.640027 2122 server.go:1269] "Started kubelet" Feb 13 20:30:16.644924 kubelet[2122]: I0213 20:30:16.644839 2122 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:30:16.646362 kubelet[2122]: I0213 20:30:16.646332 2122 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:30:16.649356 kubelet[2122]: I0213 20:30:16.649166 2122 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:30:16.649661 kubelet[2122]: I0213 20:30:16.649639 2122 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:30:16.654542 kubelet[2122]: I0213 20:30:16.654495 2122 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:30:16.655821 kubelet[2122]: E0213 20:30:16.652092 2122 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.133.101:6443/api/v1/namespaces/default/events\": dial tcp 64.23.133.101:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.1-6-72a75d9253.1823de95ef36f9b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.1-6-72a75d9253,UID:ci-4081.3.1-6-72a75d9253,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.1-6-72a75d9253,},FirstTimestamp:2025-02-13 20:30:16.63999429 +0000 UTC m=+0.532541340,LastTimestamp:2025-02-13 20:30:16.63999429 +0000 UTC m=+0.532541340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.1-6-72a75d9253,}" Feb 13 20:30:16.660941 kubelet[2122]: I0213 20:30:16.660771 2122 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:30:16.665079 kubelet[2122]: I0213 20:30:16.665036 2122 
volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:30:16.669443 kubelet[2122]: E0213 20:30:16.667845 2122 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-6-72a75d9253\" not found" Feb 13 20:30:16.669443 kubelet[2122]: I0213 20:30:16.668237 2122 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:30:16.669443 kubelet[2122]: I0213 20:30:16.668329 2122 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:30:16.669443 kubelet[2122]: I0213 20:30:16.668582 2122 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:30:16.669443 kubelet[2122]: I0213 20:30:16.668713 2122 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:30:16.669443 kubelet[2122]: E0213 20:30:16.668779 2122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.133.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-6-72a75d9253?timeout=10s\": dial tcp 64.23.133.101:6443: connect: connection refused" interval="200ms" Feb 13 20:30:16.670444 kubelet[2122]: W0213 20:30:16.669391 2122 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.133.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.133.101:6443: connect: connection refused Feb 13 20:30:16.671203 kubelet[2122]: E0213 20:30:16.671166 2122 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.133.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.133.101:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:30:16.671899 kubelet[2122]: E0213 20:30:16.671542 2122 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:30:16.672053 kubelet[2122]: I0213 20:30:16.672013 2122 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:30:16.693042 kubelet[2122]: I0213 20:30:16.691164 2122 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:30:16.693042 kubelet[2122]: I0213 20:30:16.692996 2122 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:30:16.693042 kubelet[2122]: I0213 20:30:16.693050 2122 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:30:16.693390 kubelet[2122]: I0213 20:30:16.693084 2122 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:30:16.693390 kubelet[2122]: E0213 20:30:16.693152 2122 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:30:16.705108 kubelet[2122]: W0213 20:30:16.704693 2122 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.133.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.133.101:6443: connect: connection refused Feb 13 20:30:16.705108 kubelet[2122]: E0213 20:30:16.704824 2122 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.133.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.133.101:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:30:16.711100 kubelet[2122]: I0213 20:30:16.711031 2122 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:30:16.711100 kubelet[2122]: I0213 20:30:16.711051 2122 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:30:16.711424 kubelet[2122]: I0213 20:30:16.711175 2122 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:30:16.714106 kubelet[2122]: I0213 20:30:16.713623 2122 policy_none.go:49] "None policy: Start" Feb 13 20:30:16.715169 kubelet[2122]: I0213 20:30:16.715151 2122 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:30:16.715362 kubelet[2122]: I0213 20:30:16.715308 2122 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:30:16.728027 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:30:16.740162 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:30:16.744321 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:30:16.753677 kubelet[2122]: I0213 20:30:16.753631 2122 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:30:16.753949 kubelet[2122]: I0213 20:30:16.753935 2122 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:30:16.754002 kubelet[2122]: I0213 20:30:16.753952 2122 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:30:16.755031 kubelet[2122]: I0213 20:30:16.754551 2122 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:30:16.757980 kubelet[2122]: E0213 20:30:16.757826 2122 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.1-6-72a75d9253\" not found" Feb 13 20:30:16.804276 systemd[1]: Created slice kubepods-burstable-podb0341636e49ff202d81e104b13261fe2.slice - libcontainer container kubepods-burstable-podb0341636e49ff202d81e104b13261fe2.slice. Feb 13 20:30:16.818737 systemd[1]: Created slice kubepods-burstable-pod835370d479d609a18a08b8e7432e21be.slice - libcontainer container kubepods-burstable-pod835370d479d609a18a08b8e7432e21be.slice. 
Feb 13 20:30:16.831084 systemd[1]: Created slice kubepods-burstable-podd16a4adb40621657dad59a1d1595adc2.slice - libcontainer container kubepods-burstable-podd16a4adb40621657dad59a1d1595adc2.slice. Feb 13 20:30:16.856634 kubelet[2122]: I0213 20:30:16.856119 2122 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:16.857107 kubelet[2122]: E0213 20:30:16.857068 2122 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.133.101:6443/api/v1/nodes\": dial tcp 64.23.133.101:6443: connect: connection refused" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:16.869935 kubelet[2122]: I0213 20:30:16.869654 2122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d16a4adb40621657dad59a1d1595adc2-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-6-72a75d9253\" (UID: \"d16a4adb40621657dad59a1d1595adc2\") " pod="kube-system/kube-scheduler-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:16.869935 kubelet[2122]: I0213 20:30:16.869699 2122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/835370d479d609a18a08b8e7432e21be-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-6-72a75d9253\" (UID: \"835370d479d609a18a08b8e7432e21be\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:16.869935 kubelet[2122]: I0213 20:30:16.869732 2122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/835370d479d609a18a08b8e7432e21be-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-6-72a75d9253\" (UID: \"835370d479d609a18a08b8e7432e21be\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:16.869935 kubelet[2122]: I0213 20:30:16.869751 2122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/835370d479d609a18a08b8e7432e21be-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-6-72a75d9253\" (UID: \"835370d479d609a18a08b8e7432e21be\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:16.869935 kubelet[2122]: I0213 20:30:16.869768 2122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/835370d479d609a18a08b8e7432e21be-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-6-72a75d9253\" (UID: \"835370d479d609a18a08b8e7432e21be\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:16.870299 kubelet[2122]: I0213 20:30:16.869788 2122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b0341636e49ff202d81e104b13261fe2-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-6-72a75d9253\" (UID: \"b0341636e49ff202d81e104b13261fe2\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:16.870299 kubelet[2122]: I0213 20:30:16.869802 2122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b0341636e49ff202d81e104b13261fe2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-6-72a75d9253\" (UID: 
\"b0341636e49ff202d81e104b13261fe2\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:16.870299 kubelet[2122]: E0213 20:30:16.869789 2122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.133.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-6-72a75d9253?timeout=10s\": dial tcp 64.23.133.101:6443: connect: connection refused" interval="400ms" Feb 13 20:30:16.870299 kubelet[2122]: I0213 20:30:16.869817 2122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b0341636e49ff202d81e104b13261fe2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-6-72a75d9253\" (UID: \"b0341636e49ff202d81e104b13261fe2\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:16.870299 kubelet[2122]: I0213 20:30:16.869875 2122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/835370d479d609a18a08b8e7432e21be-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-6-72a75d9253\" (UID: \"835370d479d609a18a08b8e7432e21be\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:17.059367 kubelet[2122]: I0213 20:30:17.058870 2122 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:17.059367 kubelet[2122]: E0213 20:30:17.059276 2122 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.133.101:6443/api/v1/nodes\": dial tcp 64.23.133.101:6443: connect: connection refused" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:17.118518 kubelet[2122]: E0213 20:30:17.116738 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:17.118670 containerd[1477]: time="2025-02-13T20:30:17.118059602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-6-72a75d9253,Uid:b0341636e49ff202d81e104b13261fe2,Namespace:kube-system,Attempt:0,}" Feb 13 20:30:17.120774 systemd-resolved[1325]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Feb 13 20:30:17.127891 kubelet[2122]: E0213 20:30:17.127839 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:17.135555 kubelet[2122]: E0213 20:30:17.134957 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:17.138709 containerd[1477]: time="2025-02-13T20:30:17.138211409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-6-72a75d9253,Uid:835370d479d609a18a08b8e7432e21be,Namespace:kube-system,Attempt:0,}" Feb 13 20:30:17.138709 containerd[1477]: time="2025-02-13T20:30:17.138526292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-6-72a75d9253,Uid:d16a4adb40621657dad59a1d1595adc2,Namespace:kube-system,Attempt:0,}" Feb 13 20:30:17.270786 kubelet[2122]: E0213 20:30:17.270722 2122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.133.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-6-72a75d9253?timeout=10s\": dial tcp 64.23.133.101:6443: connect: connection refused" interval="800ms" Feb 13 20:30:17.461578 kubelet[2122]: I0213 20:30:17.461439 2122 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:17.462005 kubelet[2122]: E0213 20:30:17.461953 2122 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.133.101:6443/api/v1/nodes\": dial tcp 64.23.133.101:6443: connect: connection refused" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:17.614681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994996470.mount: Deactivated successfully. 
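The repeated "Nameserver limits exceeded" errors above are kubelet enforcing the classic resolver limit of three nameservers when it assembles resolv.conf contents for pods; the host's resolver configuration evidently yields more entries than that, so the surplus is dropped and the first three are applied. The standalone sketch below illustrates that check — the limit of 3 is the real constraint, while the parsing and file path are illustrative.

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    // maxNameservers mirrors the glibc-era resolver limit that kubelet
    // enforces, producing the "Nameserver limits exceeded" warning above.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded, keeping first %d of %v\n",
                maxNameservers, servers)
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameservers:", servers)
    }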
Feb 13 20:30:17.620710 containerd[1477]: time="2025-02-13T20:30:17.620621301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:30:17.621901 containerd[1477]: time="2025-02-13T20:30:17.621845016Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:30:17.623143 containerd[1477]: time="2025-02-13T20:30:17.623094914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:30:17.623369 containerd[1477]: time="2025-02-13T20:30:17.623145876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:30:17.625443 containerd[1477]: time="2025-02-13T20:30:17.624295915Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:30:17.628428 containerd[1477]: time="2025-02-13T20:30:17.628359447Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:30:17.628627 containerd[1477]: time="2025-02-13T20:30:17.628596094Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:30:17.629891 containerd[1477]: time="2025-02-13T20:30:17.629857145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:30:17.630880 containerd[1477]: time="2025-02-13T20:30:17.630847374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.484389ms" Feb 13 20:30:17.633879 containerd[1477]: time="2025-02-13T20:30:17.633674207Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 515.499016ms" Feb 13 20:30:17.636168 containerd[1477]: time="2025-02-13T20:30:17.636112332Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.483875ms" Feb 13 20:30:17.810822 containerd[1477]: time="2025-02-13T20:30:17.810518918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:30:17.810822 containerd[1477]: time="2025-02-13T20:30:17.810639115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:30:17.811167 containerd[1477]: time="2025-02-13T20:30:17.810662279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:17.812934 containerd[1477]: time="2025-02-13T20:30:17.812817937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:17.821592 containerd[1477]: time="2025-02-13T20:30:17.820236928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:30:17.823386 containerd[1477]: time="2025-02-13T20:30:17.823246017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:30:17.823386 containerd[1477]: time="2025-02-13T20:30:17.823288691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:17.823739 containerd[1477]: time="2025-02-13T20:30:17.823443566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:17.828643 containerd[1477]: time="2025-02-13T20:30:17.827044749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:30:17.828643 containerd[1477]: time="2025-02-13T20:30:17.827168753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:30:17.828643 containerd[1477]: time="2025-02-13T20:30:17.827186702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:17.828643 containerd[1477]: time="2025-02-13T20:30:17.827321714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:17.852954 systemd[1]: Started cri-containerd-2f7271f3b0fbe569299287cab51b73f75486cadf50d2d581a51e443ea5e9e07c.scope - libcontainer container 2f7271f3b0fbe569299287cab51b73f75486cadf50d2d581a51e443ea5e9e07c. Feb 13 20:30:17.865612 systemd[1]: Started cri-containerd-8ea1fe753296966f63111ba2ae787edbf8260f081df2c252a8eae1351ac23b47.scope - libcontainer container 8ea1fe753296966f63111ba2ae787edbf8260f081df2c252a8eae1351ac23b47. Feb 13 20:30:17.879778 systemd[1]: Started cri-containerd-0102f9317f2410fceb885190b52774c3e585ee11bbffd8e85110341fdbe45f4e.scope - libcontainer container 0102f9317f2410fceb885190b52774c3e585ee11bbffd8e85110341fdbe45f4e. 
Feb 13 20:30:17.928842 kubelet[2122]: W0213 20:30:17.928790 2122 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.23.133.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.23.133.101:6443: connect: connection refused Feb 13 20:30:17.928842 kubelet[2122]: E0213 20:30:17.928850 2122 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.23.133.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.133.101:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:30:17.976677 containerd[1477]: time="2025-02-13T20:30:17.976625392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.1-6-72a75d9253,Uid:835370d479d609a18a08b8e7432e21be,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f7271f3b0fbe569299287cab51b73f75486cadf50d2d581a51e443ea5e9e07c\"" Feb 13 20:30:17.980970 kubelet[2122]: E0213 20:30:17.980270 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:17.985197 containerd[1477]: time="2025-02-13T20:30:17.981682947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.1-6-72a75d9253,Uid:d16a4adb40621657dad59a1d1595adc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ea1fe753296966f63111ba2ae787edbf8260f081df2c252a8eae1351ac23b47\"" Feb 13 20:30:17.985400 kubelet[2122]: E0213 20:30:17.984110 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:17.990235 containerd[1477]: time="2025-02-13T20:30:17.990183654Z" level=info msg="CreateContainer within sandbox \"2f7271f3b0fbe569299287cab51b73f75486cadf50d2d581a51e443ea5e9e07c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:30:17.990630 containerd[1477]: time="2025-02-13T20:30:17.990602958Z" level=info msg="CreateContainer within sandbox \"8ea1fe753296966f63111ba2ae787edbf8260f081df2c252a8eae1351ac23b47\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:30:18.001939 containerd[1477]: time="2025-02-13T20:30:18.001892932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.1-6-72a75d9253,Uid:b0341636e49ff202d81e104b13261fe2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0102f9317f2410fceb885190b52774c3e585ee11bbffd8e85110341fdbe45f4e\"" Feb 13 20:30:18.005045 kubelet[2122]: E0213 20:30:18.005013 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:18.010143 containerd[1477]: time="2025-02-13T20:30:18.010091742Z" level=info msg="CreateContainer within sandbox \"0102f9317f2410fceb885190b52774c3e585ee11bbffd8e85110341fdbe45f4e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:30:18.035583 containerd[1477]: time="2025-02-13T20:30:18.035383278Z" level=info msg="CreateContainer within sandbox \"2f7271f3b0fbe569299287cab51b73f75486cadf50d2d581a51e443ea5e9e07c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"6dfb08c916ea39cb9751c2c4ee3013c7610d362c0076c47a2ddad7f0b2d35c3a\"" Feb 13 20:30:18.036511 kubelet[2122]: W0213 20:30:18.036111 2122 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.23.133.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-6-72a75d9253&limit=500&resourceVersion=0": dial tcp 64.23.133.101:6443: connect: connection refused Feb 13 20:30:18.036511 kubelet[2122]: E0213 20:30:18.036222 2122 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.23.133.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.1-6-72a75d9253&limit=500&resourceVersion=0\": dial tcp 64.23.133.101:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:30:18.037517 containerd[1477]: time="2025-02-13T20:30:18.037466798Z" level=info msg="StartContainer for \"6dfb08c916ea39cb9751c2c4ee3013c7610d362c0076c47a2ddad7f0b2d35c3a\"" Feb 13 20:30:18.042786 containerd[1477]: time="2025-02-13T20:30:18.041896679Z" level=info msg="CreateContainer within sandbox \"8ea1fe753296966f63111ba2ae787edbf8260f081df2c252a8eae1351ac23b47\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f31518ffad00b1cb85743234e013cbf890390c82c8e622c6f07eaf7931aa80e\"" Feb 13 20:30:18.042786 containerd[1477]: time="2025-02-13T20:30:18.042358611Z" level=info msg="CreateContainer within sandbox \"0102f9317f2410fceb885190b52774c3e585ee11bbffd8e85110341fdbe45f4e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c344eebdac51f52dc38609c6f041491897c2fb5f54d9b35a822fa2250ccf5f1b\"" Feb 13 20:30:18.043895 containerd[1477]: time="2025-02-13T20:30:18.043859554Z" level=info msg="StartContainer for \"c344eebdac51f52dc38609c6f041491897c2fb5f54d9b35a822fa2250ccf5f1b\"" Feb 13 20:30:18.044156 containerd[1477]: time="2025-02-13T20:30:18.044121987Z" level=info msg="StartContainer for \"9f31518ffad00b1cb85743234e013cbf890390c82c8e622c6f07eaf7931aa80e\"" Feb 13 20:30:18.072918 kubelet[2122]: E0213 20:30:18.072839 2122 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.133.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.1-6-72a75d9253?timeout=10s\": dial tcp 64.23.133.101:6443: connect: connection refused" interval="1.6s" Feb 13 20:30:18.092871 systemd[1]: Started cri-containerd-6dfb08c916ea39cb9751c2c4ee3013c7610d362c0076c47a2ddad7f0b2d35c3a.scope - libcontainer container 6dfb08c916ea39cb9751c2c4ee3013c7610d362c0076c47a2ddad7f0b2d35c3a. Feb 13 20:30:18.109689 systemd[1]: Started cri-containerd-9f31518ffad00b1cb85743234e013cbf890390c82c8e622c6f07eaf7931aa80e.scope - libcontainer container 9f31518ffad00b1cb85743234e013cbf890390c82c8e622c6f07eaf7931aa80e. Feb 13 20:30:18.121150 systemd[1]: Started cri-containerd-c344eebdac51f52dc38609c6f041491897c2fb5f54d9b35a822fa2250ccf5f1b.scope - libcontainer container c344eebdac51f52dc38609c6f041491897c2fb5f54d9b35a822fa2250ccf5f1b. 
Feb 13 20:30:18.199045 kubelet[2122]: W0213 20:30:18.198807 2122 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.23.133.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.23.133.101:6443: connect: connection refused Feb 13 20:30:18.199741 kubelet[2122]: E0213 20:30:18.199002 2122 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.23.133.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.133.101:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:30:18.200784 containerd[1477]: time="2025-02-13T20:30:18.200717808Z" level=info msg="StartContainer for \"6dfb08c916ea39cb9751c2c4ee3013c7610d362c0076c47a2ddad7f0b2d35c3a\" returns successfully" Feb 13 20:30:18.223015 kubelet[2122]: W0213 20:30:18.221588 2122 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.23.133.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.23.133.101:6443: connect: connection refused Feb 13 20:30:18.223015 kubelet[2122]: E0213 20:30:18.221703 2122 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.23.133.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.133.101:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:30:18.230457 containerd[1477]: time="2025-02-13T20:30:18.228534501Z" level=info msg="StartContainer for \"c344eebdac51f52dc38609c6f041491897c2fb5f54d9b35a822fa2250ccf5f1b\" returns successfully" Feb 13 20:30:18.243918 containerd[1477]: time="2025-02-13T20:30:18.243738187Z" level=info msg="StartContainer for \"9f31518ffad00b1cb85743234e013cbf890390c82c8e622c6f07eaf7931aa80e\" returns successfully" Feb 13 20:30:18.264855 kubelet[2122]: I0213 20:30:18.264698 2122 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:18.266133 kubelet[2122]: E0213 20:30:18.265964 2122 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.23.133.101:6443/api/v1/nodes\": dial tcp 64.23.133.101:6443: connect: connection refused" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:18.719916 kubelet[2122]: E0213 20:30:18.719778 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:18.727036 kubelet[2122]: E0213 20:30:18.725368 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:18.732672 kubelet[2122]: E0213 20:30:18.732391 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:19.732494 kubelet[2122]: E0213 20:30:19.732353 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:19.734553 kubelet[2122]: E0213 20:30:19.734515 2122 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:19.867270 kubelet[2122]: I0213 20:30:19.867235 2122 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:20.440275 kubelet[2122]: E0213 20:30:20.440231 2122 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.1-6-72a75d9253\" not found" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:20.484584 kubelet[2122]: I0213 20:30:20.484532 2122 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:20.638440 kubelet[2122]: I0213 20:30:20.636790 2122 apiserver.go:52] "Watching apiserver" Feb 13 20:30:20.668806 kubelet[2122]: I0213 20:30:20.668741 2122 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:30:23.062901 systemd[1]: Reloading requested from client PID 2396 ('systemctl') (unit session-7.scope)... Feb 13 20:30:23.062921 systemd[1]: Reloading... Feb 13 20:30:23.192645 zram_generator::config[2441]: No configuration found. Feb 13 20:30:23.348568 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:30:23.485265 systemd[1]: Reloading finished in 421 ms. Feb 13 20:30:23.554505 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:30:23.569472 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:30:23.569753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:30:23.569855 systemd[1]: kubelet.service: Consumed 1.006s CPU time, 112.7M memory peak, 0B memory swap peak. Feb 13 20:30:23.581898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:30:23.743683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:30:23.752300 (kubelet)[2487]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:30:23.851779 kubelet[2487]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:30:23.851779 kubelet[2487]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:30:23.851779 kubelet[2487]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 20:30:23.853373 kubelet[2487]: I0213 20:30:23.853298 2487 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:30:23.869053 kubelet[2487]: I0213 20:30:23.868960 2487 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:30:23.869053 kubelet[2487]: I0213 20:30:23.869006 2487 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:30:23.869525 kubelet[2487]: I0213 20:30:23.869500 2487 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:30:23.874944 kubelet[2487]: I0213 20:30:23.874896 2487 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:30:23.883936 kubelet[2487]: I0213 20:30:23.883710 2487 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:30:23.891783 kubelet[2487]: E0213 20:30:23.891565 2487 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:30:23.893506 kubelet[2487]: I0213 20:30:23.892057 2487 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:30:23.895614 kubelet[2487]: I0213 20:30:23.895546 2487 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:30:23.895845 kubelet[2487]: I0213 20:30:23.895776 2487 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:30:23.896031 kubelet[2487]: I0213 20:30:23.895963 2487 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:30:23.896723 kubelet[2487]: I0213 20:30:23.896077 2487 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.1-6-72a75d9253","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:30:23.897030 kubelet[2487]: I0213 20:30:23.896744 2487 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:30:23.897030 kubelet[2487]: I0213 20:30:23.896764 2487 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:30:23.897030 kubelet[2487]: I0213 20:30:23.896827 2487 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:30:23.897030 kubelet[2487]: I0213 20:30:23.896991 2487 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:30:23.897030 kubelet[2487]: I0213 20:30:23.897009 2487 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:30:23.899021 kubelet[2487]: I0213 20:30:23.897052 2487 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:30:23.899021 kubelet[2487]: I0213 20:30:23.897083 2487 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:30:23.899790 kubelet[2487]: I0213 20:30:23.899755 2487 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:30:23.900521 kubelet[2487]: I0213 20:30:23.900491 2487 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:30:23.901193 kubelet[2487]: I0213 20:30:23.901162 2487 server.go:1269] "Started kubelet" Feb 13 20:30:23.912777 kubelet[2487]: I0213 20:30:23.912736 2487 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:30:23.929604 kubelet[2487]: I0213 20:30:23.929512 2487 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:30:23.930263 kubelet[2487]: I0213 20:30:23.930233 2487 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:30:23.953599 kubelet[2487]: I0213 20:30:23.930866 2487 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:30:23.953954 kubelet[2487]: I0213 20:30:23.933288 2487 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:30:23.954460 kubelet[2487]: I0213 20:30:23.933308 2487 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:30:23.954815 kubelet[2487]: I0213 20:30:23.954798 2487 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:30:23.954921 kubelet[2487]: I0213 20:30:23.951740 2487 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:30:23.955150 kubelet[2487]: I0213 20:30:23.955120 2487 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:30:23.955454 kubelet[2487]: E0213 20:30:23.933612 2487 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.1-6-72a75d9253\" not found" Feb 13 20:30:23.956130 kubelet[2487]: I0213 20:30:23.953472 2487 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:30:23.960002 kubelet[2487]: I0213 20:30:23.959964 2487 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:30:23.964635 kubelet[2487]: E0213 20:30:23.964301 2487 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:30:23.967438 kubelet[2487]: I0213 20:30:23.965800 2487 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:30:23.968728 kubelet[2487]: I0213 20:30:23.968671 2487 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:30:23.972100 kubelet[2487]: I0213 20:30:23.972057 2487 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:30:23.972100 kubelet[2487]: I0213 20:30:23.972106 2487 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:30:23.972357 kubelet[2487]: I0213 20:30:23.972138 2487 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:30:23.972357 kubelet[2487]: E0213 20:30:23.972205 2487 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:30:24.056347 kubelet[2487]: I0213 20:30:24.056204 2487 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:30:24.056347 kubelet[2487]: I0213 20:30:24.056228 2487 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:30:24.056347 kubelet[2487]: I0213 20:30:24.056256 2487 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:30:24.056592 kubelet[2487]: I0213 20:30:24.056518 2487 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:30:24.056592 kubelet[2487]: I0213 20:30:24.056531 2487 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:30:24.056592 kubelet[2487]: I0213 20:30:24.056555 2487 policy_none.go:49] "None policy: Start" Feb 13 20:30:24.058102 kubelet[2487]: I0213 20:30:24.058072 2487 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:30:24.058102 kubelet[2487]: I0213 20:30:24.058106 2487 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:30:24.058559 kubelet[2487]: I0213 20:30:24.058539 2487 state_mem.go:75] "Updated machine memory state" Feb 13 20:30:24.067927 kubelet[2487]: I0213 20:30:24.067694 2487 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:30:24.068087 kubelet[2487]: I0213 20:30:24.067952 2487 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:30:24.068087 kubelet[2487]: I0213 20:30:24.067966 2487 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:30:24.070336 kubelet[2487]: I0213 20:30:24.070165 2487 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:30:24.098428 sudo[2517]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 20:30:24.098795 sudo[2517]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 20:30:24.124629 kubelet[2487]: W0213 20:30:24.123856 2487 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:30:24.144002 kubelet[2487]: W0213 20:30:24.143620 2487 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:30:24.144002 kubelet[2487]: W0213 20:30:24.143899 2487 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:30:24.157248 kubelet[2487]: I0213 20:30:24.156510 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b0341636e49ff202d81e104b13261fe2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.1-6-72a75d9253\" (UID: \"b0341636e49ff202d81e104b13261fe2\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.157248 kubelet[2487]: 
I0213 20:30:24.156581 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/835370d479d609a18a08b8e7432e21be-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.1-6-72a75d9253\" (UID: \"835370d479d609a18a08b8e7432e21be\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.157248 kubelet[2487]: I0213 20:30:24.156623 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/835370d479d609a18a08b8e7432e21be-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.1-6-72a75d9253\" (UID: \"835370d479d609a18a08b8e7432e21be\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.157248 kubelet[2487]: I0213 20:30:24.156655 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/835370d479d609a18a08b8e7432e21be-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.1-6-72a75d9253\" (UID: \"835370d479d609a18a08b8e7432e21be\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.157248 kubelet[2487]: I0213 20:30:24.156685 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d16a4adb40621657dad59a1d1595adc2-kubeconfig\") pod \"kube-scheduler-ci-4081.3.1-6-72a75d9253\" (UID: \"d16a4adb40621657dad59a1d1595adc2\") " pod="kube-system/kube-scheduler-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.157801 kubelet[2487]: I0213 20:30:24.156718 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b0341636e49ff202d81e104b13261fe2-ca-certs\") pod \"kube-apiserver-ci-4081.3.1-6-72a75d9253\" (UID: \"b0341636e49ff202d81e104b13261fe2\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.157801 kubelet[2487]: I0213 20:30:24.156747 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b0341636e49ff202d81e104b13261fe2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.1-6-72a75d9253\" (UID: \"b0341636e49ff202d81e104b13261fe2\") " pod="kube-system/kube-apiserver-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.157801 kubelet[2487]: I0213 20:30:24.156777 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/835370d479d609a18a08b8e7432e21be-ca-certs\") pod \"kube-controller-manager-ci-4081.3.1-6-72a75d9253\" (UID: \"835370d479d609a18a08b8e7432e21be\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.157801 kubelet[2487]: I0213 20:30:24.156805 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/835370d479d609a18a08b8e7432e21be-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.1-6-72a75d9253\" (UID: \"835370d479d609a18a08b8e7432e21be\") " pod="kube-system/kube-controller-manager-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.180526 kubelet[2487]: I0213 20:30:24.179973 2487 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.210251 kubelet[2487]: I0213 20:30:24.210185 2487 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.210705 kubelet[2487]: I0213 20:30:24.210687 2487 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.1-6-72a75d9253" Feb 13 20:30:24.426620 kubelet[2487]: E0213 20:30:24.425687 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:24.445461 kubelet[2487]: E0213 20:30:24.444150 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:24.445461 kubelet[2487]: E0213 20:30:24.444218 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:24.847093 sudo[2517]: pam_unix(sudo:session): session closed for user root Feb 13 20:30:24.899353 kubelet[2487]: I0213 20:30:24.898208 2487 apiserver.go:52] "Watching apiserver" Feb 13 20:30:24.954885 kubelet[2487]: I0213 20:30:24.954830 2487 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:30:25.013159 kubelet[2487]: E0213 20:30:25.013117 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:25.015680 kubelet[2487]: E0213 20:30:25.015631 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:25.029644 kubelet[2487]: W0213 20:30:25.029592 2487 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 20:30:25.029939 kubelet[2487]: E0213 20:30:25.029786 2487 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.1-6-72a75d9253\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.1-6-72a75d9253" Feb 13 20:30:25.030567 kubelet[2487]: E0213 20:30:25.030436 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:25.086561 kubelet[2487]: I0213 20:30:25.086477 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.1-6-72a75d9253" podStartSLOduration=1.086450229 podStartE2EDuration="1.086450229s" podCreationTimestamp="2025-02-13 20:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:30:25.083814965 +0000 UTC m=+1.312972311" watchObservedRunningTime="2025-02-13 20:30:25.086450229 +0000 UTC m=+1.315607555" Feb 13 20:30:25.086865 kubelet[2487]: I0213 20:30:25.086665 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.1-6-72a75d9253" podStartSLOduration=1.08665507 podStartE2EDuration="1.08665507s" podCreationTimestamp="2025-02-13 20:30:24 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:30:25.062961971 +0000 UTC m=+1.292119353" watchObservedRunningTime="2025-02-13 20:30:25.08665507 +0000 UTC m=+1.315812417" Feb 13 20:30:26.016297 kubelet[2487]: E0213 20:30:26.015639 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:26.705962 sudo[1655]: pam_unix(sudo:session): session closed for user root Feb 13 20:30:26.710428 sshd[1652]: pam_unix(sshd:session): session closed for user core Feb 13 20:30:26.715247 systemd[1]: sshd@6-64.23.133.101:22-147.75.109.163:34002.service: Deactivated successfully. Feb 13 20:30:26.719336 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:30:26.720158 systemd[1]: session-7.scope: Consumed 5.452s CPU time, 146.2M memory peak, 0B memory swap peak. Feb 13 20:30:26.721535 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:30:26.723138 systemd-logind[1452]: Removed session 7. Feb 13 20:30:27.018753 kubelet[2487]: E0213 20:30:27.018512 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:27.773349 systemd-timesyncd[1350]: Contacted time server 198.30.92.2:123 (2.flatcar.pool.ntp.org). Feb 13 20:30:27.773461 systemd-timesyncd[1350]: Initial clock synchronization to Thu 2025-02-13 20:30:27.578088 UTC. Feb 13 20:30:27.991759 kubelet[2487]: I0213 20:30:27.991562 2487 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:30:27.992438 containerd[1477]: time="2025-02-13T20:30:27.992279264Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:30:27.994323 kubelet[2487]: I0213 20:30:27.994059 2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:30:28.627892 kubelet[2487]: I0213 20:30:28.627822 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.1-6-72a75d9253" podStartSLOduration=4.627798808 podStartE2EDuration="4.627798808s" podCreationTimestamp="2025-02-13 20:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:30:25.126519294 +0000 UTC m=+1.355676641" watchObservedRunningTime="2025-02-13 20:30:28.627798808 +0000 UTC m=+4.856956153" Feb 13 20:30:28.648305 systemd[1]: Created slice kubepods-besteffort-poda4f521a4_5c7e_4f29_a490_d1f806459d8f.slice - libcontainer container kubepods-besteffort-poda4f521a4_5c7e_4f29_a490_d1f806459d8f.slice. Feb 13 20:30:28.665959 systemd[1]: Created slice kubepods-burstable-pod4be3c56f_503f_4d34_80a0_3421bb9ca63c.slice - libcontainer container kubepods-burstable-pod4be3c56f_503f_4d34_80a0_3421bb9ca63c.slice. 
Feb 13 20:30:28.693870 kubelet[2487]: I0213 20:30:28.693810 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqdvf\" (UniqueName: \"kubernetes.io/projected/4be3c56f-503f-4d34-80a0-3421bb9ca63c-kube-api-access-tqdvf\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.693870 kubelet[2487]: I0213 20:30:28.693865 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4f521a4-5c7e-4f29-a490-d1f806459d8f-xtables-lock\") pod \"kube-proxy-4hnn5\" (UID: \"a4f521a4-5c7e-4f29-a490-d1f806459d8f\") " pod="kube-system/kube-proxy-4hnn5" Feb 13 20:30:28.693870 kubelet[2487]: I0213 20:30:28.693883 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4be3c56f-503f-4d34-80a0-3421bb9ca63c-hubble-tls\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694120 kubelet[2487]: I0213 20:30:28.693901 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-etc-cni-netd\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694120 kubelet[2487]: I0213 20:30:28.693916 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-lib-modules\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694120 kubelet[2487]: I0213 20:30:28.693933 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk584\" (UniqueName: \"kubernetes.io/projected/a4f521a4-5c7e-4f29-a490-d1f806459d8f-kube-api-access-dk584\") pod \"kube-proxy-4hnn5\" (UID: \"a4f521a4-5c7e-4f29-a490-d1f806459d8f\") " pod="kube-system/kube-proxy-4hnn5" Feb 13 20:30:28.694120 kubelet[2487]: I0213 20:30:28.693948 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-cgroup\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694120 kubelet[2487]: I0213 20:30:28.693965 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-host-proc-sys-kernel\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694295 kubelet[2487]: I0213 20:30:28.693981 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4f521a4-5c7e-4f29-a490-d1f806459d8f-kube-proxy\") pod \"kube-proxy-4hnn5\" (UID: \"a4f521a4-5c7e-4f29-a490-d1f806459d8f\") " pod="kube-system/kube-proxy-4hnn5" Feb 13 20:30:28.694295 kubelet[2487]: I0213 20:30:28.693998 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-config-path\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694295 kubelet[2487]: I0213 20:30:28.694016 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-run\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694295 kubelet[2487]: I0213 20:30:28.694038 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-xtables-lock\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694295 kubelet[2487]: I0213 20:30:28.694060 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cni-path\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694295 kubelet[2487]: I0213 20:30:28.694101 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-bpf-maps\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694532 kubelet[2487]: I0213 20:30:28.694122 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-hostproc\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694532 kubelet[2487]: I0213 20:30:28.694144 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4f521a4-5c7e-4f29-a490-d1f806459d8f-lib-modules\") pod \"kube-proxy-4hnn5\" (UID: \"a4f521a4-5c7e-4f29-a490-d1f806459d8f\") " pod="kube-system/kube-proxy-4hnn5" Feb 13 20:30:28.694532 kubelet[2487]: I0213 20:30:28.694168 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4be3c56f-503f-4d34-80a0-3421bb9ca63c-clustermesh-secrets\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.694532 kubelet[2487]: I0213 20:30:28.694194 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-host-proc-sys-net\") pod \"cilium-rbfqg\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") " pod="kube-system/cilium-rbfqg" Feb 13 20:30:28.959267 kubelet[2487]: E0213 20:30:28.959111 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:28.961909 containerd[1477]: time="2025-02-13T20:30:28.961860138Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4hnn5,Uid:a4f521a4-5c7e-4f29-a490-d1f806459d8f,Namespace:kube-system,Attempt:0,}" Feb 13 20:30:28.971444 kubelet[2487]: E0213 20:30:28.970397 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:28.978446 containerd[1477]: time="2025-02-13T20:30:28.974787203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rbfqg,Uid:4be3c56f-503f-4d34-80a0-3421bb9ca63c,Namespace:kube-system,Attempt:0,}" Feb 13 20:30:29.048340 containerd[1477]: time="2025-02-13T20:30:29.047729849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:30:29.048340 containerd[1477]: time="2025-02-13T20:30:29.047837988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:30:29.048340 containerd[1477]: time="2025-02-13T20:30:29.047863265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:29.049949 containerd[1477]: time="2025-02-13T20:30:29.049376412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:29.053689 containerd[1477]: time="2025-02-13T20:30:29.053518563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:30:29.053689 containerd[1477]: time="2025-02-13T20:30:29.053633835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:30:29.054204 containerd[1477]: time="2025-02-13T20:30:29.053850158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:29.054497 containerd[1477]: time="2025-02-13T20:30:29.054315210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:29.093706 systemd[1]: Created slice kubepods-besteffort-pod76fa51ba_d835_48cc_9bb8_223b5f4d5047.slice - libcontainer container kubepods-besteffort-pod76fa51ba_d835_48cc_9bb8_223b5f4d5047.slice. 
Feb 13 20:30:29.100010 kubelet[2487]: I0213 20:30:29.098292 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5xs\" (UniqueName: \"kubernetes.io/projected/76fa51ba-d835-48cc-9bb8-223b5f4d5047-kube-api-access-hj5xs\") pod \"cilium-operator-5d85765b45-42797\" (UID: \"76fa51ba-d835-48cc-9bb8-223b5f4d5047\") " pod="kube-system/cilium-operator-5d85765b45-42797" Feb 13 20:30:29.100010 kubelet[2487]: I0213 20:30:29.098338 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76fa51ba-d835-48cc-9bb8-223b5f4d5047-cilium-config-path\") pod \"cilium-operator-5d85765b45-42797\" (UID: \"76fa51ba-d835-48cc-9bb8-223b5f4d5047\") " pod="kube-system/cilium-operator-5d85765b45-42797" Feb 13 20:30:29.104704 systemd[1]: Started cri-containerd-d57dead7ad3ba278e6d367275474009e42585c5b8de5c7b7f62ab50c6d763e21.scope - libcontainer container d57dead7ad3ba278e6d367275474009e42585c5b8de5c7b7f62ab50c6d763e21. Feb 13 20:30:29.130656 systemd[1]: Started cri-containerd-911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb.scope - libcontainer container 911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb. Feb 13 20:30:29.212597 containerd[1477]: time="2025-02-13T20:30:29.211549935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rbfqg,Uid:4be3c56f-503f-4d34-80a0-3421bb9ca63c,Namespace:kube-system,Attempt:0,} returns sandbox id \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\"" Feb 13 20:30:29.214150 containerd[1477]: time="2025-02-13T20:30:29.214096448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4hnn5,Uid:a4f521a4-5c7e-4f29-a490-d1f806459d8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d57dead7ad3ba278e6d367275474009e42585c5b8de5c7b7f62ab50c6d763e21\"" Feb 13 20:30:29.216658 kubelet[2487]: E0213 20:30:29.216617 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:29.217004 kubelet[2487]: E0213 20:30:29.216971 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:29.222464 containerd[1477]: time="2025-02-13T20:30:29.222013782Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 20:30:29.224007 containerd[1477]: time="2025-02-13T20:30:29.223161613Z" level=info msg="CreateContainer within sandbox \"d57dead7ad3ba278e6d367275474009e42585c5b8de5c7b7f62ab50c6d763e21\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:30:29.240848 containerd[1477]: time="2025-02-13T20:30:29.240790534Z" level=info msg="CreateContainer within sandbox \"d57dead7ad3ba278e6d367275474009e42585c5b8de5c7b7f62ab50c6d763e21\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"528729dcec90fa084e6306d51e57b556267d67bd2a0ee8f4aec27373c8f42ec2\"" Feb 13 20:30:29.242021 containerd[1477]: time="2025-02-13T20:30:29.241987607Z" level=info msg="StartContainer for \"528729dcec90fa084e6306d51e57b556267d67bd2a0ee8f4aec27373c8f42ec2\"" Feb 13 20:30:29.280695 systemd[1]: Started cri-containerd-528729dcec90fa084e6306d51e57b556267d67bd2a0ee8f4aec27373c8f42ec2.scope - 
libcontainer container 528729dcec90fa084e6306d51e57b556267d67bd2a0ee8f4aec27373c8f42ec2. Feb 13 20:30:29.326003 containerd[1477]: time="2025-02-13T20:30:29.325724697Z" level=info msg="StartContainer for \"528729dcec90fa084e6306d51e57b556267d67bd2a0ee8f4aec27373c8f42ec2\" returns successfully" Feb 13 20:30:29.399244 kubelet[2487]: E0213 20:30:29.399188 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:29.402237 containerd[1477]: time="2025-02-13T20:30:29.401277319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-42797,Uid:76fa51ba-d835-48cc-9bb8-223b5f4d5047,Namespace:kube-system,Attempt:0,}" Feb 13 20:30:29.431381 containerd[1477]: time="2025-02-13T20:30:29.431003989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:30:29.431381 containerd[1477]: time="2025-02-13T20:30:29.431111235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:30:29.431381 containerd[1477]: time="2025-02-13T20:30:29.431130435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:29.431381 containerd[1477]: time="2025-02-13T20:30:29.431249209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:29.453687 systemd[1]: Started cri-containerd-3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e.scope - libcontainer container 3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e. 
Feb 13 20:30:29.518108 containerd[1477]: time="2025-02-13T20:30:29.517900710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-42797,Uid:76fa51ba-d835-48cc-9bb8-223b5f4d5047,Namespace:kube-system,Attempt:0,} returns sandbox id \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\"" Feb 13 20:30:29.522101 kubelet[2487]: E0213 20:30:29.521038 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:29.831157 kubelet[2487]: E0213 20:30:29.830644 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:30.031077 kubelet[2487]: E0213 20:30:30.029768 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:30.031077 kubelet[2487]: E0213 20:30:30.029769 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:30.066591 kubelet[2487]: I0213 20:30:30.066093 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4hnn5" podStartSLOduration=2.06606695 podStartE2EDuration="2.06606695s" podCreationTimestamp="2025-02-13 20:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:30:30.048961401 +0000 UTC m=+6.278118748" watchObservedRunningTime="2025-02-13 20:30:30.06606695 +0000 UTC m=+6.295224298" Feb 13 20:30:32.756885 kubelet[2487]: E0213 20:30:32.756804 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:33.036807 kubelet[2487]: E0213 20:30:33.036659 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:35.194720 kubelet[2487]: E0213 20:30:35.194672 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:37.117047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707364683.mount: Deactivated successfully. Feb 13 20:30:38.433645 update_engine[1453]: I20250213 20:30:38.433522 1453 update_attempter.cc:509] Updating boot flags... 
Feb 13 20:30:38.506780 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2877) Feb 13 20:30:38.614476 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2880) Feb 13 20:30:38.730623 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2880) Feb 13 20:30:39.927779 containerd[1477]: time="2025-02-13T20:30:39.927670658Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 20:30:39.936078 containerd[1477]: time="2025-02-13T20:30:39.935620076Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.713544279s" Feb 13 20:30:39.936078 containerd[1477]: time="2025-02-13T20:30:39.935696709Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 20:30:39.947916 containerd[1477]: time="2025-02-13T20:30:39.947624093Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 20:30:39.953700 containerd[1477]: time="2025-02-13T20:30:39.952786185Z" level=info msg="CreateContainer within sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 20:30:40.037442 containerd[1477]: time="2025-02-13T20:30:40.037376505Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:40.042213 containerd[1477]: time="2025-02-13T20:30:40.042150153Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:40.128854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659805852.mount: Deactivated successfully. Feb 13 20:30:40.139917 containerd[1477]: time="2025-02-13T20:30:40.139853212Z" level=info msg="CreateContainer within sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7\"" Feb 13 20:30:40.143598 containerd[1477]: time="2025-02-13T20:30:40.142306815Z" level=info msg="StartContainer for \"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7\"" Feb 13 20:30:40.244691 systemd[1]: Started cri-containerd-b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7.scope - libcontainer container b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7. 
Feb 13 20:30:40.279012 containerd[1477]: time="2025-02-13T20:30:40.278966408Z" level=info msg="StartContainer for \"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7\" returns successfully" Feb 13 20:30:40.296885 systemd[1]: cri-containerd-b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7.scope: Deactivated successfully. Feb 13 20:30:40.389093 containerd[1477]: time="2025-02-13T20:30:40.364088812Z" level=info msg="shim disconnected" id=b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7 namespace=k8s.io Feb 13 20:30:40.389093 containerd[1477]: time="2025-02-13T20:30:40.389073084Z" level=warning msg="cleaning up after shim disconnected" id=b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7 namespace=k8s.io Feb 13 20:30:40.389093 containerd[1477]: time="2025-02-13T20:30:40.389098293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:30:41.096230 kubelet[2487]: E0213 20:30:41.096185 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:41.107563 containerd[1477]: time="2025-02-13T20:30:41.107160683Z" level=info msg="CreateContainer within sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 20:30:41.133672 systemd[1]: run-containerd-runc-k8s.io-b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7-runc.09O7wq.mount: Deactivated successfully. Feb 13 20:30:41.134115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7-rootfs.mount: Deactivated successfully. Feb 13 20:30:41.143808 containerd[1477]: time="2025-02-13T20:30:41.143701441Z" level=info msg="CreateContainer within sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd\"" Feb 13 20:30:41.144569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717335682.mount: Deactivated successfully. Feb 13 20:30:41.145189 containerd[1477]: time="2025-02-13T20:30:41.145149640Z" level=info msg="StartContainer for \"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd\"" Feb 13 20:30:41.239713 systemd[1]: Started cri-containerd-b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd.scope - libcontainer container b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd. Feb 13 20:30:41.305570 containerd[1477]: time="2025-02-13T20:30:41.305506110Z" level=info msg="StartContainer for \"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd\" returns successfully" Feb 13 20:30:41.323354 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:30:41.323718 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:30:41.323863 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:30:41.334930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:30:41.335221 systemd[1]: cri-containerd-b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd.scope: Deactivated successfully. Feb 13 20:30:41.395743 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 20:30:41.412803 containerd[1477]: time="2025-02-13T20:30:41.412596366Z" level=info msg="shim disconnected" id=b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd namespace=k8s.io Feb 13 20:30:41.412803 containerd[1477]: time="2025-02-13T20:30:41.412807957Z" level=warning msg="cleaning up after shim disconnected" id=b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd namespace=k8s.io Feb 13 20:30:41.413322 containerd[1477]: time="2025-02-13T20:30:41.412826374Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:30:42.104530 kubelet[2487]: E0213 20:30:42.104451 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:42.113159 containerd[1477]: time="2025-02-13T20:30:42.112282008Z" level=info msg="CreateContainer within sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 20:30:42.129658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330414421.mount: Deactivated successfully. Feb 13 20:30:42.129805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd-rootfs.mount: Deactivated successfully. Feb 13 20:30:42.151534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202187222.mount: Deactivated successfully. Feb 13 20:30:42.164129 containerd[1477]: time="2025-02-13T20:30:42.163399823Z" level=info msg="CreateContainer within sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4\"" Feb 13 20:30:42.165670 containerd[1477]: time="2025-02-13T20:30:42.165469763Z" level=info msg="StartContainer for \"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4\"" Feb 13 20:30:42.239540 systemd[1]: Started cri-containerd-68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4.scope - libcontainer container 68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4. Feb 13 20:30:42.302793 containerd[1477]: time="2025-02-13T20:30:42.301841882Z" level=info msg="StartContainer for \"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4\" returns successfully" Feb 13 20:30:42.309766 systemd[1]: cri-containerd-68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4.scope: Deactivated successfully. 
Feb 13 20:30:42.342967 containerd[1477]: time="2025-02-13T20:30:42.342842491Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:42.348887 containerd[1477]: time="2025-02-13T20:30:42.348278208Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:30:42.348887 containerd[1477]: time="2025-02-13T20:30:42.348353656Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 20:30:42.355891 containerd[1477]: time="2025-02-13T20:30:42.355629282Z" level=info msg="shim disconnected" id=68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4 namespace=k8s.io Feb 13 20:30:42.355891 containerd[1477]: time="2025-02-13T20:30:42.355686417Z" level=warning msg="cleaning up after shim disconnected" id=68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4 namespace=k8s.io Feb 13 20:30:42.355891 containerd[1477]: time="2025-02-13T20:30:42.355695048Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:30:42.360351 containerd[1477]: time="2025-02-13T20:30:42.359438667Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.411733908s" Feb 13 20:30:42.360351 containerd[1477]: time="2025-02-13T20:30:42.359519171Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 20:30:42.375070 containerd[1477]: time="2025-02-13T20:30:42.375018229Z" level=info msg="CreateContainer within sandbox \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 20:30:42.390075 containerd[1477]: time="2025-02-13T20:30:42.390014122Z" level=info msg="CreateContainer within sandbox \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\"" Feb 13 20:30:42.391049 containerd[1477]: time="2025-02-13T20:30:42.390959490Z" level=info msg="StartContainer for \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\"" Feb 13 20:30:42.425699 systemd[1]: Started cri-containerd-9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8.scope - libcontainer container 9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8. 
Feb 13 20:30:42.460691 containerd[1477]: time="2025-02-13T20:30:42.460635890Z" level=info msg="StartContainer for \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\" returns successfully" Feb 13 20:30:43.108231 kubelet[2487]: E0213 20:30:43.108193 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:43.115458 containerd[1477]: time="2025-02-13T20:30:43.114778719Z" level=info msg="CreateContainer within sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 20:30:43.116257 kubelet[2487]: E0213 20:30:43.115603 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:43.132928 systemd[1]: run-containerd-runc-k8s.io-68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4-runc.VBh8FJ.mount: Deactivated successfully. Feb 13 20:30:43.133878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4-rootfs.mount: Deactivated successfully. Feb 13 20:30:43.144726 containerd[1477]: time="2025-02-13T20:30:43.144671145Z" level=info msg="CreateContainer within sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61\"" Feb 13 20:30:43.146906 containerd[1477]: time="2025-02-13T20:30:43.146869210Z" level=info msg="StartContainer for \"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61\"" Feb 13 20:30:43.218711 systemd[1]: Started cri-containerd-968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61.scope - libcontainer container 968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61. Feb 13 20:30:43.315443 containerd[1477]: time="2025-02-13T20:30:43.315143431Z" level=info msg="StartContainer for \"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61\" returns successfully" Feb 13 20:30:43.319433 systemd[1]: cri-containerd-968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61.scope: Deactivated successfully. 
Feb 13 20:30:43.368684 containerd[1477]: time="2025-02-13T20:30:43.368235337Z" level=info msg="shim disconnected" id=968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61 namespace=k8s.io Feb 13 20:30:43.368684 containerd[1477]: time="2025-02-13T20:30:43.368350326Z" level=warning msg="cleaning up after shim disconnected" id=968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61 namespace=k8s.io Feb 13 20:30:43.368684 containerd[1477]: time="2025-02-13T20:30:43.368364705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:30:44.121631 kubelet[2487]: E0213 20:30:44.121009 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:44.121631 kubelet[2487]: E0213 20:30:44.121194 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:44.132305 containerd[1477]: time="2025-02-13T20:30:44.130812044Z" level=info msg="CreateContainer within sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 20:30:44.130940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61-rootfs.mount: Deactivated successfully. Feb 13 20:30:44.163544 kubelet[2487]: I0213 20:30:44.158879 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-42797" podStartSLOduration=2.319699385 podStartE2EDuration="15.158727982s" podCreationTimestamp="2025-02-13 20:30:29 +0000 UTC" firstStartedPulling="2025-02-13 20:30:29.522245527 +0000 UTC m=+5.751402851" lastFinishedPulling="2025-02-13 20:30:42.361274106 +0000 UTC m=+18.590431448" observedRunningTime="2025-02-13 20:30:43.456949589 +0000 UTC m=+19.686106935" watchObservedRunningTime="2025-02-13 20:30:44.158727982 +0000 UTC m=+20.387885351" Feb 13 20:30:44.177682 containerd[1477]: time="2025-02-13T20:30:44.177617840Z" level=info msg="CreateContainer within sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\"" Feb 13 20:30:44.178701 containerd[1477]: time="2025-02-13T20:30:44.178661573Z" level=info msg="StartContainer for \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\"" Feb 13 20:30:44.247079 systemd[1]: Started cri-containerd-e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49.scope - libcontainer container e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49. Feb 13 20:30:44.296134 containerd[1477]: time="2025-02-13T20:30:44.295937382Z" level=info msg="StartContainer for \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\" returns successfully" Feb 13 20:30:44.542028 kubelet[2487]: I0213 20:30:44.540229 2487 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 20:30:44.654545 systemd[1]: Created slice kubepods-burstable-pod96cc06d6_7ae5_4be0_9f3e_30963fc03cb3.slice - libcontainer container kubepods-burstable-pod96cc06d6_7ae5_4be0_9f3e_30963fc03cb3.slice. 
Feb 13 20:30:44.674635 systemd[1]: Created slice kubepods-burstable-poded8de0ce_6ce8_44cf_9423_08b8e4a9e5b0.slice - libcontainer container kubepods-burstable-poded8de0ce_6ce8_44cf_9423_08b8e4a9e5b0.slice. Feb 13 20:30:44.723829 kubelet[2487]: I0213 20:30:44.723783 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96cc06d6-7ae5-4be0-9f3e-30963fc03cb3-config-volume\") pod \"coredns-6f6b679f8f-kjct8\" (UID: \"96cc06d6-7ae5-4be0-9f3e-30963fc03cb3\") " pod="kube-system/coredns-6f6b679f8f-kjct8" Feb 13 20:30:44.724209 kubelet[2487]: I0213 20:30:44.724098 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk2bq\" (UniqueName: \"kubernetes.io/projected/96cc06d6-7ae5-4be0-9f3e-30963fc03cb3-kube-api-access-mk2bq\") pod \"coredns-6f6b679f8f-kjct8\" (UID: \"96cc06d6-7ae5-4be0-9f3e-30963fc03cb3\") " pod="kube-system/coredns-6f6b679f8f-kjct8" Feb 13 20:30:44.724209 kubelet[2487]: I0213 20:30:44.724145 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed8de0ce-6ce8-44cf-9423-08b8e4a9e5b0-config-volume\") pod \"coredns-6f6b679f8f-28sq7\" (UID: \"ed8de0ce-6ce8-44cf-9423-08b8e4a9e5b0\") " pod="kube-system/coredns-6f6b679f8f-28sq7" Feb 13 20:30:44.724209 kubelet[2487]: I0213 20:30:44.724170 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggfgk\" (UniqueName: \"kubernetes.io/projected/ed8de0ce-6ce8-44cf-9423-08b8e4a9e5b0-kube-api-access-ggfgk\") pod \"coredns-6f6b679f8f-28sq7\" (UID: \"ed8de0ce-6ce8-44cf-9423-08b8e4a9e5b0\") " pod="kube-system/coredns-6f6b679f8f-28sq7" Feb 13 20:30:44.962708 kubelet[2487]: E0213 20:30:44.962660 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:44.966524 containerd[1477]: time="2025-02-13T20:30:44.964782564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kjct8,Uid:96cc06d6-7ae5-4be0-9f3e-30963fc03cb3,Namespace:kube-system,Attempt:0,}" Feb 13 20:30:44.984207 kubelet[2487]: E0213 20:30:44.982714 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:44.986398 containerd[1477]: time="2025-02-13T20:30:44.986056700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-28sq7,Uid:ed8de0ce-6ce8-44cf-9423-08b8e4a9e5b0,Namespace:kube-system,Attempt:0,}" Feb 13 20:30:45.161817 kubelet[2487]: E0213 20:30:45.159799 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:45.195072 kubelet[2487]: I0213 20:30:45.194862 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rbfqg" podStartSLOduration=6.468408609 podStartE2EDuration="17.194836227s" podCreationTimestamp="2025-02-13 20:30:28 +0000 UTC" firstStartedPulling="2025-02-13 20:30:29.219580011 +0000 UTC m=+5.448737339" lastFinishedPulling="2025-02-13 20:30:39.946007617 +0000 UTC m=+16.175164957" observedRunningTime="2025-02-13 20:30:45.19385048 +0000 UTC 
m=+21.423007863" watchObservedRunningTime="2025-02-13 20:30:45.194836227 +0000 UTC m=+21.423993575" Feb 13 20:30:46.162015 kubelet[2487]: E0213 20:30:46.161951 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:46.868686 systemd-networkd[1373]: cilium_host: Link UP Feb 13 20:30:46.868820 systemd-networkd[1373]: cilium_net: Link UP Feb 13 20:30:46.868824 systemd-networkd[1373]: cilium_net: Gained carrier Feb 13 20:30:46.869048 systemd-networkd[1373]: cilium_host: Gained carrier Feb 13 20:30:47.045238 systemd-networkd[1373]: cilium_vxlan: Link UP Feb 13 20:30:47.045250 systemd-networkd[1373]: cilium_vxlan: Gained carrier Feb 13 20:30:47.082434 systemd-networkd[1373]: cilium_host: Gained IPv6LL Feb 13 20:30:47.164237 kubelet[2487]: E0213 20:30:47.164076 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:47.354118 systemd-networkd[1373]: cilium_net: Gained IPv6LL Feb 13 20:30:47.531464 kernel: NET: Registered PF_ALG protocol family Feb 13 20:30:48.448949 systemd-networkd[1373]: lxc_health: Link UP Feb 13 20:30:48.460973 systemd-networkd[1373]: lxc_health: Gained carrier Feb 13 20:30:48.629503 systemd-networkd[1373]: lxc37f2660f7331: Link UP Feb 13 20:30:48.635483 kernel: eth0: renamed from tmpee045 Feb 13 20:30:48.651530 systemd-networkd[1373]: lxc37f2660f7331: Gained carrier Feb 13 20:30:48.764547 systemd-networkd[1373]: cilium_vxlan: Gained IPv6LL Feb 13 20:30:48.976092 kubelet[2487]: E0213 20:30:48.976045 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:49.112645 systemd-networkd[1373]: lxc54e3ed134501: Link UP Feb 13 20:30:49.119363 kernel: eth0: renamed from tmp92982 Feb 13 20:30:49.130635 systemd-networkd[1373]: lxc54e3ed134501: Gained carrier Feb 13 20:30:49.171879 kubelet[2487]: E0213 20:30:49.171838 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:49.785631 systemd-networkd[1373]: lxc_health: Gained IPv6LL Feb 13 20:30:50.105793 systemd-networkd[1373]: lxc37f2660f7331: Gained IPv6LL Feb 13 20:30:50.174305 kubelet[2487]: E0213 20:30:50.174257 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:50.426205 systemd-networkd[1373]: lxc54e3ed134501: Gained IPv6LL Feb 13 20:30:54.010698 containerd[1477]: time="2025-02-13T20:30:54.010509440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:30:54.010698 containerd[1477]: time="2025-02-13T20:30:54.010656679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:30:54.013852 containerd[1477]: time="2025-02-13T20:30:54.013463948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:54.013852 containerd[1477]: time="2025-02-13T20:30:54.013712963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:54.061076 systemd[1]: Started cri-containerd-929827fe3413e2f06ac4d36e4379754c0e89f6458c2fb52d7580e6d0386ae2c2.scope - libcontainer container 929827fe3413e2f06ac4d36e4379754c0e89f6458c2fb52d7580e6d0386ae2c2. Feb 13 20:30:54.187358 containerd[1477]: time="2025-02-13T20:30:54.185038958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:30:54.187358 containerd[1477]: time="2025-02-13T20:30:54.185119077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:30:54.187358 containerd[1477]: time="2025-02-13T20:30:54.185134697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:54.187358 containerd[1477]: time="2025-02-13T20:30:54.185226848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:30:54.202531 containerd[1477]: time="2025-02-13T20:30:54.202333824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-28sq7,Uid:ed8de0ce-6ce8-44cf-9423-08b8e4a9e5b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"929827fe3413e2f06ac4d36e4379754c0e89f6458c2fb52d7580e6d0386ae2c2\"" Feb 13 20:30:54.207891 kubelet[2487]: E0213 20:30:54.207199 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:54.212839 containerd[1477]: time="2025-02-13T20:30:54.212751725Z" level=info msg="CreateContainer within sandbox \"929827fe3413e2f06ac4d36e4379754c0e89f6458c2fb52d7580e6d0386ae2c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:30:54.226843 systemd[1]: Started cri-containerd-ee0459812c82272a1e363f77aecac63eebb069699dcc257c1407d39e72f7fdd2.scope - libcontainer container ee0459812c82272a1e363f77aecac63eebb069699dcc257c1407d39e72f7fdd2. Feb 13 20:30:54.257319 containerd[1477]: time="2025-02-13T20:30:54.257266825Z" level=info msg="CreateContainer within sandbox \"929827fe3413e2f06ac4d36e4379754c0e89f6458c2fb52d7580e6d0386ae2c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b39a7702b0d5993c309ac05aaaa7794da78ce81baf818b9f5f00210aec4e77ed\"" Feb 13 20:30:54.262040 containerd[1477]: time="2025-02-13T20:30:54.259293129Z" level=info msg="StartContainer for \"b39a7702b0d5993c309ac05aaaa7794da78ce81baf818b9f5f00210aec4e77ed\"" Feb 13 20:30:54.314711 systemd[1]: Started cri-containerd-b39a7702b0d5993c309ac05aaaa7794da78ce81baf818b9f5f00210aec4e77ed.scope - libcontainer container b39a7702b0d5993c309ac05aaaa7794da78ce81baf818b9f5f00210aec4e77ed. 
Feb 13 20:30:54.318248 containerd[1477]: time="2025-02-13T20:30:54.318169986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kjct8,Uid:96cc06d6-7ae5-4be0-9f3e-30963fc03cb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee0459812c82272a1e363f77aecac63eebb069699dcc257c1407d39e72f7fdd2\"" Feb 13 20:30:54.322048 kubelet[2487]: E0213 20:30:54.322018 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Feb 13 20:30:54.327281 containerd[1477]: time="2025-02-13T20:30:54.327165974Z" level=info msg="CreateContainer within sandbox \"ee0459812c82272a1e363f77aecac63eebb069699dcc257c1407d39e72f7fdd2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:30:54.337230 containerd[1477]: time="2025-02-13T20:30:54.337083864Z" level=info msg="CreateContainer within sandbox \"ee0459812c82272a1e363f77aecac63eebb069699dcc257c1407d39e72f7fdd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f989d9465e6ea71488d478369661c41ab7f591b9125f5d5b4a5906dbf5135c3d\"" Feb 13 20:30:54.338200 containerd[1477]: time="2025-02-13T20:30:54.337990562Z" level=info msg="StartContainer for \"f989d9465e6ea71488d478369661c41ab7f591b9125f5d5b4a5906dbf5135c3d\"" Feb 13 20:30:54.382272 containerd[1477]: time="2025-02-13T20:30:54.382215518Z" level=info msg="StartContainer for \"b39a7702b0d5993c309ac05aaaa7794da78ce81baf818b9f5f00210aec4e77ed\" returns successfully" Feb 13 20:30:54.400704 systemd[1]: Started cri-containerd-f989d9465e6ea71488d478369661c41ab7f591b9125f5d5b4a5906dbf5135c3d.scope - libcontainer container f989d9465e6ea71488d478369661c41ab7f591b9125f5d5b4a5906dbf5135c3d. Feb 13 20:30:54.464071 containerd[1477]: time="2025-02-13T20:30:54.463879694Z" level=info msg="StartContainer for \"f989d9465e6ea71488d478369661c41ab7f591b9125f5d5b4a5906dbf5135c3d\" returns successfully" Feb 13 20:30:55.020458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161256734.mount: Deactivated successfully. 
Feb 13 20:30:55.196737 kubelet[2487]: E0213 20:30:55.196097 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:30:55.201753 kubelet[2487]: E0213 20:30:55.201520 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:30:55.216002 kubelet[2487]: I0213 20:30:55.215937 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-28sq7" podStartSLOduration=26.215914696 podStartE2EDuration="26.215914696s" podCreationTimestamp="2025-02-13 20:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:30:55.214809021 +0000 UTC m=+31.443966426" watchObservedRunningTime="2025-02-13 20:30:55.215914696 +0000 UTC m=+31.445072035"
Feb 13 20:30:55.247151 kubelet[2487]: I0213 20:30:55.246214 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kjct8" podStartSLOduration=26.246195824 podStartE2EDuration="26.246195824s" podCreationTimestamp="2025-02-13 20:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:30:55.24582183 +0000 UTC m=+31.474979177" watchObservedRunningTime="2025-02-13 20:30:55.246195824 +0000 UTC m=+31.475353169"
Feb 13 20:30:56.203624 kubelet[2487]: E0213 20:30:56.203499 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:30:56.203624 kubelet[2487]: E0213 20:30:56.203511 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:30:57.205272 kubelet[2487]: E0213 20:30:57.205216 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:30:57.206584 kubelet[2487]: E0213 20:30:57.206161 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:31:15.986210 systemd[1]: Started sshd@7-64.23.133.101:22-147.75.109.163:38104.service - OpenSSH per-connection server daemon (147.75.109.163:38104).
Feb 13 20:31:16.067821 sshd[3870]: Accepted publickey for core from 147.75.109.163 port 38104 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:16.070209 sshd[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:16.076706 systemd-logind[1452]: New session 8 of user core.
Feb 13 20:31:16.086731 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 20:31:16.701488 sshd[3870]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:16.707092 systemd[1]: sshd@7-64.23.133.101:22-147.75.109.163:38104.service: Deactivated successfully.
Feb 13 20:31:16.710440 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 20:31:16.711779 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit.
Feb 13 20:31:16.713200 systemd-logind[1452]: Removed session 8.
Feb 13 20:31:21.724300 systemd[1]: Started sshd@8-64.23.133.101:22-147.75.109.163:60138.service - OpenSSH per-connection server daemon (147.75.109.163:60138).
Feb 13 20:31:21.766390 sshd[3884]: Accepted publickey for core from 147.75.109.163 port 60138 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:21.766762 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:21.778237 systemd-logind[1452]: New session 9 of user core.
Feb 13 20:31:21.785983 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:31:21.936766 sshd[3884]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:21.941724 systemd[1]: sshd@8-64.23.133.101:22-147.75.109.163:60138.service: Deactivated successfully.
Feb 13 20:31:21.945100 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 20:31:21.946398 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit.
Feb 13 20:31:21.947692 systemd-logind[1452]: Removed session 9.
Feb 13 20:31:26.956812 systemd[1]: Started sshd@9-64.23.133.101:22-147.75.109.163:60146.service - OpenSSH per-connection server daemon (147.75.109.163:60146).
Feb 13 20:31:26.996103 sshd[3900]: Accepted publickey for core from 147.75.109.163 port 60146 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:26.998370 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:27.003711 systemd-logind[1452]: New session 10 of user core.
Feb 13 20:31:27.015729 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 20:31:27.151765 sshd[3900]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:27.156562 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit.
Feb 13 20:31:27.157781 systemd[1]: sshd@9-64.23.133.101:22-147.75.109.163:60146.service: Deactivated successfully.
Feb 13 20:31:27.160793 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 20:31:27.163782 systemd-logind[1452]: Removed session 10.
Feb 13 20:31:32.171430 systemd[1]: Started sshd@10-64.23.133.101:22-147.75.109.163:58480.service - OpenSSH per-connection server daemon (147.75.109.163:58480).
Feb 13 20:31:32.234018 sshd[3916]: Accepted publickey for core from 147.75.109.163 port 58480 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:32.234839 sshd[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:32.240962 systemd-logind[1452]: New session 11 of user core.
Feb 13 20:31:32.247783 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 20:31:32.398121 sshd[3916]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:32.403881 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit.
Feb 13 20:31:32.404314 systemd[1]: sshd@10-64.23.133.101:22-147.75.109.163:58480.service: Deactivated successfully.
Feb 13 20:31:32.408146 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 20:31:32.409937 systemd-logind[1452]: Removed session 11.
Feb 13 20:31:37.419342 systemd[1]: Started sshd@11-64.23.133.101:22-147.75.109.163:58486.service - OpenSSH per-connection server daemon (147.75.109.163:58486).
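The recurring dns.go:153 "Nameserver limits exceeded" warnings above come from kubelet capping a pod's resolv.conf at three nameservers; this node's resolv.conf evidently lists more entries than that (including a duplicate of 67.207.67.2), so the extras are dropped and only the applied line is logged. A stdlib-only Go sketch of the same check follows, assuming the conventional /etc/resolv.conf location and a limit of 3 (matching kubelet's MaxDNSNameservers); it is an illustration of the behavior, not kubelet's actual code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the limit kubelet enforces (MaxDNSNameservers = 3).
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" entry, duplicates included,
	// just as the raw file presents them.
	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		applied := nameservers[:maxNameservers]
		// Mirrors the warning in the log: extras are dropped, not fatal.
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}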
Feb 13 20:31:37.461261 sshd[3930]: Accepted publickey for core from 147.75.109.163 port 58486 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:37.463089 sshd[3930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:37.469215 systemd-logind[1452]: New session 12 of user core.
Feb 13 20:31:37.474707 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 20:31:37.640675 sshd[3930]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:37.654220 systemd[1]: sshd@11-64.23.133.101:22-147.75.109.163:58486.service: Deactivated successfully.
Feb 13 20:31:37.656489 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 20:31:37.657452 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit.
Feb 13 20:31:37.663897 systemd[1]: Started sshd@12-64.23.133.101:22-147.75.109.163:58500.service - OpenSSH per-connection server daemon (147.75.109.163:58500).
Feb 13 20:31:37.668429 systemd-logind[1452]: Removed session 12.
Feb 13 20:31:37.733400 sshd[3944]: Accepted publickey for core from 147.75.109.163 port 58500 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:37.736319 sshd[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:37.745564 systemd-logind[1452]: New session 13 of user core.
Feb 13 20:31:37.749709 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 20:31:37.979170 sshd[3944]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:37.990563 systemd[1]: sshd@12-64.23.133.101:22-147.75.109.163:58500.service: Deactivated successfully.
Feb 13 20:31:37.994201 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 20:31:37.999271 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit.
Feb 13 20:31:38.009083 systemd[1]: Started sshd@13-64.23.133.101:22-147.75.109.163:58512.service - OpenSSH per-connection server daemon (147.75.109.163:58512).
Feb 13 20:31:38.013144 systemd-logind[1452]: Removed session 13.
Feb 13 20:31:38.081294 sshd[3955]: Accepted publickey for core from 147.75.109.163 port 58512 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:38.085586 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:38.096761 systemd-logind[1452]: New session 14 of user core.
Feb 13 20:31:38.103658 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 20:31:38.270141 sshd[3955]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:38.286366 systemd[1]: sshd@13-64.23.133.101:22-147.75.109.163:58512.service: Deactivated successfully.
Feb 13 20:31:38.290210 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 20:31:38.291984 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
Feb 13 20:31:38.293959 systemd-logind[1452]: Removed session 14.
Feb 13 20:31:43.291907 systemd[1]: Started sshd@14-64.23.133.101:22-147.75.109.163:41312.service - OpenSSH per-connection server daemon (147.75.109.163:41312).
Feb 13 20:31:43.331819 sshd[3968]: Accepted publickey for core from 147.75.109.163 port 41312 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:43.333897 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:43.340921 systemd-logind[1452]: New session 15 of user core.
Feb 13 20:31:43.348825 systemd[1]: Started session-15.scope - Session 15 of User core.
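The sshd and systemd-logind entries above repeat a fixed pattern per connection: Accepted publickey, pam_unix session opened, a numbered logind session and its session-N.scope, then the mirror-image teardown. When auditing a stretch of log like this, it can help to pair openings with closings by sshd PID. A small Go sketch that does so, assuming journal text in exactly this one-entry-per-line form on stdin (the regular expressions match only the pam_unix lines shown here):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Pairs "session opened" with "session closed" per sshd PID from journal
// text in the format shown above (assumption: one entry per line on stdin).
var (
	opened = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened`)
	closed = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed`)
)

func main() {
	open := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			open[m[1]] = true
		} else if m := closed.FindStringSubmatch(line); m != nil && open[m[1]] {
			fmt.Printf("sshd[%s]: session completed\n", m[1])
			delete(open, m[1])
		}
	}
	// Anything left open at EOF never logged a matching close.
	for pid := range open {
		fmt.Printf("sshd[%s]: session still open at end of log\n", pid)
	}
}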
Feb 13 20:31:43.490988 sshd[3968]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:43.494783 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit.
Feb 13 20:31:43.495907 systemd[1]: sshd@14-64.23.133.101:22-147.75.109.163:41312.service: Deactivated successfully.
Feb 13 20:31:43.499058 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 20:31:43.502296 systemd-logind[1452]: Removed session 15.
Feb 13 20:31:48.510826 systemd[1]: Started sshd@15-64.23.133.101:22-147.75.109.163:41326.service - OpenSSH per-connection server daemon (147.75.109.163:41326).
Feb 13 20:31:48.552840 sshd[3982]: Accepted publickey for core from 147.75.109.163 port 41326 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:48.554913 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:48.561725 systemd-logind[1452]: New session 16 of user core.
Feb 13 20:31:48.571789 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 20:31:48.703814 sshd[3982]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:48.708230 systemd[1]: sshd@15-64.23.133.101:22-147.75.109.163:41326.service: Deactivated successfully.
Feb 13 20:31:48.711729 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 20:31:48.713289 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit.
Feb 13 20:31:48.714501 systemd-logind[1452]: Removed session 16.
Feb 13 20:31:48.973514 kubelet[2487]: E0213 20:31:48.973463 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:31:49.974496 kubelet[2487]: E0213 20:31:49.972962 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:31:49.974496 kubelet[2487]: E0213 20:31:49.973935 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:31:53.717678 systemd[1]: Started sshd@16-64.23.133.101:22-147.75.109.163:41890.service - OpenSSH per-connection server daemon (147.75.109.163:41890).
Feb 13 20:31:53.769123 sshd[3995]: Accepted publickey for core from 147.75.109.163 port 41890 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:53.770008 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:53.775027 systemd-logind[1452]: New session 17 of user core.
Feb 13 20:31:53.778635 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 20:31:53.916504 sshd[3995]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:53.928873 systemd[1]: sshd@16-64.23.133.101:22-147.75.109.163:41890.service: Deactivated successfully.
Feb 13 20:31:53.931865 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:31:53.934591 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit.
Feb 13 20:31:53.945367 systemd[1]: Started sshd@17-64.23.133.101:22-147.75.109.163:41906.service - OpenSSH per-connection server daemon (147.75.109.163:41906).
Feb 13 20:31:53.947532 systemd-logind[1452]: Removed session 17.
Feb 13 20:31:53.983478 sshd[4008]: Accepted publickey for core from 147.75.109.163 port 41906 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:53.985294 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:53.991540 systemd-logind[1452]: New session 18 of user core.
Feb 13 20:31:53.998727 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:31:54.347068 sshd[4008]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:54.361908 systemd[1]: sshd@17-64.23.133.101:22-147.75.109.163:41906.service: Deactivated successfully.
Feb 13 20:31:54.364906 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:31:54.365699 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:31:54.370782 systemd[1]: Started sshd@18-64.23.133.101:22-147.75.109.163:41912.service - OpenSSH per-connection server daemon (147.75.109.163:41912).
Feb 13 20:31:54.375449 systemd-logind[1452]: Removed session 18.
Feb 13 20:31:54.435453 sshd[4019]: Accepted publickey for core from 147.75.109.163 port 41912 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:54.437824 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:54.447561 systemd-logind[1452]: New session 19 of user core.
Feb 13 20:31:54.453728 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:31:54.974602 kubelet[2487]: E0213 20:31:54.973675 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:31:56.421191 sshd[4019]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:56.442833 systemd[1]: Started sshd@19-64.23.133.101:22-147.75.109.163:41926.service - OpenSSH per-connection server daemon (147.75.109.163:41926).
Feb 13 20:31:56.444059 systemd[1]: sshd@18-64.23.133.101:22-147.75.109.163:41912.service: Deactivated successfully.
Feb 13 20:31:56.451927 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 20:31:56.458992 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit.
Feb 13 20:31:56.465503 systemd-logind[1452]: Removed session 19.
Feb 13 20:31:56.505465 sshd[4035]: Accepted publickey for core from 147.75.109.163 port 41926 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:56.505479 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:56.517596 systemd-logind[1452]: New session 20 of user core.
Feb 13 20:31:56.522797 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 20:31:56.881279 sshd[4035]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:56.892933 systemd[1]: sshd@19-64.23.133.101:22-147.75.109.163:41926.service: Deactivated successfully.
Feb 13 20:31:56.898333 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 20:31:56.900054 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit.
Feb 13 20:31:56.909090 systemd[1]: Started sshd@20-64.23.133.101:22-147.75.109.163:41934.service - OpenSSH per-connection server daemon (147.75.109.163:41934).
Feb 13 20:31:56.913920 systemd-logind[1452]: Removed session 20.
Feb 13 20:31:56.951438 sshd[4047]: Accepted publickey for core from 147.75.109.163 port 41934 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:31:56.953883 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:31:56.960867 systemd-logind[1452]: New session 21 of user core.
Feb 13 20:31:56.976675 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 20:31:57.131123 sshd[4047]: pam_unix(sshd:session): session closed for user core
Feb 13 20:31:57.135136 systemd[1]: sshd@20-64.23.133.101:22-147.75.109.163:41934.service: Deactivated successfully.
Feb 13 20:31:57.138833 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 20:31:57.143082 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
Feb 13 20:31:57.144833 systemd-logind[1452]: Removed session 21.
Feb 13 20:31:58.973333 kubelet[2487]: E0213 20:31:58.973173 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:31:58.973333 kubelet[2487]: E0213 20:31:58.973200 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:02.157714 systemd[1]: Started sshd@21-64.23.133.101:22-147.75.109.163:42564.service - OpenSSH per-connection server daemon (147.75.109.163:42564).
Feb 13 20:32:02.220839 sshd[4063]: Accepted publickey for core from 147.75.109.163 port 42564 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:32:02.222436 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:02.234837 systemd-logind[1452]: New session 22 of user core.
Feb 13 20:32:02.238858 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 20:32:02.414058 sshd[4063]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:02.420748 systemd[1]: sshd@21-64.23.133.101:22-147.75.109.163:42564.service: Deactivated successfully.
Feb 13 20:32:02.424126 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 20:32:02.426175 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
Feb 13 20:32:02.428281 systemd-logind[1452]: Removed session 22.
Feb 13 20:32:06.972882 kubelet[2487]: E0213 20:32:06.972759 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:07.434776 systemd[1]: Started sshd@22-64.23.133.101:22-147.75.109.163:42580.service - OpenSSH per-connection server daemon (147.75.109.163:42580).
Feb 13 20:32:07.478148 sshd[4080]: Accepted publickey for core from 147.75.109.163 port 42580 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:32:07.480184 sshd[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:07.485640 systemd-logind[1452]: New session 23 of user core.
Feb 13 20:32:07.493706 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:32:07.652497 sshd[4080]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:07.657118 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:32:07.657367 systemd[1]: sshd@22-64.23.133.101:22-147.75.109.163:42580.service: Deactivated successfully.
Feb 13 20:32:07.659544 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:32:07.663235 systemd-logind[1452]: Removed session 23.
Feb 13 20:32:12.677340 systemd[1]: Started sshd@23-64.23.133.101:22-147.75.109.163:39178.service - OpenSSH per-connection server daemon (147.75.109.163:39178).
Feb 13 20:32:12.717379 sshd[4093]: Accepted publickey for core from 147.75.109.163 port 39178 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:32:12.719765 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:12.727555 systemd-logind[1452]: New session 24 of user core.
Feb 13 20:32:12.736720 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 20:32:12.866831 sshd[4093]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:12.872026 systemd[1]: sshd@23-64.23.133.101:22-147.75.109.163:39178.service: Deactivated successfully.
Feb 13 20:32:12.875019 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 20:32:12.876021 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit.
Feb 13 20:32:12.877640 systemd-logind[1452]: Removed session 24.
Feb 13 20:32:12.973536 kubelet[2487]: E0213 20:32:12.973141 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:17.890920 systemd[1]: Started sshd@24-64.23.133.101:22-147.75.109.163:39186.service - OpenSSH per-connection server daemon (147.75.109.163:39186).
Feb 13 20:32:17.934513 sshd[4105]: Accepted publickey for core from 147.75.109.163 port 39186 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:32:17.936712 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:17.942292 systemd-logind[1452]: New session 25 of user core.
Feb 13 20:32:17.949736 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 20:32:18.111863 sshd[4105]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:18.125067 systemd[1]: sshd@24-64.23.133.101:22-147.75.109.163:39186.service: Deactivated successfully.
Feb 13 20:32:18.127677 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 20:32:18.129916 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit.
Feb 13 20:32:18.134853 systemd[1]: Started sshd@25-64.23.133.101:22-147.75.109.163:39188.service - OpenSSH per-connection server daemon (147.75.109.163:39188).
Feb 13 20:32:18.137511 systemd-logind[1452]: Removed session 25.
Feb 13 20:32:18.200784 sshd[4117]: Accepted publickey for core from 147.75.109.163 port 39188 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:32:18.202700 sshd[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:18.209206 systemd-logind[1452]: New session 26 of user core.
Feb 13 20:32:18.216748 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 20:32:19.717561 systemd[1]: run-containerd-runc-k8s.io-e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49-runc.xE0OGW.mount: Deactivated successfully.
Feb 13 20:32:19.723170 containerd[1477]: time="2025-02-13T20:32:19.723092532Z" level=info msg="StopContainer for \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\" with timeout 30 (s)"
Feb 13 20:32:19.730139 containerd[1477]: time="2025-02-13T20:32:19.730087300Z" level=info msg="Stop container \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\" with signal terminated"
Feb 13 20:32:19.743569 containerd[1477]: time="2025-02-13T20:32:19.743398061Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 20:32:19.757088 containerd[1477]: time="2025-02-13T20:32:19.757032350Z" level=info msg="StopContainer for \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\" with timeout 2 (s)"
Feb 13 20:32:19.757944 containerd[1477]: time="2025-02-13T20:32:19.757898211Z" level=info msg="Stop container \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\" with signal terminated"
Feb 13 20:32:19.759094 systemd[1]: cri-containerd-9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8.scope: Deactivated successfully.
Feb 13 20:32:19.773364 systemd-networkd[1373]: lxc_health: Link DOWN
Feb 13 20:32:19.773377 systemd-networkd[1373]: lxc_health: Lost carrier
Feb 13 20:32:19.805887 systemd[1]: cri-containerd-e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49.scope: Deactivated successfully.
Feb 13 20:32:19.806144 systemd[1]: cri-containerd-e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49.scope: Consumed 9.216s CPU time.
Feb 13 20:32:19.825228 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8-rootfs.mount: Deactivated successfully.
Feb 13 20:32:19.828725 containerd[1477]: time="2025-02-13T20:32:19.828370413Z" level=info msg="shim disconnected" id=9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8 namespace=k8s.io
Feb 13 20:32:19.828725 containerd[1477]: time="2025-02-13T20:32:19.828455640Z" level=warning msg="cleaning up after shim disconnected" id=9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8 namespace=k8s.io
Feb 13 20:32:19.828725 containerd[1477]: time="2025-02-13T20:32:19.828468763Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:32:19.851100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49-rootfs.mount: Deactivated successfully.
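The two StopContainer entries above ("with timeout 30", "with timeout 2") followed by "Stop container ... with signal terminated" show the stop contract: SIGTERM first, SIGKILL only if the grace period lapses. Below is a sketch of the same pattern using the containerd Go client; the "k8s.io" namespace (where CRI-managed containers live), the default socket path, and the abbreviated error handling are assumptions of the sketch, not details taken from this log.

package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// stopWithTimeout sends SIGTERM and escalates to SIGKILL after the grace
// period, mirroring the "StopContainer ... with timeout" flow in the log.
func stopWithTimeout(ctx context.Context, client *containerd.Client, id string, grace time.Duration) error {
	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		return err
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		return err
	}
	exitCh, err := task.Wait(ctx) // subscribe to the exit event first
	if err != nil {
		return err
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case <-exitCh: // exited within the grace period
		return nil
	case <-time.After(grace): // escalate, as the CRI contract requires
		return task.Kill(ctx, syscall.SIGKILL)
	}
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Container id and 2s grace period taken from the log entries above.
	if err := stopWithTimeout(ctx, client,
		"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49", 2*time.Second); err != nil {
		log.Fatal(err)
	}
}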
Feb 13 20:32:19.858791 containerd[1477]: time="2025-02-13T20:32:19.857436967Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:32:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 20:32:19.860033 containerd[1477]: time="2025-02-13T20:32:19.859997041Z" level=info msg="StopContainer for \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\" returns successfully"
Feb 13 20:32:19.860701 containerd[1477]: time="2025-02-13T20:32:19.860467011Z" level=info msg="shim disconnected" id=e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49 namespace=k8s.io
Feb 13 20:32:19.860701 containerd[1477]: time="2025-02-13T20:32:19.860710849Z" level=warning msg="cleaning up after shim disconnected" id=e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49 namespace=k8s.io
Feb 13 20:32:19.860855 containerd[1477]: time="2025-02-13T20:32:19.860729929Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:32:19.862297 containerd[1477]: time="2025-02-13T20:32:19.862270174Z" level=info msg="StopPodSandbox for \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\""
Feb 13 20:32:19.862698 containerd[1477]: time="2025-02-13T20:32:19.862660681Z" level=info msg="Container to stop \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:32:19.867406 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e-shm.mount: Deactivated successfully.
Feb 13 20:32:19.879907 systemd[1]: cri-containerd-3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e.scope: Deactivated successfully.
Feb 13 20:32:19.896175 containerd[1477]: time="2025-02-13T20:32:19.896113674Z" level=info msg="StopContainer for \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\" returns successfully"
Feb 13 20:32:19.896911 containerd[1477]: time="2025-02-13T20:32:19.896874105Z" level=info msg="StopPodSandbox for \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\""
Feb 13 20:32:19.897044 containerd[1477]: time="2025-02-13T20:32:19.896935300Z" level=info msg="Container to stop \"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:32:19.897044 containerd[1477]: time="2025-02-13T20:32:19.896955441Z" level=info msg="Container to stop \"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:32:19.897044 containerd[1477]: time="2025-02-13T20:32:19.896970673Z" level=info msg="Container to stop \"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:32:19.897044 containerd[1477]: time="2025-02-13T20:32:19.896985864Z" level=info msg="Container to stop \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:32:19.897044 containerd[1477]: time="2025-02-13T20:32:19.897003415Z" level=info msg="Container to stop \"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:32:19.911683 systemd[1]: cri-containerd-911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb.scope: Deactivated successfully.
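The "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" messages are informational: StopPodSandbox walks the sandbox's containers, skips the ones already exited, and then tears down the sandbox and its network. A sketch of the stop-then-remove sequence over the CRI API follows, reusing the sandbox id from the log; the socket path and client setup are the same assumptions as in the earlier sketch.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()
	id := "911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb" // sandbox id from the log

	// StopPodSandbox stops any still-running containers (already-exited ones,
	// as in the log, are skipped) and tears down the sandbox network.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
	// RemovePodSandbox then deletes it; both calls are idempotent by contract.
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
}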
Feb 13 20:32:19.932812 containerd[1477]: time="2025-02-13T20:32:19.932698360Z" level=info msg="shim disconnected" id=3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e namespace=k8s.io
Feb 13 20:32:19.933249 containerd[1477]: time="2025-02-13T20:32:19.933113101Z" level=warning msg="cleaning up after shim disconnected" id=3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e namespace=k8s.io
Feb 13 20:32:19.933249 containerd[1477]: time="2025-02-13T20:32:19.933140228Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:32:19.951395 containerd[1477]: time="2025-02-13T20:32:19.951250722Z" level=info msg="shim disconnected" id=911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb namespace=k8s.io
Feb 13 20:32:19.951395 containerd[1477]: time="2025-02-13T20:32:19.951328615Z" level=warning msg="cleaning up after shim disconnected" id=911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb namespace=k8s.io
Feb 13 20:32:19.951395 containerd[1477]: time="2025-02-13T20:32:19.951338152Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:32:19.975492 containerd[1477]: time="2025-02-13T20:32:19.973762276Z" level=info msg="TearDown network for sandbox \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\" successfully"
Feb 13 20:32:19.975492 containerd[1477]: time="2025-02-13T20:32:19.973856200Z" level=info msg="StopPodSandbox for \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\" returns successfully"
Feb 13 20:32:19.982644 containerd[1477]: time="2025-02-13T20:32:19.982587390Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:32:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 20:32:19.990155 containerd[1477]: time="2025-02-13T20:32:19.990028339Z" level=info msg="TearDown network for sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" successfully"
Feb 13 20:32:19.990155 containerd[1477]: time="2025-02-13T20:32:19.990070439Z" level=info msg="StopPodSandbox for \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" returns successfully"
Feb 13 20:32:20.105460 kubelet[2487]: I0213 20:32:20.105171 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4be3c56f-503f-4d34-80a0-3421bb9ca63c-clustermesh-secrets\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.105460 kubelet[2487]: I0213 20:32:20.105257 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-lib-modules\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.105460 kubelet[2487]: I0213 20:32:20.105280 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-host-proc-sys-kernel\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.105460 kubelet[2487]: I0213 20:32:20.105299 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-run\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.105460 kubelet[2487]: I0213 20:32:20.105316 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cni-path\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.105460 kubelet[2487]: I0213 20:32:20.105332 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-bpf-maps\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.106198 kubelet[2487]: I0213 20:32:20.105352 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-etc-cni-netd\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.106198 kubelet[2487]: I0213 20:32:20.105376 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj5xs\" (UniqueName: \"kubernetes.io/projected/76fa51ba-d835-48cc-9bb8-223b5f4d5047-kube-api-access-hj5xs\") pod \"76fa51ba-d835-48cc-9bb8-223b5f4d5047\" (UID: \"76fa51ba-d835-48cc-9bb8-223b5f4d5047\") "
Feb 13 20:32:20.107446 kubelet[2487]: I0213 20:32:20.105404 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-host-proc-sys-net\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.107446 kubelet[2487]: I0213 20:32:20.106379 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4be3c56f-503f-4d34-80a0-3421bb9ca63c-hubble-tls\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.107446 kubelet[2487]: I0213 20:32:20.106425 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqdvf\" (UniqueName: \"kubernetes.io/projected/4be3c56f-503f-4d34-80a0-3421bb9ca63c-kube-api-access-tqdvf\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.107446 kubelet[2487]: I0213 20:32:20.106445 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-hostproc\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.107446 kubelet[2487]: I0213 20:32:20.106461 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-xtables-lock\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.107446 kubelet[2487]: I0213 20:32:20.106478 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-cgroup\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.107722 kubelet[2487]: I0213 20:32:20.106496 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-config-path\") pod \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\" (UID: \"4be3c56f-503f-4d34-80a0-3421bb9ca63c\") "
Feb 13 20:32:20.107722 kubelet[2487]: I0213 20:32:20.106513 2487 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76fa51ba-d835-48cc-9bb8-223b5f4d5047-cilium-config-path\") pod \"76fa51ba-d835-48cc-9bb8-223b5f4d5047\" (UID: \"76fa51ba-d835-48cc-9bb8-223b5f4d5047\") "
Feb 13 20:32:20.109319 kubelet[2487]: I0213 20:32:20.109273 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fa51ba-d835-48cc-9bb8-223b5f4d5047-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "76fa51ba-d835-48cc-9bb8-223b5f4d5047" (UID: "76fa51ba-d835-48cc-9bb8-223b5f4d5047"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 20:32:20.109570 kubelet[2487]: I0213 20:32:20.109452 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be3c56f-503f-4d34-80a0-3421bb9ca63c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 20:32:20.109726 kubelet[2487]: I0213 20:32:20.109704 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:32:20.112847 kubelet[2487]: I0213 20:32:20.112790 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fa51ba-d835-48cc-9bb8-223b5f4d5047-kube-api-access-hj5xs" (OuterVolumeSpecName: "kube-api-access-hj5xs") pod "76fa51ba-d835-48cc-9bb8-223b5f4d5047" (UID: "76fa51ba-d835-48cc-9bb8-223b5f4d5047"). InnerVolumeSpecName "kube-api-access-hj5xs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 20:32:20.112969 kubelet[2487]: I0213 20:32:20.112880 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:32:20.112969 kubelet[2487]: I0213 20:32:20.112903 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:32:20.112969 kubelet[2487]: I0213 20:32:20.112921 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:32:20.112969 kubelet[2487]: I0213 20:32:20.112938 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cni-path" (OuterVolumeSpecName: "cni-path") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:32:20.112969 kubelet[2487]: I0213 20:32:20.112953 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:32:20.113105 kubelet[2487]: I0213 20:32:20.112975 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:32:20.113105 kubelet[2487]: I0213 20:32:20.113014 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:32:20.113325 kubelet[2487]: I0213 20:32:20.113292 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be3c56f-503f-4d34-80a0-3421bb9ca63c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 20:32:20.113475 kubelet[2487]: I0213 20:32:20.113459 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:32:20.115818 kubelet[2487]: I0213 20:32:20.115780 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 20:32:20.116047 kubelet[2487]: I0213 20:32:20.116021 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-hostproc" (OuterVolumeSpecName: "hostproc") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:32:20.116159 kubelet[2487]: I0213 20:32:20.116049 2487 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be3c56f-503f-4d34-80a0-3421bb9ca63c-kube-api-access-tqdvf" (OuterVolumeSpecName: "kube-api-access-tqdvf") pod "4be3c56f-503f-4d34-80a0-3421bb9ca63c" (UID: "4be3c56f-503f-4d34-80a0-3421bb9ca63c"). InnerVolumeSpecName "kube-api-access-tqdvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 20:32:20.207392 kubelet[2487]: I0213 20:32:20.207114 2487 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-bpf-maps\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207392 kubelet[2487]: I0213 20:32:20.207170 2487 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4be3c56f-503f-4d34-80a0-3421bb9ca63c-clustermesh-secrets\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207392 kubelet[2487]: I0213 20:32:20.207195 2487 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-lib-modules\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207392 kubelet[2487]: I0213 20:32:20.207211 2487 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-host-proc-sys-kernel\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207392 kubelet[2487]: I0213 20:32:20.207225 2487 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-run\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207392 kubelet[2487]: I0213 20:32:20.207237 2487 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cni-path\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207392 kubelet[2487]: I0213 20:32:20.207247 2487 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-etc-cni-netd\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207392 kubelet[2487]: I0213 20:32:20.207260 2487 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hj5xs\" (UniqueName: \"kubernetes.io/projected/76fa51ba-d835-48cc-9bb8-223b5f4d5047-kube-api-access-hj5xs\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207958 kubelet[2487]: I0213 20:32:20.207271 2487 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-host-proc-sys-net\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207958 kubelet[2487]: I0213 20:32:20.207284 2487 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4be3c56f-503f-4d34-80a0-3421bb9ca63c-hubble-tls\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207958 kubelet[2487]: I0213 20:32:20.207296 2487 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tqdvf\" (UniqueName: \"kubernetes.io/projected/4be3c56f-503f-4d34-80a0-3421bb9ca63c-kube-api-access-tqdvf\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207958 kubelet[2487]: I0213 20:32:20.207307 2487 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-hostproc\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207958 kubelet[2487]: I0213 20:32:20.207319 2487 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-xtables-lock\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207958 kubelet[2487]: I0213 20:32:20.207330 2487 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-cgroup\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207958 kubelet[2487]: I0213 20:32:20.207344 2487 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4be3c56f-503f-4d34-80a0-3421bb9ca63c-cilium-config-path\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.207958 kubelet[2487]: I0213 20:32:20.207356 2487 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76fa51ba-d835-48cc-9bb8-223b5f4d5047-cilium-config-path\") on node \"ci-4081.3.1-6-72a75d9253\" DevicePath \"\""
Feb 13 20:32:20.402911 kubelet[2487]: I0213 20:32:20.402765 2487 scope.go:117] "RemoveContainer" containerID="9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8"
Feb 13 20:32:20.405979 containerd[1477]: time="2025-02-13T20:32:20.405607031Z" level=info msg="RemoveContainer for \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\""
Feb 13 20:32:20.412198 containerd[1477]: time="2025-02-13T20:32:20.412080911Z" level=info msg="RemoveContainer for \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\" returns successfully"
Feb 13 20:32:20.412641 systemd[1]: Removed slice kubepods-besteffort-pod76fa51ba_d835_48cc_9bb8_223b5f4d5047.slice - libcontainer container kubepods-besteffort-pod76fa51ba_d835_48cc_9bb8_223b5f4d5047.slice.
Feb 13 20:32:20.417368 kubelet[2487]: I0213 20:32:20.417306 2487 scope.go:117] "RemoveContainer" containerID="9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8"
Feb 13 20:32:20.420230 systemd[1]: Removed slice kubepods-burstable-pod4be3c56f_503f_4d34_80a0_3421bb9ca63c.slice - libcontainer container kubepods-burstable-pod4be3c56f_503f_4d34_80a0_3421bb9ca63c.slice.
Feb 13 20:32:20.420602 systemd[1]: kubepods-burstable-pod4be3c56f_503f_4d34_80a0_3421bb9ca63c.slice: Consumed 9.330s CPU time.
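With all volumes detached and the pod slices removed, kubelet deletes the dead containers. The entries that follow show the second half of that cleanup: once RemoveContainer has succeeded, a later ContainerStatus query fails with gRPC NotFound, which pod_container_deletor logs but effectively treats as "already gone" rather than as a real failure. A sketch of that idempotent pattern against the CRI API, with the same assumed client setup as the earlier sketches:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()
	id := "9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8" // container id from the log

	if _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id}); err != nil {
		log.Fatal(err)
	}
	// A follow-up status query now fails with gRPC NotFound, exactly as in the
	// entries below; cleanup code treats that as "already gone", not an error.
	_, err = rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		log.Printf("container %s already removed", id)
	} else if err != nil {
		log.Fatal(err)
	}
}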
Feb 13 20:32:20.440795 containerd[1477]: time="2025-02-13T20:32:20.422038396Z" level=error msg="ContainerStatus for \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\": not found"
Feb 13 20:32:20.441217 kubelet[2487]: E0213 20:32:20.441117 2487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\": not found" containerID="9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8"
Feb 13 20:32:20.452203 kubelet[2487]: I0213 20:32:20.441160 2487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8"} err="failed to get container status \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e4d642834a65271546a361ce7c51449da2bb50e1a6867b2209db4d1d168f6a8\": not found"
Feb 13 20:32:20.452203 kubelet[2487]: I0213 20:32:20.452206 2487 scope.go:117] "RemoveContainer" containerID="e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49"
Feb 13 20:32:20.454924 containerd[1477]: time="2025-02-13T20:32:20.454607215Z" level=info msg="RemoveContainer for \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\""
Feb 13 20:32:20.460020 containerd[1477]: time="2025-02-13T20:32:20.459965396Z" level=info msg="RemoveContainer for \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\" returns successfully"
Feb 13 20:32:20.460683 kubelet[2487]: I0213 20:32:20.460507 2487 scope.go:117] "RemoveContainer" containerID="968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61"
Feb 13 20:32:20.464314 containerd[1477]: time="2025-02-13T20:32:20.464041210Z" level=info msg="RemoveContainer for \"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61\""
Feb 13 20:32:20.467215 containerd[1477]: time="2025-02-13T20:32:20.467174566Z" level=info msg="RemoveContainer for \"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61\" returns successfully"
Feb 13 20:32:20.468477 kubelet[2487]: I0213 20:32:20.468176 2487 scope.go:117] "RemoveContainer" containerID="68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4"
Feb 13 20:32:20.470374 containerd[1477]: time="2025-02-13T20:32:20.470316455Z" level=info msg="RemoveContainer for \"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4\""
Feb 13 20:32:20.472762 containerd[1477]: time="2025-02-13T20:32:20.472719456Z" level=info msg="RemoveContainer for \"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4\" returns successfully"
Feb 13 20:32:20.473274 kubelet[2487]: I0213 20:32:20.473138 2487 scope.go:117] "RemoveContainer" containerID="b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd"
Feb 13 20:32:20.474696 containerd[1477]: time="2025-02-13T20:32:20.474642030Z" level=info msg="RemoveContainer for \"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd\""
Feb 13 20:32:20.482622 containerd[1477]: time="2025-02-13T20:32:20.482455764Z" level=info msg="RemoveContainer for \"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd\" returns successfully"
Feb 13 20:32:20.482972 kubelet[2487]: I0213 20:32:20.482934 2487 scope.go:117] "RemoveContainer" containerID="b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7"
Feb 13 20:32:20.484736 containerd[1477]: time="2025-02-13T20:32:20.484699106Z" level=info msg="RemoveContainer for \"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7\""
Feb 13 20:32:20.486978 containerd[1477]: time="2025-02-13T20:32:20.486929176Z" level=info msg="RemoveContainer for \"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7\" returns successfully"
Feb 13 20:32:20.487469 kubelet[2487]: I0213 20:32:20.487344 2487 scope.go:117] "RemoveContainer" containerID="e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49"
Feb 13 20:32:20.488240 containerd[1477]: time="2025-02-13T20:32:20.487840321Z" level=error msg="ContainerStatus for \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\": not found"
Feb 13 20:32:20.488474 kubelet[2487]: E0213 20:32:20.488065 2487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\": not found" containerID="e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49"
Feb 13 20:32:20.488474 kubelet[2487]: I0213 20:32:20.488103 2487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49"} err="failed to get container status \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4cfbdce74360945d9b96f31597fb48a60cc805b352d1fad6c74f1b607fedc49\": not found"
Feb 13 20:32:20.488474 kubelet[2487]: I0213 20:32:20.488142 2487 scope.go:117] "RemoveContainer" containerID="968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61"
Feb 13 20:32:20.488902 containerd[1477]: time="2025-02-13T20:32:20.488818865Z" level=error msg="ContainerStatus for \"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61\": not found"
Feb 13 20:32:20.489022 kubelet[2487]: E0213 20:32:20.488988 2487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61\": not found" containerID="968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61"
Feb 13 20:32:20.489091 kubelet[2487]: I0213 20:32:20.489025 2487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61"} err="failed to get container status \"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61\": rpc error: code = NotFound desc = an error occurred when try to find container \"968d60d061e4dc132c8cc574abb813d43489c8b91038ca748085d3a6a652ec61\": not found"
Feb 13 20:32:20.489091 kubelet[2487]: I0213 20:32:20.489049 2487 scope.go:117] "RemoveContainer" containerID="68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4"
Feb 13 20:32:20.489360 containerd[1477]: time="2025-02-13T20:32:20.489233973Z" level=error msg="ContainerStatus for \"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4\": not found"
Feb 13 20:32:20.489544 kubelet[2487]: E0213 20:32:20.489517 2487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4\": not found" containerID="68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4"
Feb 13 20:32:20.489604 kubelet[2487]: I0213 20:32:20.489555 2487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4"} err="failed to get container status \"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"68ba3c165f544d1eaf0f0ed13e21971029de7648970b0f77b9aff63e31bec7a4\": not found"
Feb 13 20:32:20.489604 kubelet[2487]: I0213 20:32:20.489590 2487 scope.go:117] "RemoveContainer" containerID="b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd"
Feb 13 20:32:20.489842 containerd[1477]: time="2025-02-13T20:32:20.489808354Z" level=error msg="ContainerStatus for \"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd\": not found"
Feb 13 20:32:20.489949 kubelet[2487]: E0213 20:32:20.489929 2487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd\": not found" containerID="b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd"
Feb 13 20:32:20.490006 kubelet[2487]: I0213 20:32:20.489952 2487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd"} err="failed to get container status \"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6a7982f3ed91e4edefad3683e7296eecf3b64ebe72b1ba836a63f6fac5e94fd\": not found"
Feb 13 20:32:20.490006 kubelet[2487]: I0213 20:32:20.489979 2487 scope.go:117] "RemoveContainer" containerID="b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7"
Feb 13 20:32:20.490263 containerd[1477]: time="2025-02-13T20:32:20.490185571Z" level=error msg="ContainerStatus for \"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7\": not found"
Feb 13 20:32:20.490304 kubelet[2487]: E0213 20:32:20.490285 2487 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7\": not found" containerID="b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7"
Feb 13 20:32:20.490332 kubelet[2487]: I0213 20:32:20.490310 2487 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7"} err="failed to get container status \"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"b267b5b30e0c0ef562f6c496d86a6a053fbe417c1eb687bf4fc31adf067b76d7\": not found"
Feb 13 20:32:20.713557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e-rootfs.mount: Deactivated successfully.
Feb 13 20:32:20.714553 systemd[1]: var-lib-kubelet-pods-76fa51ba\x2dd835\x2d48cc\x2d9bb8\x2d223b5f4d5047-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhj5xs.mount: Deactivated successfully.
Feb 13 20:32:20.714667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb-rootfs.mount: Deactivated successfully.
Feb 13 20:32:20.714748 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb-shm.mount: Deactivated successfully.
Feb 13 20:32:20.714834 systemd[1]: var-lib-kubelet-pods-4be3c56f\x2d503f\x2d4d34\x2d80a0\x2d3421bb9ca63c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtqdvf.mount: Deactivated successfully.
Feb 13 20:32:20.714921 systemd[1]: var-lib-kubelet-pods-4be3c56f\x2d503f\x2d4d34\x2d80a0\x2d3421bb9ca63c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 20:32:20.715001 systemd[1]: var-lib-kubelet-pods-4be3c56f\x2d503f\x2d4d34\x2d80a0\x2d3421bb9ca63c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 20:32:21.598547 sshd[4117]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:21.608181 systemd[1]: sshd@25-64.23.133.101:22-147.75.109.163:39188.service: Deactivated successfully.
Feb 13 20:32:21.611545 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 20:32:21.614668 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit.
Feb 13 20:32:21.620535 systemd[1]: Started sshd@26-64.23.133.101:22-147.75.109.163:41626.service - OpenSSH per-connection server daemon (147.75.109.163:41626).
Feb 13 20:32:21.622565 systemd-logind[1452]: Removed session 26.
Feb 13 20:32:21.672227 sshd[4278]: Accepted publickey for core from 147.75.109.163 port 41626 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:32:21.676375 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:21.686292 systemd-logind[1452]: New session 27 of user core.
Feb 13 20:32:21.691750 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 20:32:21.975741 kubelet[2487]: I0213 20:32:21.975641 2487 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be3c56f-503f-4d34-80a0-3421bb9ca63c" path="/var/lib/kubelet/pods/4be3c56f-503f-4d34-80a0-3421bb9ca63c/volumes"
Feb 13 20:32:21.977757 kubelet[2487]: I0213 20:32:21.977429 2487 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76fa51ba-d835-48cc-9bb8-223b5f4d5047" path="/var/lib/kubelet/pods/76fa51ba-d835-48cc-9bb8-223b5f4d5047/volumes"
Feb 13 20:32:22.265628 sshd[4278]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:22.279122 systemd[1]: sshd@26-64.23.133.101:22-147.75.109.163:41626.service: Deactivated successfully.
Feb 13 20:32:22.282568 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 20:32:22.285539 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit.
Feb 13 20:32:22.295891 systemd[1]: Started sshd@27-64.23.133.101:22-147.75.109.163:41630.service - OpenSSH per-connection server daemon (147.75.109.163:41630).
Feb 13 20:32:22.301162 systemd-logind[1452]: Removed session 27.
Feb 13 20:32:22.334832 kubelet[2487]: E0213 20:32:22.334788 2487 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4be3c56f-503f-4d34-80a0-3421bb9ca63c" containerName="mount-cgroup"
Feb 13 20:32:22.334832 kubelet[2487]: E0213 20:32:22.334815 2487 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4be3c56f-503f-4d34-80a0-3421bb9ca63c" containerName="clean-cilium-state"
Feb 13 20:32:22.334832 kubelet[2487]: E0213 20:32:22.334823 2487 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4be3c56f-503f-4d34-80a0-3421bb9ca63c" containerName="apply-sysctl-overwrites"
Feb 13 20:32:22.334832 kubelet[2487]: E0213 20:32:22.334829 2487 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4be3c56f-503f-4d34-80a0-3421bb9ca63c" containerName="mount-bpf-fs"
Feb 13 20:32:22.334832 kubelet[2487]: E0213 20:32:22.334835 2487 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76fa51ba-d835-48cc-9bb8-223b5f4d5047" containerName="cilium-operator"
Feb 13 20:32:22.334832 kubelet[2487]: E0213 20:32:22.334842 2487 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4be3c56f-503f-4d34-80a0-3421bb9ca63c" containerName="cilium-agent"
Feb 13 20:32:22.335225 kubelet[2487]: I0213 20:32:22.334866 2487 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fa51ba-d835-48cc-9bb8-223b5f4d5047" containerName="cilium-operator"
Feb 13 20:32:22.335225 kubelet[2487]: I0213 20:32:22.334874 2487 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be3c56f-503f-4d34-80a0-3421bb9ca63c" containerName="cilium-agent"
Feb 13 20:32:22.349673 systemd[1]: Created slice kubepods-burstable-pod62bab04b_cb70_4bd5_8127_db06e2654b5b.slice - libcontainer container kubepods-burstable-pod62bab04b_cb70_4bd5_8127_db06e2654b5b.slice.
Feb 13 20:32:22.368448 sshd[4289]: Accepted publickey for core from 147.75.109.163 port 41630 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:32:22.370922 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:22.383702 systemd-logind[1452]: New session 28 of user core.
Feb 13 20:32:22.389734 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 20:32:22.421008 kubelet[2487]: I0213 20:32:22.420973 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62bab04b-cb70-4bd5-8127-db06e2654b5b-cni-path\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421008 kubelet[2487]: I0213 20:32:22.421007 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62bab04b-cb70-4bd5-8127-db06e2654b5b-clustermesh-secrets\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421008 kubelet[2487]: I0213 20:32:22.421027 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62bab04b-cb70-4bd5-8127-db06e2654b5b-host-proc-sys-kernel\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421279 kubelet[2487]: I0213 20:32:22.421049 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62bab04b-cb70-4bd5-8127-db06e2654b5b-etc-cni-netd\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421279 kubelet[2487]: I0213 20:32:22.421064 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/62bab04b-cb70-4bd5-8127-db06e2654b5b-cilium-ipsec-secrets\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421279 kubelet[2487]: I0213 20:32:22.421082 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62bab04b-cb70-4bd5-8127-db06e2654b5b-host-proc-sys-net\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421279 kubelet[2487]: I0213 20:32:22.421097 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62bab04b-cb70-4bd5-8127-db06e2654b5b-lib-modules\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421279 kubelet[2487]: I0213 20:32:22.421111 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62bab04b-cb70-4bd5-8127-db06e2654b5b-xtables-lock\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421279 kubelet[2487]: I0213 20:32:22.421126 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62bab04b-cb70-4bd5-8127-db06e2654b5b-cilium-config-path\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421519 kubelet[2487]: I0213 20:32:22.421139 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62bab04b-cb70-4bd5-8127-db06e2654b5b-hubble-tls\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421519 kubelet[2487]: I0213 20:32:22.421152 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7phbl\" (UniqueName: \"kubernetes.io/projected/62bab04b-cb70-4bd5-8127-db06e2654b5b-kube-api-access-7phbl\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421519 kubelet[2487]: I0213 20:32:22.421168 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62bab04b-cb70-4bd5-8127-db06e2654b5b-cilium-run\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421519 kubelet[2487]: I0213 20:32:22.421181 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62bab04b-cb70-4bd5-8127-db06e2654b5b-bpf-maps\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421519 kubelet[2487]: I0213 20:32:22.421198 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62bab04b-cb70-4bd5-8127-db06e2654b5b-hostproc\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.421519 kubelet[2487]: I0213 20:32:22.421211 2487 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62bab04b-cb70-4bd5-8127-db06e2654b5b-cilium-cgroup\") pod \"cilium-5lszg\" (UID: \"62bab04b-cb70-4bd5-8127-db06e2654b5b\") " pod="kube-system/cilium-5lszg"
Feb 13 20:32:22.453601 sshd[4289]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:22.466059 systemd[1]: sshd@27-64.23.133.101:22-147.75.109.163:41630.service: Deactivated successfully.
Feb 13 20:32:22.471852 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 20:32:22.473376 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit.
Feb 13 20:32:22.489569 systemd[1]: Started sshd@28-64.23.133.101:22-147.75.109.163:41642.service - OpenSSH per-connection server daemon (147.75.109.163:41642).
Feb 13 20:32:22.492406 systemd-logind[1452]: Removed session 28.
Feb 13 20:32:22.535535 sshd[4297]: Accepted publickey for core from 147.75.109.163 port 41642 ssh2: RSA SHA256:4SCJERdY5B7tDQnoBYrrgc0V/0XAwBoo8khzPBgeVxg
Feb 13 20:32:22.542318 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:32:22.582533 systemd-logind[1452]: New session 29 of user core.
Feb 13 20:32:22.590770 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 20:32:22.658310 kubelet[2487]: E0213 20:32:22.656446 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:22.658566 containerd[1477]: time="2025-02-13T20:32:22.658519888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5lszg,Uid:62bab04b-cb70-4bd5-8127-db06e2654b5b,Namespace:kube-system,Attempt:0,}"
Feb 13 20:32:22.704047 containerd[1477]: time="2025-02-13T20:32:22.703256954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:32:22.704047 containerd[1477]: time="2025-02-13T20:32:22.703376881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:32:22.704047 containerd[1477]: time="2025-02-13T20:32:22.703400543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:32:22.704047 containerd[1477]: time="2025-02-13T20:32:22.703546545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:32:22.743703 systemd[1]: Started cri-containerd-6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88.scope - libcontainer container 6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88.
Feb 13 20:32:22.797283 containerd[1477]: time="2025-02-13T20:32:22.795334973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5lszg,Uid:62bab04b-cb70-4bd5-8127-db06e2654b5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88\""
Feb 13 20:32:22.797757 kubelet[2487]: E0213 20:32:22.796772 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:22.800703 containerd[1477]: time="2025-02-13T20:32:22.800599046Z" level=info msg="CreateContainer within sandbox \"6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 20:32:22.816794 containerd[1477]: time="2025-02-13T20:32:22.816734525Z" level=info msg="CreateContainer within sandbox \"6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4c427f74ce0e4433691337bdbe3d3cca266053f7f7c798f61333d4274513b440\""
Feb 13 20:32:22.817898 containerd[1477]: time="2025-02-13T20:32:22.817737237Z" level=info msg="StartContainer for \"4c427f74ce0e4433691337bdbe3d3cca266053f7f7c798f61333d4274513b440\""
Feb 13 20:32:22.850709 systemd[1]: Started cri-containerd-4c427f74ce0e4433691337bdbe3d3cca266053f7f7c798f61333d4274513b440.scope - libcontainer container 4c427f74ce0e4433691337bdbe3d3cca266053f7f7c798f61333d4274513b440.
Feb 13 20:32:22.885447 containerd[1477]: time="2025-02-13T20:32:22.883739917Z" level=info msg="StartContainer for \"4c427f74ce0e4433691337bdbe3d3cca266053f7f7c798f61333d4274513b440\" returns successfully"
Feb 13 20:32:22.895965 systemd[1]: cri-containerd-4c427f74ce0e4433691337bdbe3d3cca266053f7f7c798f61333d4274513b440.scope: Deactivated successfully.
Feb 13 20:32:22.930200 containerd[1477]: time="2025-02-13T20:32:22.930129076Z" level=info msg="shim disconnected" id=4c427f74ce0e4433691337bdbe3d3cca266053f7f7c798f61333d4274513b440 namespace=k8s.io
Feb 13 20:32:22.930200 containerd[1477]: time="2025-02-13T20:32:22.930185838Z" level=warning msg="cleaning up after shim disconnected" id=4c427f74ce0e4433691337bdbe3d3cca266053f7f7c798f61333d4274513b440 namespace=k8s.io
Feb 13 20:32:22.930492 containerd[1477]: time="2025-02-13T20:32:22.930217246Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:32:23.424106 kubelet[2487]: E0213 20:32:23.424062 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:23.426324 containerd[1477]: time="2025-02-13T20:32:23.426283523Z" level=info msg="CreateContainer within sandbox \"6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 20:32:23.438672 containerd[1477]: time="2025-02-13T20:32:23.438610392Z" level=info msg="CreateContainer within sandbox \"6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f6d8ce7fa06e0b3467aa2f801cd7434ffa2f88f25e71c7f51a3e2a7c5355d5a0\""
Feb 13 20:32:23.440945 containerd[1477]: time="2025-02-13T20:32:23.439431719Z" level=info msg="StartContainer for \"f6d8ce7fa06e0b3467aa2f801cd7434ffa2f88f25e71c7f51a3e2a7c5355d5a0\""
Feb 13 20:32:23.491752 systemd[1]: Started cri-containerd-f6d8ce7fa06e0b3467aa2f801cd7434ffa2f88f25e71c7f51a3e2a7c5355d5a0.scope - libcontainer container f6d8ce7fa06e0b3467aa2f801cd7434ffa2f88f25e71c7f51a3e2a7c5355d5a0.
Feb 13 20:32:23.523655 containerd[1477]: time="2025-02-13T20:32:23.523613203Z" level=info msg="StartContainer for \"f6d8ce7fa06e0b3467aa2f801cd7434ffa2f88f25e71c7f51a3e2a7c5355d5a0\" returns successfully"
Feb 13 20:32:23.536545 systemd[1]: cri-containerd-f6d8ce7fa06e0b3467aa2f801cd7434ffa2f88f25e71c7f51a3e2a7c5355d5a0.scope: Deactivated successfully.
Feb 13 20:32:23.569261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6d8ce7fa06e0b3467aa2f801cd7434ffa2f88f25e71c7f51a3e2a7c5355d5a0-rootfs.mount: Deactivated successfully.
Feb 13 20:32:23.574485 containerd[1477]: time="2025-02-13T20:32:23.574367106Z" level=info msg="shim disconnected" id=f6d8ce7fa06e0b3467aa2f801cd7434ffa2f88f25e71c7f51a3e2a7c5355d5a0 namespace=k8s.io
Feb 13 20:32:23.574485 containerd[1477]: time="2025-02-13T20:32:23.574478069Z" level=warning msg="cleaning up after shim disconnected" id=f6d8ce7fa06e0b3467aa2f801cd7434ffa2f88f25e71c7f51a3e2a7c5355d5a0 namespace=k8s.io
Feb 13 20:32:23.574485 containerd[1477]: time="2025-02-13T20:32:23.574487591Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:32:23.960542 containerd[1477]: time="2025-02-13T20:32:23.960498980Z" level=info msg="StopPodSandbox for \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\""
Feb 13 20:32:23.960978 containerd[1477]: time="2025-02-13T20:32:23.960618822Z" level=info msg="TearDown network for sandbox \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\" successfully"
Feb 13 20:32:23.960978 containerd[1477]: time="2025-02-13T20:32:23.960637940Z" level=info msg="StopPodSandbox for \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\" returns successfully"
Feb 13 20:32:23.961340 containerd[1477]: time="2025-02-13T20:32:23.961304668Z" level=info msg="RemovePodSandbox for \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\""
Feb 13 20:32:23.961450 containerd[1477]: time="2025-02-13T20:32:23.961361806Z" level=info msg="Forcibly stopping sandbox \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\""
Feb 13 20:32:23.961526 containerd[1477]: time="2025-02-13T20:32:23.961502425Z" level=info msg="TearDown network for sandbox \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\" successfully"
Feb 13 20:32:23.964510 containerd[1477]: time="2025-02-13T20:32:23.964459225Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:32:23.964671 containerd[1477]: time="2025-02-13T20:32:23.964533010Z" level=info msg="RemovePodSandbox \"3710ddc725a1871fcfd2bad6af524a6510efcc6cd74068281fba7a93a0d2256e\" returns successfully"
Feb 13 20:32:23.965169 containerd[1477]: time="2025-02-13T20:32:23.965144564Z" level=info msg="StopPodSandbox for \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\""
Feb 13 20:32:23.965243 containerd[1477]: time="2025-02-13T20:32:23.965227539Z" level=info msg="TearDown network for sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" successfully"
Feb 13 20:32:23.965243 containerd[1477]: time="2025-02-13T20:32:23.965240814Z" level=info msg="StopPodSandbox for \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" returns successfully"
Feb 13 20:32:23.965765 containerd[1477]: time="2025-02-13T20:32:23.965728968Z" level=info msg="RemovePodSandbox for \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\""
Feb 13 20:32:23.966019 containerd[1477]: time="2025-02-13T20:32:23.965887334Z" level=info msg="Forcibly stopping sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\""
Feb 13 20:32:23.966019 containerd[1477]: time="2025-02-13T20:32:23.965964034Z" level=info msg="TearDown network for sandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" successfully"
Feb 13 20:32:23.968601 containerd[1477]: time="2025-02-13T20:32:23.968501339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:32:23.968601 containerd[1477]: time="2025-02-13T20:32:23.968560687Z" level=info msg="RemovePodSandbox \"911d7fd409fd411ac2e5a327d16ff6a1cf40987232f9a6ab21ace859e895ceeb\" returns successfully"
Feb 13 20:32:24.104065 kubelet[2487]: E0213 20:32:24.104016 2487 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:32:24.431507 kubelet[2487]: E0213 20:32:24.429591 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:24.434194 containerd[1477]: time="2025-02-13T20:32:24.434148921Z" level=info msg="CreateContainer within sandbox \"6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 20:32:24.451594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301169377.mount: Deactivated successfully.
Feb 13 20:32:24.459296 containerd[1477]: time="2025-02-13T20:32:24.459232216Z" level=info msg="CreateContainer within sandbox \"6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"786e7c24635bdf22c0f04e8ff483f3c837b7a45cfefa20c70c66922b42634124\""
Feb 13 20:32:24.460690 containerd[1477]: time="2025-02-13T20:32:24.460641124Z" level=info msg="StartContainer for \"786e7c24635bdf22c0f04e8ff483f3c837b7a45cfefa20c70c66922b42634124\""
Feb 13 20:32:24.498724 systemd[1]: Started cri-containerd-786e7c24635bdf22c0f04e8ff483f3c837b7a45cfefa20c70c66922b42634124.scope - libcontainer container 786e7c24635bdf22c0f04e8ff483f3c837b7a45cfefa20c70c66922b42634124.
Feb 13 20:32:24.536708 containerd[1477]: time="2025-02-13T20:32:24.535948854Z" level=info msg="StartContainer for \"786e7c24635bdf22c0f04e8ff483f3c837b7a45cfefa20c70c66922b42634124\" returns successfully"
Feb 13 20:32:24.547299 systemd[1]: cri-containerd-786e7c24635bdf22c0f04e8ff483f3c837b7a45cfefa20c70c66922b42634124.scope: Deactivated successfully.
Feb 13 20:32:24.578040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-786e7c24635bdf22c0f04e8ff483f3c837b7a45cfefa20c70c66922b42634124-rootfs.mount: Deactivated successfully.
Feb 13 20:32:24.581068 containerd[1477]: time="2025-02-13T20:32:24.580609860Z" level=info msg="shim disconnected" id=786e7c24635bdf22c0f04e8ff483f3c837b7a45cfefa20c70c66922b42634124 namespace=k8s.io
Feb 13 20:32:24.581068 containerd[1477]: time="2025-02-13T20:32:24.580681137Z" level=warning msg="cleaning up after shim disconnected" id=786e7c24635bdf22c0f04e8ff483f3c837b7a45cfefa20c70c66922b42634124 namespace=k8s.io
Feb 13 20:32:24.581068 containerd[1477]: time="2025-02-13T20:32:24.580696510Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:32:25.434342 kubelet[2487]: E0213 20:32:25.434053 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:25.437846 containerd[1477]: time="2025-02-13T20:32:25.437808293Z" level=info msg="CreateContainer within sandbox \"6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 20:32:25.464921 containerd[1477]: time="2025-02-13T20:32:25.464815000Z" level=info msg="CreateContainer within sandbox \"6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2edb4e833101744b808c3a7f1488d1d178d3f921e262fef0aa7ce67a300c3075\""
Feb 13 20:32:25.466504 containerd[1477]: time="2025-02-13T20:32:25.466074032Z" level=info msg="StartContainer for \"2edb4e833101744b808c3a7f1488d1d178d3f921e262fef0aa7ce67a300c3075\""
Feb 13 20:32:25.466494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378637039.mount: Deactivated successfully.
Feb 13 20:32:25.505885 systemd[1]: Started cri-containerd-2edb4e833101744b808c3a7f1488d1d178d3f921e262fef0aa7ce67a300c3075.scope - libcontainer container 2edb4e833101744b808c3a7f1488d1d178d3f921e262fef0aa7ce67a300c3075.
Feb 13 20:32:25.548370 systemd[1]: cri-containerd-2edb4e833101744b808c3a7f1488d1d178d3f921e262fef0aa7ce67a300c3075.scope: Deactivated successfully.
Feb 13 20:32:25.556491 containerd[1477]: time="2025-02-13T20:32:25.555362109Z" level=info msg="StartContainer for \"2edb4e833101744b808c3a7f1488d1d178d3f921e262fef0aa7ce67a300c3075\" returns successfully"
Feb 13 20:32:25.558930 containerd[1477]: time="2025-02-13T20:32:25.550136481Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62bab04b_cb70_4bd5_8127_db06e2654b5b.slice/cri-containerd-2edb4e833101744b808c3a7f1488d1d178d3f921e262fef0aa7ce67a300c3075.scope/memory.events\": no such file or directory"
Feb 13 20:32:25.583943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2edb4e833101744b808c3a7f1488d1d178d3f921e262fef0aa7ce67a300c3075-rootfs.mount: Deactivated successfully.
Feb 13 20:32:25.588463 containerd[1477]: time="2025-02-13T20:32:25.588246320Z" level=info msg="shim disconnected" id=2edb4e833101744b808c3a7f1488d1d178d3f921e262fef0aa7ce67a300c3075 namespace=k8s.io
Feb 13 20:32:25.588739 containerd[1477]: time="2025-02-13T20:32:25.588497358Z" level=warning msg="cleaning up after shim disconnected" id=2edb4e833101744b808c3a7f1488d1d178d3f921e262fef0aa7ce67a300c3075 namespace=k8s.io
Feb 13 20:32:25.588739 containerd[1477]: time="2025-02-13T20:32:25.588526306Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:32:25.609019 containerd[1477]: time="2025-02-13T20:32:25.608952604Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:32:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 20:32:26.439693 kubelet[2487]: E0213 20:32:26.439379 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:26.442630 containerd[1477]: time="2025-02-13T20:32:26.442263913Z" level=info msg="CreateContainer within sandbox \"6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 20:32:26.460076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3619643361.mount: Deactivated successfully.
Feb 13 20:32:26.461608 containerd[1477]: time="2025-02-13T20:32:26.460990582Z" level=info msg="CreateContainer within sandbox \"6e62fa25633da4f0f1d0ddabe25531270514a96d720ab333ecc2ee05cce7cc88\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"506a3bdd86f80c8f56ea2e6e53ca8c1e209e07ce83fb9fa1564629bc9fb40e8c\""
Feb 13 20:32:26.463527 containerd[1477]: time="2025-02-13T20:32:26.463201411Z" level=info msg="StartContainer for \"506a3bdd86f80c8f56ea2e6e53ca8c1e209e07ce83fb9fa1564629bc9fb40e8c\""
Feb 13 20:32:26.512814 systemd[1]: Started cri-containerd-506a3bdd86f80c8f56ea2e6e53ca8c1e209e07ce83fb9fa1564629bc9fb40e8c.scope - libcontainer container 506a3bdd86f80c8f56ea2e6e53ca8c1e209e07ce83fb9fa1564629bc9fb40e8c.
Feb 13 20:32:26.610199 containerd[1477]: time="2025-02-13T20:32:26.609823845Z" level=info msg="StartContainer for \"506a3bdd86f80c8f56ea2e6e53ca8c1e209e07ce83fb9fa1564629bc9fb40e8c\" returns successfully"
Feb 13 20:32:26.710137 kubelet[2487]: I0213 20:32:26.710003 2487 setters.go:600] "Node became not ready" node="ci-4081.3.1-6-72a75d9253" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T20:32:26Z","lastTransitionTime":"2025-02-13T20:32:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 20:32:27.131445 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 20:32:27.446137 kubelet[2487]: E0213 20:32:27.445945 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:27.472944 kubelet[2487]: I0213 20:32:27.472216 2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5lszg" podStartSLOduration=5.472177202 podStartE2EDuration="5.472177202s" podCreationTimestamp="2025-02-13 20:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:32:27.471876552 +0000 UTC m=+123.701033897" watchObservedRunningTime="2025-02-13 20:32:27.472177202 +0000 UTC m=+123.701334564"
Feb 13 20:32:28.659344 kubelet[2487]: E0213 20:32:28.659292 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:30.522717 systemd-networkd[1373]: lxc_health: Link UP
Feb 13 20:32:30.528757 systemd-networkd[1373]: lxc_health: Gained carrier
Feb 13 20:32:30.660450 kubelet[2487]: E0213 20:32:30.659511 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:31.457911 kubelet[2487]: E0213 20:32:31.457845 2487 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Feb 13 20:32:31.865637 systemd-networkd[1373]: lxc_health: Gained IPv6LL
Feb 13 20:32:33.815086 systemd[1]: run-containerd-runc-k8s.io-506a3bdd86f80c8f56ea2e6e53ca8c1e209e07ce83fb9fa1564629bc9fb40e8c-runc.QHxr54.mount: Deactivated successfully.
Feb 13 20:32:36.167949 systemd[1]: run-containerd-runc-k8s.io-506a3bdd86f80c8f56ea2e6e53ca8c1e209e07ce83fb9fa1564629bc9fb40e8c-runc.IplDQO.mount: Deactivated successfully.
Feb 13 20:32:36.226250 kubelet[2487]: E0213 20:32:36.225957 2487 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38028->127.0.0.1:45173: write tcp 127.0.0.1:38028->127.0.0.1:45173: write: broken pipe
Feb 13 20:32:38.316772 systemd[1]: run-containerd-runc-k8s.io-506a3bdd86f80c8f56ea2e6e53ca8c1e209e07ce83fb9fa1564629bc9fb40e8c-runc.1OKhlb.mount: Deactivated successfully.
Feb 13 20:32:38.394720 sshd[4297]: pam_unix(sshd:session): session closed for user core
Feb 13 20:32:38.400290 systemd[1]: sshd@28-64.23.133.101:22-147.75.109.163:41642.service: Deactivated successfully.
Feb 13 20:32:38.404007 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 20:32:38.406784 systemd-logind[1452]: Session 29 logged out. Waiting for processes to exit.
Feb 13 20:32:38.409011 systemd-logind[1452]: Removed session 29.