Jan 30 13:45:24.879152 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:45:24.879173 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:45:24.879184 kernel: BIOS-provided physical RAM map:
Jan 30 13:45:24.879190 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:45:24.879196 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 13:45:24.879202 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 13:45:24.879210 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 13:45:24.879216 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 13:45:24.879222 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jan 30 13:45:24.879228 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jan 30 13:45:24.879237 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jan 30 13:45:24.879243 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jan 30 13:45:24.879249 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jan 30 13:45:24.879255 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jan 30 13:45:24.879263 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jan 30 13:45:24.879270 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 13:45:24.879279 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jan 30 13:45:24.879285 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jan 30 13:45:24.879292 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 13:45:24.879299 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 13:45:24.879305 kernel: NX (Execute Disable) protection: active
Jan 30 13:45:24.879312 kernel: APIC: Static calls initialized
Jan 30 13:45:24.879318 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:45:24.879325 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jan 30 13:45:24.879331 kernel: SMBIOS 2.8 present.
Jan 30 13:45:24.879338 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jan 30 13:45:24.879345 kernel: Hypervisor detected: KVM
Jan 30 13:45:24.879353 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:45:24.879360 kernel: kvm-clock: using sched offset of 4275246042 cycles
Jan 30 13:45:24.879367 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:45:24.879374 kernel: tsc: Detected 2794.750 MHz processor
Jan 30 13:45:24.879381 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:45:24.879388 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:45:24.879395 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jan 30 13:45:24.879402 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 13:45:24.879409 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:45:24.879418 kernel: Using GB pages for direct mapping
Jan 30 13:45:24.879424 kernel: Secure boot disabled
Jan 30 13:45:24.879431 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:45:24.879438 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 30 13:45:24.879449 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:45:24.879456 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:24.879463 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:24.879472 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 30 13:45:24.879480 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:24.879487 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:24.879494 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:24.879501 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:45:24.879508 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:45:24.879515 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 30 13:45:24.879524 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 30 13:45:24.879531 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 30 13:45:24.879539 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 30 13:45:24.879546 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 30 13:45:24.879553 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 30 13:45:24.879560 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 30 13:45:24.879567 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 30 13:45:24.879574 kernel: No NUMA configuration found
Jan 30 13:45:24.879581 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jan 30 13:45:24.879603 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jan 30 13:45:24.879610 kernel: Zone ranges:
Jan 30 13:45:24.879617 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:45:24.879624 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jan 30 13:45:24.879631 kernel: Normal empty
Jan 30 13:45:24.879638 kernel: Movable zone start for each node
Jan 30 13:45:24.879645 kernel: Early memory node ranges
Jan 30 13:45:24.879652 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 13:45:24.879659 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 30 13:45:24.879666 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 30 13:45:24.879676 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jan 30 13:45:24.879683 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jan 30 13:45:24.879690 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jan 30 13:45:24.879697 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jan 30 13:45:24.879704 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:45:24.879711 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 13:45:24.879718 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 30 13:45:24.879725 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:45:24.879732 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jan 30 13:45:24.879741 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 30 13:45:24.879749 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jan 30 13:45:24.879756 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:45:24.879763 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:45:24.879770 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:45:24.879777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:45:24.879784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:45:24.879791 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:45:24.879798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:45:24.879805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:45:24.879814 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:45:24.879821 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:45:24.879828 kernel: TSC deadline timer available
Jan 30 13:45:24.879835 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:45:24.879843 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:45:24.879849 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:45:24.879856 kernel: kvm-guest: setup PV sched yield
Jan 30 13:45:24.879863 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 30 13:45:24.879870 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:45:24.879880 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:45:24.879887 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:45:24.879894 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:45:24.879901 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:45:24.879908 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:45:24.879915 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:45:24.879922 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:45:24.879930 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:45:24.879940 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:45:24.879947 kernel: random: crng init done
Jan 30 13:45:24.879954 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:45:24.879962 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:45:24.879969 kernel: Fallback order for Node 0: 0
Jan 30 13:45:24.879976 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jan 30 13:45:24.879983 kernel: Policy zone: DMA32
Jan 30 13:45:24.879990 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:45:24.879998 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 171124K reserved, 0K cma-reserved)
Jan 30 13:45:24.880008 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:45:24.880015 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:45:24.880022 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:45:24.880029 kernel: Dynamic Preempt: voluntary
Jan 30 13:45:24.880043 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:45:24.880065 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:45:24.880073 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:45:24.880081 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:45:24.880128 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:45:24.880136 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:45:24.880143 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:45:24.880151 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:45:24.880161 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:45:24.880168 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:45:24.880176 kernel: Console: colour dummy device 80x25
Jan 30 13:45:24.880183 kernel: printk: console [ttyS0] enabled
Jan 30 13:45:24.880191 kernel: ACPI: Core revision 20230628
Jan 30 13:45:24.880200 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:45:24.880208 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:45:24.880216 kernel: x2apic enabled
Jan 30 13:45:24.880223 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:45:24.880231 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:45:24.880238 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:45:24.880246 kernel: kvm-guest: setup PV IPIs
Jan 30 13:45:24.880253 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:45:24.880261 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:45:24.880271 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 30 13:45:24.880278 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:45:24.880286 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:45:24.880293 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:45:24.880301 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:45:24.880308 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:45:24.880316 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:45:24.880324 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:45:24.880331 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:45:24.880341 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:45:24.880348 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:45:24.880356 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:45:24.880363 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:45:24.880372 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:45:24.880379 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:45:24.880387 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:45:24.880394 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:45:24.880404 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:45:24.880411 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:45:24.880419 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:45:24.880426 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:45:24.880434 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:45:24.880442 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:45:24.880449 kernel: landlock: Up and running.
Jan 30 13:45:24.880457 kernel: SELinux: Initializing.
Jan 30 13:45:24.880464 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:45:24.880474 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:45:24.880482 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:45:24.880489 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:45:24.880497 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:45:24.880505 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:45:24.880512 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:45:24.880520 kernel: ... version: 0
Jan 30 13:45:24.880527 kernel: ... bit width: 48
Jan 30 13:45:24.880534 kernel: ... generic registers: 6
Jan 30 13:45:24.880544 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:45:24.880552 kernel: ... max period: 00007fffffffffff
Jan 30 13:45:24.880559 kernel: ... fixed-purpose events: 0
Jan 30 13:45:24.880567 kernel: ... event mask: 000000000000003f
Jan 30 13:45:24.880574 kernel: signal: max sigframe size: 1776
Jan 30 13:45:24.880581 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:45:24.880615 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:45:24.880622 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:45:24.880630 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:45:24.880640 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:45:24.880647 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:45:24.880655 kernel: smpboot: Max logical packages: 1
Jan 30 13:45:24.880663 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 30 13:45:24.880670 kernel: devtmpfs: initialized
Jan 30 13:45:24.880687 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:45:24.880695 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 30 13:45:24.880702 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 30 13:45:24.880717 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jan 30 13:45:24.880735 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 30 13:45:24.880751 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 30 13:45:24.880766 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:45:24.880781 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:45:24.880796 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:45:24.880811 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:45:24.880834 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:45:24.880849 kernel: audit: type=2000 audit(1738244725.282:1): state=initialized audit_enabled=0 res=1
Jan 30 13:45:24.880857 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:45:24.880881 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:45:24.880889 kernel: cpuidle: using governor menu
Jan 30 13:45:24.880896 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:45:24.880904 kernel: dca service started, version 1.12.1
Jan 30 13:45:24.880912 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 13:45:24.880919 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 13:45:24.880927 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:45:24.880935 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:45:24.880942 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:45:24.880952 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:45:24.880959 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:45:24.880967 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:45:24.880974 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:45:24.880982 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:45:24.880989 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:45:24.880996 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:45:24.881004 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:45:24.881011 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:45:24.881021 kernel: ACPI: Interpreter enabled
Jan 30 13:45:24.881028 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:45:24.881035 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:45:24.881043 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:45:24.881050 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:45:24.881057 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:45:24.881065 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:45:24.881244 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:45:24.881375 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:45:24.881496 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:45:24.881506 kernel: PCI host bridge to bus 0000:00
Jan 30 13:45:24.881649 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:45:24.881760 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:45:24.881870 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:45:24.882023 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 30 13:45:24.882195 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:45:24.882347 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jan 30 13:45:24.882481 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:45:24.882651 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:45:24.882784 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 13:45:24.882909 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 30 13:45:24.883033 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 30 13:45:24.883165 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 30 13:45:24.883286 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 30 13:45:24.883405 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:45:24.883574 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:45:24.883756 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 30 13:45:24.883879 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 30 13:45:24.884003 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jan 30 13:45:24.884143 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:45:24.884264 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 30 13:45:24.884383 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 30 13:45:24.884539 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jan 30 13:45:24.884697 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:45:24.884819 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 30 13:45:24.884942 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 30 13:45:24.885062 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jan 30 13:45:24.885191 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 30 13:45:24.885318 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:45:24.885440 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:45:24.885569 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:45:24.885732 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 30 13:45:24.885855 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 30 13:45:24.885984 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:45:24.886110 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 30 13:45:24.886121 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:45:24.886129 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:45:24.886136 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:45:24.886144 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:45:24.886155 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:45:24.886163 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:45:24.886170 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:45:24.886178 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:45:24.886185 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:45:24.886193 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:45:24.886200 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:45:24.886208 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:45:24.886215 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:45:24.886225 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:45:24.886232 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:45:24.886240 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:45:24.886247 kernel: iommu: Default domain type: Translated
Jan 30 13:45:24.886255 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:45:24.886262 kernel: efivars: Registered efivars operations
Jan 30 13:45:24.886270 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:45:24.886277 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:45:24.886285 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 30 13:45:24.886294 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jan 30 13:45:24.886301 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jan 30 13:45:24.886309 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jan 30 13:45:24.886428 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:45:24.886546 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:45:24.886695 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:45:24.886706 kernel: vgaarb: loaded
Jan 30 13:45:24.886714 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:45:24.886722 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:45:24.886733 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:45:24.886741 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:45:24.886748 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:45:24.886756 kernel: pnp: PnP ACPI init
Jan 30 13:45:24.886882 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 13:45:24.886893 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 13:45:24.886901 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:45:24.886908 kernel: NET: Registered PF_INET protocol family
Jan 30 13:45:24.886919 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:45:24.886927 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:45:24.886934 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:45:24.886942 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:45:24.886950 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:45:24.886957 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:45:24.886965 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:45:24.886973 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:45:24.886980 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:45:24.886990 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:45:24.887119 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 30 13:45:24.887240 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 30 13:45:24.887353 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:45:24.887464 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:45:24.887573 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:45:24.887710 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 30 13:45:24.887821 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 13:45:24.887935 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jan 30 13:45:24.887945 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:45:24.887952 kernel: Initialise system trusted keyrings
Jan 30 13:45:24.887960 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:45:24.887968 kernel: Key type asymmetric registered
Jan 30 13:45:24.887975 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:45:24.887982 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:45:24.887990 kernel: io scheduler mq-deadline registered
Jan 30 13:45:24.887997 kernel: io scheduler kyber registered
Jan 30 13:45:24.888008 kernel: io scheduler bfq registered
Jan 30 13:45:24.888015 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:45:24.888023 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:45:24.888031 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:45:24.888039 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 13:45:24.888047 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:45:24.888054 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:45:24.888062 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:45:24.888070 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:45:24.888079 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:45:24.888087 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:45:24.888224 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:45:24.888339 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:45:24.888452 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:45:24 UTC (1738244724)
Jan 30 13:45:24.888562 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 30 13:45:24.888572 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:45:24.888633 kernel: efifb: probing for efifb
Jan 30 13:45:24.888642 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jan 30 13:45:24.888649 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jan 30 13:45:24.888657 kernel: efifb: scrolling: redraw
Jan 30 13:45:24.888665 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jan 30 13:45:24.888672 kernel: Console: switching to colour frame buffer device 100x37
Jan 30 13:45:24.888699 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:45:24.888718 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:45:24.888729 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:45:24.888743 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:45:24.888753 kernel: Segment Routing with IPv6
Jan 30 13:45:24.888763 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:45:24.888774 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:45:24.888784 kernel: Key type dns_resolver registered
Jan 30 13:45:24.888791 kernel: IPI shorthand broadcast: enabled
Jan 30 13:45:24.888799 kernel: sched_clock: Marking stable (594002800, 114266540)->(723482830, -15213490)
Jan 30 13:45:24.888807 kernel: registered taskstats version 1
Jan 30 13:45:24.888815 kernel: Loading compiled-in X.509 certificates
Jan 30 13:45:24.888823 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:45:24.888833 kernel: Key type .fscrypt registered
Jan 30 13:45:24.888840 kernel: Key type fscrypt-provisioning registered
Jan 30 13:45:24.888848 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:45:24.888856 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:45:24.888864 kernel: ima: No architecture policies found
Jan 30 13:45:24.888871 kernel: clk: Disabling unused clocks
Jan 30 13:45:24.888879 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:45:24.888887 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:45:24.888897 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:45:24.888904 kernel: Run /init as init process
Jan 30 13:45:24.888912 kernel: with arguments:
Jan 30 13:45:24.888920 kernel: /init
Jan 30 13:45:24.888927 kernel: with environment:
Jan 30 13:45:24.888935 kernel: HOME=/
Jan 30 13:45:24.888942 kernel: TERM=linux
Jan 30 13:45:24.888950 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:45:24.888960 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:45:24.888972 systemd[1]: Detected virtualization kvm.
Jan 30 13:45:24.888980 systemd[1]: Detected architecture x86-64.
Jan 30 13:45:24.888988 systemd[1]: Running in initrd.
Jan 30 13:45:24.888999 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:45:24.889011 systemd[1]: Hostname set to .
Jan 30 13:45:24.889020 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:45:24.889028 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:45:24.889036 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:45:24.889044 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:45:24.889053 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:45:24.889062 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:45:24.889071 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:45:24.889081 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:45:24.889100 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:45:24.889109 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:45:24.889118 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:45:24.889126 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:45:24.889134 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:45:24.889143 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:45:24.889153 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:45:24.889161 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:45:24.889170 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:45:24.889178 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:45:24.889186 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:45:24.889195 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:45:24.889203 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:45:24.889212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:45:24.889222 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:45:24.889230 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:45:24.889239 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:45:24.889247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:45:24.889256 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:45:24.889264 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:45:24.889272 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:45:24.889280 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:45:24.889289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:45:24.889299 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:45:24.889308 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:45:24.889316 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:45:24.889345 systemd-journald[191]: Collecting audit messages is disabled.
Jan 30 13:45:24.889367 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:45:24.889375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:45:24.889384 systemd-journald[191]: Journal started
Jan 30 13:45:24.889404 systemd-journald[191]: Runtime Journal (/run/log/journal/8020e0cce0684e539b16a8b5905437d7) is 6.0M, max 48.3M, 42.2M free.
Jan 30 13:45:24.882434 systemd-modules-load[194]: Inserted module 'overlay'
Jan 30 13:45:24.893124 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:45:24.893536 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:45:24.905708 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:45:24.909192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:45:24.911837 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:45:24.915217 kernel: Bridge firewalling registered
Jan 30 13:45:24.913668 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:45:24.915222 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 30 13:45:24.916499 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:45:24.919974 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:45:24.922780 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:45:24.929143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:45:24.933770 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:45:24.934045 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:45:24.943785 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:45:24.947204 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:45:24.956030 dracut-cmdline[229]: dracut-dracut-053
Jan 30 13:45:24.959347 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:45:24.980459 systemd-resolved[232]: Positive Trust Anchors:
Jan 30 13:45:24.980476 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:45:24.980508 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:45:24.983000 systemd-resolved[232]: Defaulting to hostname 'linux'.
Jan 30 13:45:24.983998 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:45:24.990239 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:45:25.045625 kernel: SCSI subsystem initialized
Jan 30 13:45:25.054615 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:45:25.064615 kernel: iscsi: registered transport (tcp)
Jan 30 13:45:25.085620 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:45:25.085678 kernel: QLogic iSCSI HBA Driver
Jan 30 13:45:25.131412 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:45:25.141705 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:45:25.166046 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:45:25.166080 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:45:25.167088 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:45:25.207605 kernel: raid6: avx2x4 gen() 30077 MB/s
Jan 30 13:45:25.224606 kernel: raid6: avx2x2 gen() 30266 MB/s
Jan 30 13:45:25.241716 kernel: raid6: avx2x1 gen() 26052 MB/s
Jan 30 13:45:25.241730 kernel: raid6: using algorithm avx2x2 gen() 30266 MB/s
Jan 30 13:45:25.259690 kernel: raid6: .... xor() 19951 MB/s, rmw enabled
Jan 30 13:45:25.259707 kernel: raid6: using avx2x2 recovery algorithm
Jan 30 13:45:25.279609 kernel: xor: automatically using best checksumming function avx
Jan 30 13:45:25.431609 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:45:25.442529 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:45:25.455733 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:45:25.467304 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jan 30 13:45:25.471986 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:45:25.479722 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:45:25.495552 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Jan 30 13:45:25.526102 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:45:25.538727 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:45:25.601350 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:45:25.607759 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:45:25.619986 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:45:25.623124 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:45:25.625974 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:45:25.627544 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:45:25.639783 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:45:25.647626 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 30 13:45:25.662147 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:45:25.662296 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:45:25.662308 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:45:25.662320 kernel: GPT:9289727 != 19775487
Jan 30 13:45:25.662330 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:45:25.662341 kernel: GPT:9289727 != 19775487
Jan 30 13:45:25.662351 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:45:25.662361 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:45:25.656944 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:45:25.670921 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:45:25.670943 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:45:25.670988 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:45:25.671167 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:45:25.675912 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:45:25.679128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:45:25.681225 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:45:25.683759 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:45:25.691609 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (463)
Jan 30 13:45:25.693612 kernel: libata version 3.00 loaded.
Jan 30 13:45:25.693637 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (481)
Jan 30 13:45:25.700601 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 13:45:25.721641 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 13:45:25.721658 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 13:45:25.721817 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 13:45:25.721962 kernel: scsi host0: ahci
Jan 30 13:45:25.722143 kernel: scsi host1: ahci
Jan 30 13:45:25.722293 kernel: scsi host2: ahci
Jan 30 13:45:25.722439 kernel: scsi host3: ahci
Jan 30 13:45:25.722641 kernel: scsi host4: ahci
Jan 30 13:45:25.722790 kernel: scsi host5: ahci
Jan 30 13:45:25.722932 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jan 30 13:45:25.722943 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jan 30 13:45:25.722954 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jan 30 13:45:25.722964 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jan 30 13:45:25.722974 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jan 30 13:45:25.722989 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jan 30 13:45:25.694835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:45:25.717026 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:45:25.725120 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:45:25.726954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:45:25.736156 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:45:25.736244 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:45:25.743859 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:45:25.758804 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:45:25.761846 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:45:25.767296 disk-uuid[566]: Primary Header is updated.
Jan 30 13:45:25.767296 disk-uuid[566]: Secondary Entries is updated.
Jan 30 13:45:25.767296 disk-uuid[566]: Secondary Header is updated.
Jan 30 13:45:25.771659 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:45:25.776629 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:45:25.784455 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:45:26.033317 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 13:45:26.033371 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 13:45:26.033382 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 13:45:26.033608 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 30 13:45:26.034611 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 13:45:26.035612 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 13:45:26.036733 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 30 13:45:26.036745 kernel: ata3.00: applying bridge limits
Jan 30 13:45:26.037608 kernel: ata3.00: configured for UDMA/100
Jan 30 13:45:26.037620 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 13:45:26.090138 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 30 13:45:26.102182 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:45:26.102195 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 30 13:45:26.776619 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:45:26.777095 disk-uuid[569]: The operation has completed successfully.
Jan 30 13:45:26.798555 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:45:26.798693 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:45:26.831707 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:45:26.836801 sh[593]: Success
Jan 30 13:45:26.848615 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 30 13:45:26.878004 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:45:26.895032 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:45:26.899580 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:45:26.910362 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:45:26.910393 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:45:26.910404 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:45:26.912131 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:45:26.912147 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:45:26.916664 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:45:26.917367 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:45:26.933749 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:45:26.935947 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:45:26.943825 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:45:26.943859 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:45:26.943880 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:45:26.946613 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:45:26.956012 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:45:26.957873 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:45:26.966645 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:45:26.974736 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:45:27.028439 ignition[681]: Ignition 2.19.0
Jan 30 13:45:27.028457 ignition[681]: Stage: fetch-offline
Jan 30 13:45:27.028515 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:45:27.028529 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:45:27.028664 ignition[681]: parsed url from cmdline: ""
Jan 30 13:45:27.028669 ignition[681]: no config URL provided
Jan 30 13:45:27.028675 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:45:27.028687 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:45:27.028717 ignition[681]: op(1): [started] loading QEMU firmware config module
Jan 30 13:45:27.028724 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:45:27.037156 ignition[681]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:45:27.055309 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:45:27.063716 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:45:27.081710 ignition[681]: parsing config with SHA512: 0966310789a479fab3d9195bb3fde7f49342885d8f92da67929ea82cc9e11467ee3af5e5cb861203b5a8b07a0fc643d544955a2e2f0259bb3cd5986a604ee5d9
Jan 30 13:45:27.083756 systemd-networkd[781]: lo: Link UP
Jan 30 13:45:27.083763 systemd-networkd[781]: lo: Gained carrier
Jan 30 13:45:27.085339 systemd-networkd[781]: Enumeration completed
Jan 30 13:45:27.085722 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:45:27.085725 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:45:27.089876 ignition[681]: fetch-offline: fetch-offline passed
Jan 30 13:45:27.085827 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:45:27.089959 ignition[681]: Ignition finished successfully
Jan 30 13:45:27.086510 systemd-networkd[781]: eth0: Link UP
Jan 30 13:45:27.086514 systemd-networkd[781]: eth0: Gained carrier
Jan 30 13:45:27.086520 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:45:27.089314 unknown[681]: fetched base config from "system"
Jan 30 13:45:27.089324 unknown[681]: fetched user config from "qemu"
Jan 30 13:45:27.091160 systemd[1]: Reached target network.target - Network.
Jan 30 13:45:27.093388 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:45:27.095728 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:45:27.101732 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:45:27.103637 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:45:27.114162 ignition[784]: Ignition 2.19.0
Jan 30 13:45:27.114171 ignition[784]: Stage: kargs
Jan 30 13:45:27.114356 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:45:27.114368 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:45:27.115214 ignition[784]: kargs: kargs passed
Jan 30 13:45:27.115254 ignition[784]: Ignition finished successfully
Jan 30 13:45:27.118456 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:45:27.129700 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:45:27.144122 ignition[793]: Ignition 2.19.0
Jan 30 13:45:27.144134 ignition[793]: Stage: disks
Jan 30 13:45:27.144335 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:45:27.144349 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:45:27.145491 ignition[793]: disks: disks passed
Jan 30 13:45:27.148009 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:45:27.145544 ignition[793]: Ignition finished successfully
Jan 30 13:45:27.148277 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:45:27.148605 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:45:27.148928 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:45:27.149265 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:45:27.149430 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:45:27.159741 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:45:27.171301 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:45:27.178573 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:45:27.182719 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:45:27.265613 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:45:27.265753 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:45:27.266533 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:45:27.273675 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:45:27.276245 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:45:27.276653 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:45:27.282302 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
Jan 30 13:45:27.276700 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:45:27.287534 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:45:27.287550 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:45:27.287560 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:45:27.276724 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:45:27.289446 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:45:27.291574 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:45:27.311526 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:45:27.313528 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:45:27.350025 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:45:27.354240 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:45:27.357709 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:45:27.362694 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:45:27.445556 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:45:27.456823 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:45:27.460236 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:45:27.464630 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:45:27.485128 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:45:27.487149 ignition[924]: INFO : Ignition 2.19.0 Jan 30 13:45:27.487149 ignition[924]: INFO : Stage: mount Jan 30 13:45:27.487149 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:45:27.487149 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:45:27.487149 ignition[924]: INFO : mount: mount passed Jan 30 13:45:27.487149 ignition[924]: INFO : Ignition finished successfully Jan 30 13:45:27.493049 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:45:27.509686 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:45:27.909728 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:45:27.922718 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:45:27.929608 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (940) Jan 30 13:45:27.929634 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:45:27.930908 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:45:27.930930 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:45:27.933615 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:45:27.935183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:45:27.955938 ignition[957]: INFO : Ignition 2.19.0 Jan 30 13:45:27.955938 ignition[957]: INFO : Stage: files Jan 30 13:45:27.957612 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:45:27.957612 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:45:27.960516 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:45:27.962262 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:45:27.962262 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:45:27.965623 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:45:27.967047 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:45:27.968676 unknown[957]: wrote ssh authorized keys file for user: core Jan 30 13:45:27.969729 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:45:27.972061 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:45:27.974011 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:45:27.974011 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:45:27.974011 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:45:28.011353 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:45:28.109389 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:45:28.111484 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:45:28.111484 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 30 13:45:28.473771 systemd-networkd[781]: eth0: Gained IPv6LL Jan 30 13:45:28.529813 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 30 13:45:28.729296 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:45:28.729296 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:45:28.733032 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:45:28.733032 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:45:28.736449 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:45:28.736449 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:45:28.739868 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Jan 30 13:45:28.741607 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:45:28.743383 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:45:28.745307 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:45:28.747170 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:45:28.748963 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:45:28.751539 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:45:28.754019 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:45:28.756266 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:45:29.170360 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 30 13:45:29.520133 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:45:29.520133 ignition[957]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 30 13:45:29.523830 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:45:29.526408 ignition[957]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:45:29.526408 ignition[957]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 30 13:45:29.526408 ignition[957]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 30 13:45:29.531116 ignition[957]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:45:29.532995 ignition[957]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:45:29.532995 ignition[957]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 30 13:45:29.536099 ignition[957]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 30 13:45:29.536099 ignition[957]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:45:29.539323 ignition[957]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:45:29.539323 ignition[957]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 30 13:45:29.539323 
ignition[957]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:45:29.564746 ignition[957]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:45:29.571538 ignition[957]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:45:29.573191 ignition[957]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:45:29.573191 ignition[957]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:45:29.575941 ignition[957]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:45:29.577454 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:45:29.579216 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:45:29.580866 ignition[957]: INFO : files: files passed Jan 30 13:45:29.581601 ignition[957]: INFO : Ignition finished successfully Jan 30 13:45:29.584572 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:45:29.593768 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:45:29.595506 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:45:29.601666 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:45:29.601785 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:45:29.605027 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:45:29.606864 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:45:29.606864 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:45:29.610645 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:45:29.613763 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:45:29.616405 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:45:29.628696 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:45:29.651452 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:45:29.652516 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:45:29.655153 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:45:29.657193 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:45:29.659223 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:45:29.661352 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:45:29.678928 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:45:29.691715 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:45:29.700471 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
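The files stage above wrote SSH keys for the core user, several files (including /etc/flatcar/update.conf and a remotely fetched helm tarball), a containerd drop-in, a prepare-helm unit, and preset changes (prepare-helm enabled, coreos-metadata disabled). A Butane sketch that would compile (via the butane tool) into an Ignition config producing operations of this shape is given below; the key, hash, and unit body are placeholders, not the config actually used on this machine:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...   # placeholder key
    storage:
      files:
        - path: /etc/flatcar/update.conf
          contents:
            inline: |
              REBOOT_STRATEGY=reboot
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
            verification:
              hash: sha512-...      # placeholder; Ignition checks remote GETs against this
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz --strip-components=1 linux-amd64/helm
            [Install]
            WantedBy=multi-user.target
        - name: coreos-metadata.service
          enabled: false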
Jan 30 13:45:29.702790 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:45:29.705149 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:45:29.706996 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:45:29.707999 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:45:29.710556 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:45:29.712628 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:45:29.714452 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:45:29.716655 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:45:29.718960 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:45:29.721187 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:45:29.723262 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:45:29.725855 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:45:29.727998 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:45:29.730026 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:45:29.731647 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:45:29.732643 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:45:29.735054 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:45:29.737240 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:45:29.739596 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:45:29.740633 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:45:29.743205 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:45:29.744202 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:45:29.746418 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:45:29.747489 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:45:29.749852 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:45:29.751614 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:45:29.752731 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:45:29.755403 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:45:29.755528 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:45:29.757186 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:45:29.757272 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:45:29.758894 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:45:29.758989 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:45:29.760570 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:45:29.760696 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:45:29.761081 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:45:29.761180 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:45:29.776712 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 30 13:45:29.778275 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:45:29.779734 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:45:29.779849 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:45:29.784630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:45:29.784813 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:45:29.790834 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:45:29.790986 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:45:29.794816 ignition[1011]: INFO : Ignition 2.19.0 Jan 30 13:45:29.794816 ignition[1011]: INFO : Stage: umount Jan 30 13:45:29.794816 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:45:29.794816 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:45:29.801403 ignition[1011]: INFO : umount: umount passed Jan 30 13:45:29.801403 ignition[1011]: INFO : Ignition finished successfully Jan 30 13:45:29.801993 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:45:29.802142 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:45:29.803372 systemd[1]: Stopped target network.target - Network. Jan 30 13:45:29.804896 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:45:29.804952 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:45:29.806826 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:45:29.806872 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:45:29.809383 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:45:29.809429 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:45:29.811538 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:45:29.811612 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:45:29.813888 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:45:29.816013 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:45:29.819238 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:45:29.820645 systemd-networkd[781]: eth0: DHCPv6 lease lost Jan 30 13:45:29.823541 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:45:29.823715 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:45:29.825641 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:45:29.825751 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:45:29.828729 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:45:29.828773 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:45:29.836739 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:45:29.837707 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:45:29.837778 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:45:29.840294 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:45:29.840356 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:45:29.842569 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jan 30 13:45:29.842642 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:45:29.843877 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:45:29.843937 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:45:29.845172 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:45:29.862262 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:45:29.862439 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:45:29.864844 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:45:29.864949 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:45:29.866295 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:45:29.866357 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:45:29.867842 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:45:29.867881 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:45:29.868142 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:45:29.868187 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:45:29.871741 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:45:29.871788 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:45:29.876108 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:45:29.876155 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:45:29.889772 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:45:29.892019 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:45:29.893107 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:45:29.895814 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:45:29.896934 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:45:29.899650 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:45:29.900615 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:45:29.903042 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:45:29.904060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:45:29.906687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:45:29.907805 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:45:30.112891 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:45:30.113872 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:45:30.115865 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:45:30.117842 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:45:30.117898 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:45:30.137716 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:45:30.143512 systemd[1]: Switching root. 
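"Switching root" is the initrd handing control to the real root filesystem assembled under /sysroot. Conceptually it is the equivalent of the single call run by initrd-switch-root.service, shown here for orientation only:

    systemctl --no-block switch-root /sysroot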
Jan 30 13:45:30.170820 systemd-journald[191]: Journal stopped Jan 30 13:45:31.439440 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). Jan 30 13:45:31.439513 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:45:31.439527 kernel: SELinux: policy capability open_perms=1 Jan 30 13:45:31.439538 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:45:31.439550 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:45:31.439566 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:45:31.439581 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:45:31.439611 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:45:31.439622 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:45:31.439634 kernel: audit: type=1403 audit(1738244730.687:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:45:31.439650 systemd[1]: Successfully loaded SELinux policy in 44.210ms. Jan 30 13:45:31.439670 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.274ms. Jan 30 13:45:31.439685 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:45:31.439701 systemd[1]: Detected virtualization kvm. Jan 30 13:45:31.439717 systemd[1]: Detected architecture x86-64. Jan 30 13:45:31.439736 systemd[1]: Detected first boot. Jan 30 13:45:31.439753 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:45:31.439769 zram_generator::config[1073]: No configuration found. Jan 30 13:45:31.439786 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:45:31.439799 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:45:31.439811 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:45:31.439823 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:45:31.439835 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:45:31.439850 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:45:31.439866 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:45:31.439878 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:45:31.439890 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:45:31.439902 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:45:31.439914 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:45:31.439926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:45:31.439945 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:45:31.439957 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:45:31.439971 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:45:31.439985 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
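The journal restart marks PID 1 now running from the real root: the SELinux policy is loaded, KVM is detected, and since this is a first boot the machine ID is seeded from the VM UUID. Each of these facts can be confirmed later from a shell:

    systemd-detect-virt        # prints "kvm" on this guest
    cat /etc/machine-id        # ID initialized from the VM UUID on first boot
    journalctl -b -p notice    # replay this boot's journal at notice priority and above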
Jan 30 13:45:31.439997 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:45:31.440008 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:45:31.440020 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:45:31.440032 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:45:31.440044 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:45:31.440055 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:45:31.440067 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:45:31.440081 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:45:31.440093 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:45:31.440104 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:45:31.440116 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:45:31.440128 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:45:31.440139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:45:31.440151 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:45:31.440162 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:45:31.440176 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:45:31.440188 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:45:31.440200 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:45:31.440211 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:45:31.440223 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:45:31.440235 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:45:31.440247 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:45:31.440262 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:45:31.440274 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:45:31.440287 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:45:31.440299 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:45:31.440311 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:45:31.440324 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:45:31.440335 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:45:31.440347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:45:31.440359 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:45:31.440371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:45:31.440385 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:45:31.440399 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jan 30 13:45:31.440415 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 13:45:31.440432 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:45:31.440447 kernel: loop: module loaded Jan 30 13:45:31.440462 kernel: fuse: init (API version 7.39) Jan 30 13:45:31.440477 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:45:31.440490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:45:31.440507 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:45:31.440546 systemd-journald[1150]: Collecting audit messages is disabled. Jan 30 13:45:31.440574 systemd-journald[1150]: Journal started Jan 30 13:45:31.440651 systemd-journald[1150]: Runtime Journal (/run/log/journal/8020e0cce0684e539b16a8b5905437d7) is 6.0M, max 48.3M, 42.2M free. Jan 30 13:45:31.442871 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:45:31.445606 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:45:31.449786 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:45:31.452235 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:45:31.453559 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:45:31.455655 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:45:31.456860 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:45:31.458188 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:45:31.459578 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:45:31.461529 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:45:31.463347 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:45:31.463559 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:45:31.465446 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:45:31.465723 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:45:31.467555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:45:31.467773 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:45:31.469553 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:45:31.469774 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:45:31.471465 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:45:31.473172 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:45:31.473371 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:45:31.475255 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:45:31.477003 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:45:31.478877 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:45:31.486619 kernel: ACPI: bus type drm_connector registered Jan 30 13:45:31.487776 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 30 13:45:31.488098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:45:31.496809 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:45:31.508753 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:45:31.511553 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:45:31.512983 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:45:31.515125 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:45:31.520780 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:45:31.522248 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:45:31.526300 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:45:31.529866 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:45:31.533517 systemd-journald[1150]: Time spent on flushing to /var/log/journal/8020e0cce0684e539b16a8b5905437d7 is 17.300ms for 983 entries. Jan 30 13:45:31.533517 systemd-journald[1150]: System Journal (/var/log/journal/8020e0cce0684e539b16a8b5905437d7) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:45:31.565085 systemd-journald[1150]: Received client request to flush runtime journal. Jan 30 13:45:31.531834 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:45:31.539504 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:45:31.545527 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:45:31.549654 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:45:31.551695 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:45:31.555548 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:45:31.566130 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:45:31.569536 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:45:31.579736 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:45:31.580227 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Jan 30 13:45:31.580250 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Jan 30 13:45:31.583071 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:45:31.590121 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:45:31.594872 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:45:31.603912 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:45:31.625537 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:45:31.631788 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:45:31.649719 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. 
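The flush above moves logging from the runtime journal in /run (capped at roughly 48M here) to the persistent journal under /var/log/journal (capped at roughly 195.6M). Those caps are derived from filesystem size, but they can be pinned explicitly; an illustrative drop-in, not present on this system, would look like:

    # /etc/systemd/journald.conf.d/size.conf (illustrative override)
    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=195M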
Jan 30 13:45:31.649742 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jan 30 13:45:31.655413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:45:32.081253 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:45:32.097801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:45:32.121354 systemd-udevd[1237]: Using default interface naming scheme 'v255'. Jan 30 13:45:32.137381 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:45:32.148824 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:45:32.161757 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:45:32.176753 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 30 13:45:32.190642 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1239) Jan 30 13:45:32.222842 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:45:32.239617 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:45:32.240122 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:45:32.245617 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:45:32.260763 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 30 13:45:32.261238 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:45:32.261398 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:45:32.261793 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:45:32.270793 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:45:32.282667 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:45:32.283962 systemd-networkd[1242]: lo: Link UP Jan 30 13:45:32.283971 systemd-networkd[1242]: lo: Gained carrier Jan 30 13:45:32.287977 systemd-networkd[1242]: Enumeration completed Jan 30 13:45:32.288135 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:45:32.288646 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:45:32.288697 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:45:32.289485 systemd-networkd[1242]: eth0: Link UP Jan 30 13:45:32.289536 systemd-networkd[1242]: eth0: Gained carrier Jan 30 13:45:32.289604 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:45:32.296780 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:45:32.303678 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:45:32.315913 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:45:32.320274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:45:32.320691 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:45:32.324452 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 13:45:32.380018 kernel: kvm_amd: TSC scaling supported Jan 30 13:45:32.380078 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:45:32.380092 kernel: kvm_amd: Nested Paging enabled Jan 30 13:45:32.380103 kernel: kvm_amd: LBR virtualization supported Jan 30 13:45:32.381889 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:45:32.381928 kernel: kvm_amd: Virtual GIF supported Jan 30 13:45:32.403508 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:45:32.410088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:45:32.427233 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:45:32.439809 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:45:32.448401 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:45:32.479736 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:45:32.481278 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:45:32.492699 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:45:32.497376 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:45:32.534927 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:45:32.536410 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:45:32.537698 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:45:32.537729 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:45:32.538797 systemd[1]: Reached target machines.target - Containers. Jan 30 13:45:32.540826 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:45:32.552705 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:45:32.555103 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:45:32.556259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:45:32.557188 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:45:32.560090 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:45:32.563264 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:45:32.565330 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:45:32.577634 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 13:45:32.584640 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:45:32.590728 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:45:32.591625 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
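The kvm_amd lines above advertise nested virtualization, nested paging, and LBR virtualization for this AMD guest. Whether nesting is actually enabled can be read back from the module parameter:

    cat /sys/module/kvm_amd/parameters/nested    # 1 (or Y) when nested virtualization is on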
Jan 30 13:45:32.600613 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:45:32.622738 kernel: loop1: detected capacity change from 0 to 142488 Jan 30 13:45:32.652620 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 13:45:32.679622 kernel: loop3: detected capacity change from 0 to 140768 Jan 30 13:45:32.688607 kernel: loop4: detected capacity change from 0 to 142488 Jan 30 13:45:32.698609 kernel: loop5: detected capacity change from 0 to 210664 Jan 30 13:45:32.704631 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:45:32.705225 (sd-merge)[1313]: Merged extensions into '/usr'. Jan 30 13:45:32.708872 systemd[1]: Reloading requested from client PID 1299 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:45:32.708886 systemd[1]: Reloading... Jan 30 13:45:32.757766 zram_generator::config[1345]: No configuration found. Jan 30 13:45:32.797536 ldconfig[1295]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:45:32.874387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:45:32.937755 systemd[1]: Reloading finished in 228 ms. Jan 30 13:45:32.955529 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:45:32.957109 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:45:32.968732 systemd[1]: Starting ensure-sysext.service... Jan 30 13:45:32.970725 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:45:32.976791 systemd[1]: Reloading requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:45:32.976809 systemd[1]: Reloading... Jan 30 13:45:32.993252 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:45:32.993635 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:45:32.994648 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:45:32.994955 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jan 30 13:45:32.995038 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jan 30 13:45:32.998305 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:45:32.998318 systemd-tmpfiles[1386]: Skipping /boot Jan 30 13:45:33.011813 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:45:33.011826 systemd-tmpfiles[1386]: Skipping /boot Jan 30 13:45:33.026612 zram_generator::config[1415]: No configuration found. Jan 30 13:45:33.151768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:45:33.235356 systemd[1]: Reloading finished in 258 ms. Jan 30 13:45:33.254415 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:45:33.271017 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:45:33.273797 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
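systemd-sysext merged the containerd-flatcar, docker-flatcar, and kubernetes extension images into /usr (the loop devices detected above are those images), after which systemd reloaded to pick up the newly visible units. The merge state can be inspected at runtime:

    systemd-sysext status    # lists merged extensions and the hierarchies they overlay
    ls /etc/extensions       # kubernetes.raw symlink written by the Ignition files stage above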
Jan 30 13:45:33.276730 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:45:33.282817 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:45:33.287778 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:45:33.295391 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:45:33.295562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:45:33.297360 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:45:33.302608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:45:33.306838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:45:33.310849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:45:33.311172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:45:33.312230 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:45:33.312471 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:45:33.316403 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:45:33.319226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:45:33.319447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:45:33.321727 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:45:33.322155 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:45:33.333801 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:45:33.336543 augenrules[1493]: No rules Jan 30 13:45:33.338531 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:45:33.343154 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:45:33.343408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:45:33.351882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:45:33.354474 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:45:33.357457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:45:33.367967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:45:33.369513 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:45:33.371802 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:45:33.373061 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:45:33.374925 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:45:33.376918 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 30 13:45:33.377199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:45:33.379071 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:45:33.379376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:45:33.381241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:45:33.381659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:45:33.383629 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:45:33.383849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:45:33.389535 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:45:33.390577 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:45:33.390629 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:45:33.391156 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:45:33.392768 systemd[1]: Finished ensure-sysext.service. Jan 30 13:45:33.406869 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:45:33.407233 systemd-resolved[1464]: Positive Trust Anchors: Jan 30 13:45:33.407256 systemd-resolved[1464]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:45:33.407302 systemd-resolved[1464]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:45:33.411389 systemd-resolved[1464]: Defaulting to hostname 'linux'. Jan 30 13:45:33.413450 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:45:33.414714 systemd[1]: Reached target network.target - Network. Jan 30 13:45:33.415773 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:45:33.465808 systemd-networkd[1242]: eth0: Gained IPv6LL Jan 30 13:45:33.468979 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:45:33.475968 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:45:33.479106 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:45:33.480460 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:45:33.481734 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:45:34.850635 systemd-resolved[1464]: Clock change detected. Flushing caches. Jan 30 13:45:34.850680 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:45:34.850778 systemd-timesyncd[1522]: Initial clock synchronization to Thu 2025-01-30 13:45:34.850570 UTC. 
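systemd-resolved installed the root-zone DNSSEC trust anchor and negative anchors for private ranges, and systemd-timesyncd stepped the clock from 10.0.0.1 (hence the "Clock change detected. Flushing caches." message). Runtime state for both services is available via:

    resolvectl status                # trust anchors and per-link DNS configuration
    timedatectl timesync-status      # shows the NTP server (10.0.0.1 here) and current offset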
Jan 30 13:45:34.851411 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:45:34.852684 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:45:34.853985 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:45:34.854031 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:45:34.854954 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:45:34.856214 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:45:34.857408 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:45:34.858660 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:45:34.860188 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:45:34.863224 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:45:34.865858 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:45:34.872068 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:45:34.873190 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:45:34.874173 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:45:34.875307 systemd[1]: System is tainted: cgroupsv1 Jan 30 13:45:34.875354 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:45:34.875381 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:45:34.876878 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:45:34.879356 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:45:34.881807 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:45:34.886958 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:45:34.917091 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:45:34.923555 jq[1531]: false Jan 30 13:45:34.929905 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:45:34.931543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:45:34.934149 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:45:34.937179 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:45:34.942758 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:45:34.949035 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
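"System is tainted: cgroupsv1" reflects the /etc/flatcar-cgroupv1 marker written during the files stage, and the 10-use-cgroupfs.conf drop-in written alongside it steers containerd onto the legacy cgroupfs driver. Its shape is plausibly along the following lines; the contents are a sketch, not read from this system:

    # /etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf (sketch)
    [Service]
    Environment=CONTAINERD_CONFIG=/usr/share/containerd/config-cgroupfs.toml
    ExecStart=
    ExecStart=/usr/bin/containerd --config ${CONTAINERD_CONFIG}

The empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before substituting a new one.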
Jan 30 13:45:34.950763 extend-filesystems[1534]: Found loop3 Jan 30 13:45:34.954464 extend-filesystems[1534]: Found loop4 Jan 30 13:45:34.954464 extend-filesystems[1534]: Found loop5 Jan 30 13:45:34.954464 extend-filesystems[1534]: Found sr0 Jan 30 13:45:34.954464 extend-filesystems[1534]: Found vda Jan 30 13:45:34.954464 extend-filesystems[1534]: Found vda1 Jan 30 13:45:34.954464 extend-filesystems[1534]: Found vda2 Jan 30 13:45:34.954464 extend-filesystems[1534]: Found vda3 Jan 30 13:45:34.954464 extend-filesystems[1534]: Found usr Jan 30 13:45:34.954464 extend-filesystems[1534]: Found vda4 Jan 30 13:45:34.954464 extend-filesystems[1534]: Found vda6 Jan 30 13:45:34.954464 extend-filesystems[1534]: Found vda7 Jan 30 13:45:34.954464 extend-filesystems[1534]: Found vda9 Jan 30 13:45:34.954464 extend-filesystems[1534]: Checking size of /dev/vda9 Jan 30 13:45:34.993071 extend-filesystems[1534]: Resized partition /dev/vda9 Jan 30 13:45:34.958285 dbus-daemon[1530]: [system] SELinux support is enabled Jan 30 13:45:34.955524 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:45:34.995633 extend-filesystems[1569]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:45:34.999907 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:45:34.961983 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:45:34.965248 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:45:34.969529 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:45:34.974789 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:45:35.000397 update_engine[1557]: I20250130 13:45:34.991021 1557 main.cc:92] Flatcar Update Engine starting Jan 30 13:45:35.000397 update_engine[1557]: I20250130 13:45:34.992213 1557 update_check_scheduler.cc:74] Next update check in 9m47s Jan 30 13:45:34.977953 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:45:35.006155 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1243) Jan 30 13:45:35.006208 jq[1561]: true Jan 30 13:45:34.997044 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:45:34.997359 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:45:35.003105 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:45:35.003490 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:45:35.007312 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:45:35.018258 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:45:35.018589 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:45:35.024812 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:45:35.037517 (ntainerd)[1578]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:45:35.039319 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:45:35.040349 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jan 30 13:45:35.055757 jq[1577]: true Jan 30 13:45:35.068913 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:45:35.068913 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:45:35.068913 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:45:35.082249 extend-filesystems[1534]: Resized filesystem in /dev/vda9 Jan 30 13:45:35.087858 tar[1574]: linux-amd64/helm Jan 30 13:45:35.069697 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:45:35.070977 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:45:35.090343 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:45:35.091780 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:45:35.091894 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:45:35.091915 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:45:35.093365 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:45:35.093381 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:45:35.095380 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:45:35.106429 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:45:35.108737 systemd-logind[1550]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:45:35.108764 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:45:35.113229 systemd-logind[1550]: New seat seat0. Jan 30 13:45:35.114374 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:45:35.124122 bash[1613]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:45:35.128127 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:45:35.132607 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:45:35.159531 locksmithd[1614]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:45:35.166623 sshd_keygen[1571]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:45:35.196326 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:45:35.207635 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:45:35.214705 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:45:35.215073 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:45:35.225265 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:45:35.237789 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:45:35.247438 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:45:35.259105 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:45:35.260640 systemd[1]: Reached target getty.target - Login Prompts. 
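Note: the extend-filesystems sequence above is an online ext4 grow of the root partition: resize2fs takes /dev/vda9 from 553472 to 1864699 4 KiB blocks while it is mounted on /. In bytes, using only the figures in the log:

    # Translate the block counts logged by resize2fs into sizes (4 KiB blocks).
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699
    for label, blocks in [("before", old_blocks), ("after", new_blocks),
                          ("growth", new_blocks - old_blocks)]:
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB / after: 7.11 GiB / growth: 5.00 GiB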
Jan 30 13:45:35.285956 containerd[1578]: time="2025-01-30T13:45:35.285862417Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:45:35.308551 containerd[1578]: time="2025-01-30T13:45:35.308526352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:45:35.310696 containerd[1578]: time="2025-01-30T13:45:35.310635597Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:45:35.310746 containerd[1578]: time="2025-01-30T13:45:35.310693806Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:45:35.310746 containerd[1578]: time="2025-01-30T13:45:35.310733921Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:45:35.310983 containerd[1578]: time="2025-01-30T13:45:35.310953402Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:45:35.311028 containerd[1578]: time="2025-01-30T13:45:35.310983529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:45:35.311102 containerd[1578]: time="2025-01-30T13:45:35.311078046Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:45:35.311123 containerd[1578]: time="2025-01-30T13:45:35.311103764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:45:35.311467 containerd[1578]: time="2025-01-30T13:45:35.311434034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:45:35.311467 containerd[1578]: time="2025-01-30T13:45:35.311459652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:45:35.311508 containerd[1578]: time="2025-01-30T13:45:35.311486883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:45:35.311508 containerd[1578]: time="2025-01-30T13:45:35.311501961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:45:35.311639 containerd[1578]: time="2025-01-30T13:45:35.311613210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:45:35.311965 containerd[1578]: time="2025-01-30T13:45:35.311935324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:45:35.312221 containerd[1578]: time="2025-01-30T13:45:35.312189510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:45:35.312221 containerd[1578]: time="2025-01-30T13:45:35.312214978Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:45:35.312374 containerd[1578]: time="2025-01-30T13:45:35.312346695Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:45:35.312449 containerd[1578]: time="2025-01-30T13:45:35.312425583Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:45:35.325491 containerd[1578]: time="2025-01-30T13:45:35.325472836Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.325562174Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.325580949Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.325596388Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.325619742Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.325771957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.326044017Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.326148072Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.326169703Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.326183208Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.326197165Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.326210900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.326222803Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.326236619Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:45:35.326984 containerd[1578]: time="2025-01-30T13:45:35.326250795Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326264541Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326276964Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326288105Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326310307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326323111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326334993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326347947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326363376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326377423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326388654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326402680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326415213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326429911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327339 containerd[1578]: time="2025-01-30T13:45:35.326441923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326453114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326465568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326480716Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326498650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326509440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326521302Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326567880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326582267Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326594750Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326605480Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326614206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326625658Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326640987Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:45:35.327581 containerd[1578]: time="2025-01-30T13:45:35.326655905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:45:35.327862 containerd[1578]: time="2025-01-30T13:45:35.326907396Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:45:35.327862 containerd[1578]: time="2025-01-30T13:45:35.326970995Z" level=info msg="Connect containerd service" Jan 30 13:45:35.327862 containerd[1578]: time="2025-01-30T13:45:35.327016531Z" level=info msg="using legacy CRI server" Jan 30 13:45:35.327862 containerd[1578]: time="2025-01-30T13:45:35.327024305Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:45:35.327862 containerd[1578]: time="2025-01-30T13:45:35.327114875Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:45:35.327862 containerd[1578]: time="2025-01-30T13:45:35.327792476Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:45:35.328090 containerd[1578]: time="2025-01-30T13:45:35.328067401Z" level=info msg="Start subscribing containerd event" Jan 30 13:45:35.328257 containerd[1578]: time="2025-01-30T13:45:35.328105292Z" level=info msg="Start recovering state" Jan 30 13:45:35.328701 containerd[1578]: time="2025-01-30T13:45:35.328519599Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:45:35.329063 containerd[1578]: time="2025-01-30T13:45:35.328897788Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:45:35.329183 containerd[1578]: time="2025-01-30T13:45:35.328585222Z" level=info msg="Start event monitor" Jan 30 13:45:35.329264 containerd[1578]: time="2025-01-30T13:45:35.329249337Z" level=info msg="Start snapshots syncer" Jan 30 13:45:35.329334 containerd[1578]: time="2025-01-30T13:45:35.329322645Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:45:35.329377 containerd[1578]: time="2025-01-30T13:45:35.329366587Z" level=info msg="Start streaming server" Jan 30 13:45:35.329588 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:45:35.331697 containerd[1578]: time="2025-01-30T13:45:35.330689848Z" level=info msg="containerd successfully booted in 0.046350s" Jan 30 13:45:35.475734 tar[1574]: linux-amd64/LICENSE Jan 30 13:45:35.475734 tar[1574]: linux-amd64/README.md Jan 30 13:45:35.491365 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:45:35.756473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:45:35.758239 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:45:35.761786 systemd[1]: Startup finished in 6.707s (kernel) + 3.747s (userspace) = 10.455s. 
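Note: the long single-line blob above is containerd's effective CRI PluginConfig printed flat. Two values worth noticing in it: SystemdCgroup:false for the runc runtime (consistent with the "System is tainted: cgroupsv1" line earlier) and SandboxImage:registry.k8s.io/pause:3.8, even though pause:3.9 is pulled later in this log. A best-effort Python sketch for pulling selected Key:value fields out of such a flattened dump; it assumes the plain "Name:value" layout shown above.

    # Best-effort extraction of scalar fields from containerd's flattened
    # "Start cri plugin with config {...}" log line.
    import re

    def extract(dump, keys):
        found = {}
        for key in keys:
            m = re.search(rf"\b{re.escape(key)}:([^\s{{}}\[\]]+)", dump)
            if m:
                found[key] = m.group(1)
        return found

    line = ("... Options:map[SystemdCgroup:false] ... "
            "SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 ...")
    print(extract(line, ["SandboxImage", "SystemdCgroup", "StatsCollectPeriod"]))
    # {'SandboxImage': 'registry.k8s.io/pause:3.8', 'SystemdCgroup': 'false',
    #  'StatsCollectPeriod': '10'}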
Jan 30 13:45:35.762305 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:45:36.191561 kubelet[1665]: E0130 13:45:36.191432 1665 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:45:36.195452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:45:36.195765 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:45:40.891603 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:45:40.901946 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:38344.service - OpenSSH per-connection server daemon (10.0.0.1:38344). Jan 30 13:45:40.944949 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 38344 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:45:40.947254 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:45:40.955763 systemd-logind[1550]: New session 1 of user core. Jan 30 13:45:40.956855 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:45:40.962907 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:45:40.976167 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:45:40.985035 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:45:40.988091 (systemd)[1686]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:45:41.086733 systemd[1686]: Queued start job for default target default.target. Jan 30 13:45:41.087132 systemd[1686]: Created slice app.slice - User Application Slice. Jan 30 13:45:41.087154 systemd[1686]: Reached target paths.target - Paths. Jan 30 13:45:41.087167 systemd[1686]: Reached target timers.target - Timers. Jan 30 13:45:41.096810 systemd[1686]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:45:41.102910 systemd[1686]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:45:41.102976 systemd[1686]: Reached target sockets.target - Sockets. Jan 30 13:45:41.102989 systemd[1686]: Reached target basic.target - Basic System. Jan 30 13:45:41.103025 systemd[1686]: Reached target default.target - Main User Target. Jan 30 13:45:41.103055 systemd[1686]: Startup finished in 108ms. Jan 30 13:45:41.103741 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:45:41.105216 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:45:41.168916 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:52416.service - OpenSSH per-connection server daemon (10.0.0.1:52416). Jan 30 13:45:41.200287 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 52416 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:45:41.202079 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:45:41.206067 systemd-logind[1550]: New session 2 of user core. Jan 30 13:45:41.213952 systemd[1]: Started session-2.scope - Session 2 of User core. 
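Note: this first kubelet exit is the expected pre-bootstrap state on a kubeadm-style node: the unit is started before anything has written /var/lib/kubelet/config.yaml, so it exits with status 1 and systemd keeps re-triggering it (the "Scheduled restart job" lines further down, roughly ten seconds apart) until the config file appears. A small Python sketch that measures that restart cadence from journal text in the format used throughout this log:

    # Pull every "Scheduled restart job" entry for kubelet.service out of
    # journal text and report the interval between consecutive restarts.
    import re
    from datetime import datetime

    PATTERN = re.compile(
        r"(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d{6}).*?"
        r"kubelet\.service: Scheduled restart job, restart counter is at \d+"
    )

    def restart_intervals(journal, year=2025):
        stamps = [datetime.strptime(f"{year} {m.group('ts')}",
                                    "%Y %b %d %H:%M:%S.%f")
                  for m in PATTERN.finditer(journal)]
        return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]

    sample = ("Jan 30 13:45:46.446272 systemd[1]: kubelet.service: Scheduled "
              "restart job, restart counter is at 1. "
              "Jan 30 13:45:57.191313 systemd[1]: kubelet.service: Scheduled "
              "restart job, restart counter is at 2.")
    print(restart_intervals(sample))  # [10.745041]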
Jan 30 13:45:41.267211 sshd[1698]: pam_unix(sshd:session): session closed for user core Jan 30 13:45:41.275918 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:52420.service - OpenSSH per-connection server daemon (10.0.0.1:52420). Jan 30 13:45:41.276350 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:52416.service: Deactivated successfully. Jan 30 13:45:41.278559 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:45:41.279257 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:45:41.280596 systemd-logind[1550]: Removed session 2. Jan 30 13:45:41.305820 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 52420 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:45:41.307207 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:45:41.310644 systemd-logind[1550]: New session 3 of user core. Jan 30 13:45:41.320970 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:45:41.369385 sshd[1703]: pam_unix(sshd:session): session closed for user core Jan 30 13:45:41.377961 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:52426.service - OpenSSH per-connection server daemon (10.0.0.1:52426). Jan 30 13:45:41.378437 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:52420.service: Deactivated successfully. Jan 30 13:45:41.380888 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:45:41.382122 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:45:41.382909 systemd-logind[1550]: Removed session 3. Jan 30 13:45:41.408019 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 52426 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:45:41.409472 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:45:41.413130 systemd-logind[1550]: New session 4 of user core. Jan 30 13:45:41.423013 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:45:41.476725 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 30 13:45:41.484928 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:52440.service - OpenSSH per-connection server daemon (10.0.0.1:52440). Jan 30 13:45:41.485371 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:52426.service: Deactivated successfully. Jan 30 13:45:41.487749 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:45:41.488830 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:45:41.489950 systemd-logind[1550]: Removed session 4. Jan 30 13:45:41.515724 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 52440 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:45:41.517121 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:45:41.520895 systemd-logind[1550]: New session 5 of user core. Jan 30 13:45:41.535971 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:45:41.593799 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:45:41.594149 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:45:41.614332 sudo[1726]: pam_unix(sudo:session): session closed for user root Jan 30 13:45:41.616279 sshd[1719]: pam_unix(sshd:session): session closed for user core Jan 30 13:45:41.624928 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:52452.service - OpenSSH per-connection server daemon (10.0.0.1:52452). 
Jan 30 13:45:41.625384 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:52440.service: Deactivated successfully. Jan 30 13:45:41.627646 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:45:41.629043 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:45:41.629772 systemd-logind[1550]: Removed session 5. Jan 30 13:45:41.657406 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 52452 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:45:41.659069 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:45:41.662660 systemd-logind[1550]: New session 6 of user core. Jan 30 13:45:41.677978 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:45:41.733603 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:45:41.734030 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:45:41.737641 sudo[1736]: pam_unix(sudo:session): session closed for user root Jan 30 13:45:41.744131 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:45:41.744469 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:45:41.763979 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:45:41.766117 auditctl[1739]: No rules Jan 30 13:45:41.767659 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:45:41.768047 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:45:41.771106 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:45:41.805281 augenrules[1758]: No rules Jan 30 13:45:41.807171 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:45:41.808457 sudo[1735]: pam_unix(sudo:session): session closed for user root Jan 30 13:45:41.810505 sshd[1728]: pam_unix(sshd:session): session closed for user core Jan 30 13:45:41.821051 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:52456.service - OpenSSH per-connection server daemon (10.0.0.1:52456). Jan 30 13:45:41.821612 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:52452.service: Deactivated successfully. Jan 30 13:45:41.824522 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:45:41.825381 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:45:41.826417 systemd-logind[1550]: Removed session 6. Jan 30 13:45:41.853178 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 52456 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:45:41.854834 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:45:41.859196 systemd-logind[1550]: New session 7 of user core. Jan 30 13:45:41.870022 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:45:41.922535 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:45:41.922895 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:45:42.201908 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 30 13:45:42.202394 (dockerd)[1789]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:45:42.473472 dockerd[1789]: time="2025-01-30T13:45:42.473322272Z" level=info msg="Starting up" Jan 30 13:45:43.130586 dockerd[1789]: time="2025-01-30T13:45:43.130538212Z" level=info msg="Loading containers: start." Jan 30 13:45:43.242739 kernel: Initializing XFRM netlink socket Jan 30 13:45:43.320351 systemd-networkd[1242]: docker0: Link UP Jan 30 13:45:43.348185 dockerd[1789]: time="2025-01-30T13:45:43.348150425Z" level=info msg="Loading containers: done." Jan 30 13:45:43.364336 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3430890880-merged.mount: Deactivated successfully. Jan 30 13:45:43.364854 dockerd[1789]: time="2025-01-30T13:45:43.364350790Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:45:43.364854 dockerd[1789]: time="2025-01-30T13:45:43.364462009Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:45:43.364854 dockerd[1789]: time="2025-01-30T13:45:43.364563209Z" level=info msg="Daemon has completed initialization" Jan 30 13:45:43.400800 dockerd[1789]: time="2025-01-30T13:45:43.399754072Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:45:43.400069 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:45:44.359314 containerd[1578]: time="2025-01-30T13:45:44.359267810Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:45:44.922573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080679148.mount: Deactivated successfully. 
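Note: once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that unix socket with plain HTTP. A stdlib-only Python probe of the documented /version endpoint; this sketch assumes read access to the socket and only handles the small, effectively single-chunk response /version returns.

    # Talk HTTP/1.1 directly to the Docker unix socket and fetch /version.
    import json
    import socket

    def docker_version(sock_path="/run/docker.sock"):
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(b"GET /version HTTP/1.1\r\n"
                      b"Host: docker\r\nConnection: close\r\n\r\n")
            raw = b""
            while chunk := s.recv(4096):
                raw += chunk
        headers, _, body = raw.partition(b"\r\n\r\n")
        if b"chunked" in headers.lower():
            size_line, _, rest = body.partition(b"\r\n")  # single-chunk body
            body = rest[:int(size_line, 16)]
        return json.loads(body)

    info = docker_version()
    print(info.get("Version"), info.get("ApiVersion"))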
Jan 30 13:45:45.843421 containerd[1578]: time="2025-01-30T13:45:45.843362857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:45.844284 containerd[1578]: time="2025-01-30T13:45:45.844217590Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 13:45:45.845420 containerd[1578]: time="2025-01-30T13:45:45.845381703Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:45.848458 containerd[1578]: time="2025-01-30T13:45:45.848403038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:45.849535 containerd[1578]: time="2025-01-30T13:45:45.849493032Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.490189195s" Jan 30 13:45:45.849594 containerd[1578]: time="2025-01-30T13:45:45.849535311Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:45:45.871138 containerd[1578]: time="2025-01-30T13:45:45.871092310Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:45:46.446272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:45:46.459865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:45:46.605564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:45:46.610849 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:45:46.953430 kubelet[2015]: E0130 13:45:46.953315 2015 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:45:46.960472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:45:46.960966 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:45:47.453534 containerd[1578]: time="2025-01-30T13:45:47.453385970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:47.454322 containerd[1578]: time="2025-01-30T13:45:47.454247556Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 13:45:47.455777 containerd[1578]: time="2025-01-30T13:45:47.455747789Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:47.458537 containerd[1578]: time="2025-01-30T13:45:47.458499729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:47.459559 containerd[1578]: time="2025-01-30T13:45:47.459521575Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.588388649s" Jan 30 13:45:47.459626 containerd[1578]: time="2025-01-30T13:45:47.459555648Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:45:47.482789 containerd[1578]: time="2025-01-30T13:45:47.482749747Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:45:48.828410 containerd[1578]: time="2025-01-30T13:45:48.828317687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:48.829352 containerd[1578]: time="2025-01-30T13:45:48.829309156Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 13:45:48.831014 containerd[1578]: time="2025-01-30T13:45:48.830872918Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:48.833865 containerd[1578]: time="2025-01-30T13:45:48.833832427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:48.834831 containerd[1578]: time="2025-01-30T13:45:48.834797246Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.35201085s" Jan 30 13:45:48.834897 containerd[1578]: time="2025-01-30T13:45:48.834832292Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:45:48.854984 
containerd[1578]: time="2025-01-30T13:45:48.854943851Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:45:50.153859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254842226.mount: Deactivated successfully. Jan 30 13:45:50.914050 containerd[1578]: time="2025-01-30T13:45:50.913966508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:50.914960 containerd[1578]: time="2025-01-30T13:45:50.914883026Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:45:50.915981 containerd[1578]: time="2025-01-30T13:45:50.915926092Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:50.918786 containerd[1578]: time="2025-01-30T13:45:50.918746961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:50.919577 containerd[1578]: time="2025-01-30T13:45:50.919533566Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 2.064556112s" Jan 30 13:45:50.919617 containerd[1578]: time="2025-01-30T13:45:50.919582428Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:45:51.050467 containerd[1578]: time="2025-01-30T13:45:51.050422062Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:45:51.889291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount929157886.mount: Deactivated successfully. 
Jan 30 13:45:52.700987 containerd[1578]: time="2025-01-30T13:45:52.700941095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:52.701733 containerd[1578]: time="2025-01-30T13:45:52.701684899Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:45:52.704071 containerd[1578]: time="2025-01-30T13:45:52.704020639Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:52.707215 containerd[1578]: time="2025-01-30T13:45:52.707181285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:52.708194 containerd[1578]: time="2025-01-30T13:45:52.708153739Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.657683275s" Jan 30 13:45:52.708232 containerd[1578]: time="2025-01-30T13:45:52.708197661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:45:52.731613 containerd[1578]: time="2025-01-30T13:45:52.731582367Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:45:53.194207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3345463574.mount: Deactivated successfully. 
Jan 30 13:45:53.199029 containerd[1578]: time="2025-01-30T13:45:53.198983023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:53.199696 containerd[1578]: time="2025-01-30T13:45:53.199649413Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:45:53.200791 containerd[1578]: time="2025-01-30T13:45:53.200755336Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:53.202976 containerd[1578]: time="2025-01-30T13:45:53.202940213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:53.203506 containerd[1578]: time="2025-01-30T13:45:53.203469736Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 471.862492ms" Jan 30 13:45:53.203506 containerd[1578]: time="2025-01-30T13:45:53.203500734Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:45:53.225364 containerd[1578]: time="2025-01-30T13:45:53.225338169Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:45:53.749238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070269073.mount: Deactivated successfully. Jan 30 13:45:55.898376 containerd[1578]: time="2025-01-30T13:45:55.898304260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:55.899121 containerd[1578]: time="2025-01-30T13:45:55.899041091Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:45:55.900555 containerd[1578]: time="2025-01-30T13:45:55.900514264Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:55.903564 containerd[1578]: time="2025-01-30T13:45:55.903492368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:45:55.904735 containerd[1578]: time="2025-01-30T13:45:55.904677349Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.679308793s" Jan 30 13:45:55.904805 containerd[1578]: time="2025-01-30T13:45:55.904737893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:45:57.191313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
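Note: each pull above logs both an image size and an elapsed time, so the effective registry throughput falls straight out of the figures (the pause image is too small to say anything about sustained bandwidth). Computed from the log:

    # Effective throughput per image pull, from the sizes/durations logged above.
    pulls = {
        "kube-apiserver:v1.30.9":          (32_673_812, 1.490189195),
        "kube-controller-manager:v1.30.9": (31_052_327, 1.588388649),
        "kube-scheduler:v1.30.9":          (19_229_664, 1.35201085),
        "kube-proxy:v1.30.9":              (29_057_356, 2.064556112),
        "coredns:v1.11.1":                 (18_182_961, 1.657683275),
        "pause:3.9":                       (321_520,    0.471862492),
        "etcd:3.5.12-0":                   (57_236_178, 2.679308793),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image:34s} {size / secs / 2**20:6.2f} MiB/s")
    # ranges from ~0.65 MiB/s (pause) to ~20.9 MiB/s (kube-apiserver)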
Jan 30 13:45:57.198870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:45:57.341308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:45:57.346344 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:45:57.554036 kubelet[2254]: E0130 13:45:57.553846 2254 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:45:57.557985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:45:57.558254 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:45:58.644075 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:45:58.653901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:45:58.674517 systemd[1]: Reloading requested from client PID 2272 ('systemctl') (unit session-7.scope)... Jan 30 13:45:58.674532 systemd[1]: Reloading... Jan 30 13:45:58.749843 zram_generator::config[2314]: No configuration found. Jan 30 13:45:59.286965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:45:59.360567 systemd[1]: Reloading finished in 685 ms. Jan 30 13:45:59.406154 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:45:59.406258 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:45:59.406624 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:45:59.409538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:45:59.554151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:45:59.559412 (kubelet)[2372]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:45:59.605673 kubelet[2372]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:45:59.605673 kubelet[2372]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:45:59.605673 kubelet[2372]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:45:59.606105 kubelet[2372]: I0130 13:45:59.605750 2372 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:45:59.953491 kubelet[2372]: I0130 13:45:59.953363 2372 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:45:59.953491 kubelet[2372]: I0130 13:45:59.953394 2372 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:45:59.953683 kubelet[2372]: I0130 13:45:59.953657 2372 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:45:59.967494 kubelet[2372]: I0130 13:45:59.967450 2372 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:45:59.967944 kubelet[2372]: E0130 13:45:59.967900 2372 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:45:59.979812 kubelet[2372]: I0130 13:45:59.979780 2372 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:45:59.980733 kubelet[2372]: I0130 13:45:59.980679 2372 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:45:59.980934 kubelet[2372]: I0130 13:45:59.980738 2372 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:45:59.981051 kubelet[2372]: I0130 13:45:59.980939 2372 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:45:59.981051 kubelet[2372]: I0130 13:45:59.980952 2372 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:45:59.981142 kubelet[2372]: I0130 13:45:59.981112 2372 state_mem.go:36] "Initialized new in-memory state store" Jan 30 
13:45:59.981838 kubelet[2372]: I0130 13:45:59.981814 2372 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:45:59.981838 kubelet[2372]: I0130 13:45:59.981835 2372 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:45:59.981921 kubelet[2372]: I0130 13:45:59.981864 2372 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:45:59.981921 kubelet[2372]: I0130 13:45:59.981887 2372 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:45:59.985353 kubelet[2372]: W0130 13:45:59.985200 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:45:59.985353 kubelet[2372]: E0130 13:45:59.985274 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:45:59.985459 kubelet[2372]: W0130 13:45:59.985357 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:45:59.985459 kubelet[2372]: E0130 13:45:59.985397 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:45:59.987930 kubelet[2372]: I0130 13:45:59.987911 2372 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:45:59.989808 kubelet[2372]: I0130 13:45:59.989785 2372 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:45:59.989866 kubelet[2372]: W0130 13:45:59.989848 2372 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
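[Editor's note] Every failure above ends in "connect: connection refused" for the same reason: this kubelet runs on the control-plane node, and the kube-apiserver it is dialing at 10.0.0.108:6443 is one of the static pods (from /etc/kubernetes/manifests) that it has not started yet, so the CSR bootstrap and all client-go reflectors fail until that sandbox comes up. A minimal client-go sketch of the reflector's failing List call, assuming a kubeadm-style kubeconfig at /etc/kubernetes/kubelet.conf (that path is an assumption, not taken from this log):

```go
// Sketch only: reproduce the reflector's Node list request seen in the log.
// While kube-apiserver is still down, this fails with "connection refused".
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path (kubeadm convention), pointing at 10.0.0.108:6443.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same shape as the log's request:
	// GET /api/v1/nodes?fieldSelector=metadata.name=localhost&limit=500
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=localhost",
		Limit:         500,
	})
	if err != nil {
		fmt.Println("list failed (expected while the apiserver boots):", err)
		return
	}
	fmt.Println("nodes listed:", len(nodes.Items))
}
```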
Jan 30 13:45:59.990528 kubelet[2372]: I0130 13:45:59.990506 2372 server.go:1264] "Started kubelet" Jan 30 13:45:59.991098 kubelet[2372]: I0130 13:45:59.991027 2372 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:45:59.991411 kubelet[2372]: I0130 13:45:59.991358 2372 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:45:59.991825 kubelet[2372]: I0130 13:45:59.991799 2372 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:45:59.992452 kubelet[2372]: I0130 13:45:59.992205 2372 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:45:59.993935 kubelet[2372]: I0130 13:45:59.993732 2372 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:45:59.993935 kubelet[2372]: I0130 13:45:59.993887 2372 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:45:59.995010 kubelet[2372]: E0130 13:45:59.994672 2372 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:45:59.995010 kubelet[2372]: E0130 13:45:59.994777 2372 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7c664beac0c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:45:59.990485188 +0000 UTC m=+0.427144126,LastTimestamp:2025-01-30 13:45:59.990485188 +0000 UTC m=+0.427144126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:45:59.995010 kubelet[2372]: E0130 13:45:59.994925 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" Jan 30 13:45:59.995010 kubelet[2372]: I0130 13:45:59.994979 2372 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:45:59.995167 kubelet[2372]: I0130 13:45:59.995026 2372 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:45:59.995770 kubelet[2372]: I0130 13:45:59.995746 2372 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:45:59.995878 kubelet[2372]: I0130 13:45:59.995835 2372 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:45:59.996015 kubelet[2372]: W0130 13:45:59.995985 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:45:59.996095 kubelet[2372]: E0130 13:45:59.996086 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:45:59.996254 kubelet[2372]: E0130 13:45:59.996234 2372 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:45:59.997020 kubelet[2372]: I0130 13:45:59.996996 2372 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:46:00.010030 kubelet[2372]: I0130 13:46:00.009906 2372 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:46:00.011355 kubelet[2372]: I0130 13:46:00.011276 2372 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:46:00.011355 kubelet[2372]: I0130 13:46:00.011311 2372 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:46:00.011355 kubelet[2372]: I0130 13:46:00.011328 2372 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:46:00.011501 kubelet[2372]: E0130 13:46:00.011365 2372 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:46:00.014275 kubelet[2372]: W0130 13:46:00.014247 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:46:00.014275 kubelet[2372]: E0130 13:46:00.014279 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:46:00.019559 kubelet[2372]: I0130 13:46:00.019542 2372 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:46:00.019639 kubelet[2372]: I0130 13:46:00.019628 2372 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:46:00.019694 kubelet[2372]: I0130 13:46:00.019678 2372 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:00.096704 kubelet[2372]: I0130 13:46:00.096662 2372 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:00.097018 kubelet[2372]: E0130 13:46:00.096989 2372 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Jan 30 13:46:00.112165 kubelet[2372]: E0130 13:46:00.112129 2372 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:46:00.195676 kubelet[2372]: E0130 13:46:00.195639 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms" Jan 30 13:46:00.299295 kubelet[2372]: I0130 13:46:00.299169 2372 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:00.299558 kubelet[2372]: E0130 13:46:00.299517 2372 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" 
node="localhost" Jan 30 13:46:00.307412 kubelet[2372]: I0130 13:46:00.307381 2372 policy_none.go:49] "None policy: Start" Jan 30 13:46:00.308001 kubelet[2372]: I0130 13:46:00.307981 2372 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:46:00.308075 kubelet[2372]: I0130 13:46:00.308024 2372 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:46:00.312520 kubelet[2372]: E0130 13:46:00.312492 2372 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:46:00.316484 kubelet[2372]: I0130 13:46:00.316463 2372 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:46:00.316685 kubelet[2372]: I0130 13:46:00.316647 2372 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:46:00.316786 kubelet[2372]: I0130 13:46:00.316770 2372 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:46:00.320640 kubelet[2372]: E0130 13:46:00.320603 2372 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:46:00.596542 kubelet[2372]: E0130 13:46:00.596412 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" Jan 30 13:46:00.701112 kubelet[2372]: I0130 13:46:00.701066 2372 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:00.701504 kubelet[2372]: E0130 13:46:00.701386 2372 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Jan 30 13:46:00.713583 kubelet[2372]: I0130 13:46:00.713535 2372 topology_manager.go:215] "Topology Admit Handler" podUID="a809a2cbd093d258a3490b18720e4f7b" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:46:00.714689 kubelet[2372]: I0130 13:46:00.714661 2372 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:46:00.715910 kubelet[2372]: I0130 13:46:00.715554 2372 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:46:00.801032 kubelet[2372]: I0130 13:46:00.800992 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a809a2cbd093d258a3490b18720e4f7b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a809a2cbd093d258a3490b18720e4f7b\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:00.801099 kubelet[2372]: I0130 13:46:00.801034 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:00.801099 kubelet[2372]: I0130 13:46:00.801061 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:00.801099 kubelet[2372]: I0130 13:46:00.801086 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:00.801195 kubelet[2372]: I0130 13:46:00.801108 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a809a2cbd093d258a3490b18720e4f7b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a809a2cbd093d258a3490b18720e4f7b\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:00.801195 kubelet[2372]: I0130 13:46:00.801130 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a809a2cbd093d258a3490b18720e4f7b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a809a2cbd093d258a3490b18720e4f7b\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:00.801195 kubelet[2372]: I0130 13:46:00.801151 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:00.801195 kubelet[2372]: I0130 13:46:00.801175 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:00.801293 kubelet[2372]: I0130 13:46:00.801196 2372 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:00.954897 kubelet[2372]: W0130 13:46:00.954801 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:46:00.954897 kubelet[2372]: E0130 13:46:00.954853 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:46:00.981321 kubelet[2372]: W0130 13:46:00.981272 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:46:00.981321 kubelet[2372]: E0130 13:46:00.981311 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:46:01.019371 kubelet[2372]: E0130 13:46:01.019334 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:01.019852 containerd[1578]: time="2025-01-30T13:46:01.019812396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a809a2cbd093d258a3490b18720e4f7b,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:01.020955 kubelet[2372]: E0130 13:46:01.020937 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:01.021236 containerd[1578]: time="2025-01-30T13:46:01.021209736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:01.022405 kubelet[2372]: E0130 13:46:01.022384 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:01.022703 containerd[1578]: time="2025-01-30T13:46:01.022675765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:01.068374 kubelet[2372]: W0130 13:46:01.068314 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:46:01.068470 kubelet[2372]: E0130 13:46:01.068382 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:46:01.397641 kubelet[2372]: E0130 13:46:01.397514 2372 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="1.6s" Jan 30 13:46:01.503274 kubelet[2372]: I0130 13:46:01.503235 2372 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:01.503599 kubelet[2372]: E0130 13:46:01.503566 2372 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Jan 30 13:46:01.566041 kubelet[2372]: W0130 13:46:01.565988 2372 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:46:01.566041 
kubelet[2372]: E0130 13:46:01.566038 2372 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:46:02.075057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806389333.mount: Deactivated successfully. Jan 30 13:46:02.077866 kubelet[2372]: E0130 13:46:02.077835 2372 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.108:6443: connect: connection refused Jan 30 13:46:02.082739 containerd[1578]: time="2025-01-30T13:46:02.082690573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:02.083649 containerd[1578]: time="2025-01-30T13:46:02.083608955Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:02.084512 containerd[1578]: time="2025-01-30T13:46:02.084483384Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:02.085212 containerd[1578]: time="2025-01-30T13:46:02.085149273Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:46:02.086077 containerd[1578]: time="2025-01-30T13:46:02.086019695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:46:02.086970 containerd[1578]: time="2025-01-30T13:46:02.086943266Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:46:02.087853 containerd[1578]: time="2025-01-30T13:46:02.087810332Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:02.091752 containerd[1578]: time="2025-01-30T13:46:02.091699835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:02.092527 containerd[1578]: time="2025-01-30T13:46:02.092489445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.071229384s" Jan 30 13:46:02.093744 containerd[1578]: time="2025-01-30T13:46:02.093698162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.073810264s" Jan 30 13:46:02.094908 containerd[1578]: time="2025-01-30T13:46:02.094873385Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.072124132s" Jan 30 13:46:02.261423 containerd[1578]: time="2025-01-30T13:46:02.261310675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:02.261423 containerd[1578]: time="2025-01-30T13:46:02.261361931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:02.261423 containerd[1578]: time="2025-01-30T13:46:02.261397979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:02.261603 containerd[1578]: time="2025-01-30T13:46:02.261526861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:02.262650 containerd[1578]: time="2025-01-30T13:46:02.262526565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:02.262650 containerd[1578]: time="2025-01-30T13:46:02.262588241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:02.262650 containerd[1578]: time="2025-01-30T13:46:02.262598790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:02.263337 containerd[1578]: time="2025-01-30T13:46:02.263066277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:02.264246 containerd[1578]: time="2025-01-30T13:46:02.264170898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:02.264296 containerd[1578]: time="2025-01-30T13:46:02.264265175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:02.264329 containerd[1578]: time="2025-01-30T13:46:02.264294600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:02.264502 containerd[1578]: time="2025-01-30T13:46:02.264468977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:02.318204 containerd[1578]: time="2025-01-30T13:46:02.318163825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8710e3294be8170b85bff7bec479de2c924b39bfb0cd0244c312e3cd129f087c\"" Jan 30 13:46:02.318919 containerd[1578]: time="2025-01-30T13:46:02.318604151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a809a2cbd093d258a3490b18720e4f7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cd9fc2298ae97f8cb4c6d2914a215d603571e3d135812921ebecfdbefb19fbb\"" Jan 30 13:46:02.319619 kubelet[2372]: E0130 13:46:02.319592 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:02.319928 containerd[1578]: time="2025-01-30T13:46:02.319909658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cf33e4a1d6443b8d4e2bc67f3e2a7fc6a0be3d2dea55dea889087faf8d917a7\"" Jan 30 13:46:02.319977 kubelet[2372]: E0130 13:46:02.319940 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:02.320763 kubelet[2372]: E0130 13:46:02.320444 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:02.322568 containerd[1578]: time="2025-01-30T13:46:02.322540411Z" level=info msg="CreateContainer within sandbox \"4cf33e4a1d6443b8d4e2bc67f3e2a7fc6a0be3d2dea55dea889087faf8d917a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:46:02.323514 containerd[1578]: time="2025-01-30T13:46:02.322709238Z" level=info msg="CreateContainer within sandbox \"8710e3294be8170b85bff7bec479de2c924b39bfb0cd0244c312e3cd129f087c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:46:02.323586 containerd[1578]: time="2025-01-30T13:46:02.323378442Z" level=info msg="CreateContainer within sandbox \"8cd9fc2298ae97f8cb4c6d2914a215d603571e3d135812921ebecfdbefb19fbb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:46:02.439044 containerd[1578]: time="2025-01-30T13:46:02.438927697Z" level=info msg="CreateContainer within sandbox \"8710e3294be8170b85bff7bec479de2c924b39bfb0cd0244c312e3cd129f087c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c441641adf1dd85f4d3dd937ea0c5e475f6b322ec24c98bdfc6b1cfa5161de59\"" Jan 30 13:46:02.439542 containerd[1578]: time="2025-01-30T13:46:02.439516431Z" level=info msg="StartContainer for \"c441641adf1dd85f4d3dd937ea0c5e475f6b322ec24c98bdfc6b1cfa5161de59\"" Jan 30 13:46:02.440234 containerd[1578]: time="2025-01-30T13:46:02.440192459Z" level=info msg="CreateContainer within sandbox \"4cf33e4a1d6443b8d4e2bc67f3e2a7fc6a0be3d2dea55dea889087faf8d917a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f5b3c267c6b146a430eb3fbd8ff255cc15d38a6658114b0655d94a5687c63f70\"" Jan 30 13:46:02.440643 containerd[1578]: time="2025-01-30T13:46:02.440608929Z" level=info msg="StartContainer for 
\"f5b3c267c6b146a430eb3fbd8ff255cc15d38a6658114b0655d94a5687c63f70\"" Jan 30 13:46:02.443091 containerd[1578]: time="2025-01-30T13:46:02.443059935Z" level=info msg="CreateContainer within sandbox \"8cd9fc2298ae97f8cb4c6d2914a215d603571e3d135812921ebecfdbefb19fbb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c75a86d722745a6ac261b850545ccb787eecb9b45bb9b8e2f54d6d97f970bb58\"" Jan 30 13:46:02.443428 containerd[1578]: time="2025-01-30T13:46:02.443396155Z" level=info msg="StartContainer for \"c75a86d722745a6ac261b850545ccb787eecb9b45bb9b8e2f54d6d97f970bb58\"" Jan 30 13:46:02.512517 containerd[1578]: time="2025-01-30T13:46:02.512470209Z" level=info msg="StartContainer for \"f5b3c267c6b146a430eb3fbd8ff255cc15d38a6658114b0655d94a5687c63f70\" returns successfully" Jan 30 13:46:02.512703 containerd[1578]: time="2025-01-30T13:46:02.512677087Z" level=info msg="StartContainer for \"c441641adf1dd85f4d3dd937ea0c5e475f6b322ec24c98bdfc6b1cfa5161de59\" returns successfully" Jan 30 13:46:02.514371 containerd[1578]: time="2025-01-30T13:46:02.512974365Z" level=info msg="StartContainer for \"c75a86d722745a6ac261b850545ccb787eecb9b45bb9b8e2f54d6d97f970bb58\" returns successfully" Jan 30 13:46:03.024356 kubelet[2372]: E0130 13:46:03.024288 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:03.027160 kubelet[2372]: E0130 13:46:03.026942 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:03.029009 kubelet[2372]: E0130 13:46:03.028912 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:03.105956 kubelet[2372]: I0130 13:46:03.105726 2372 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:03.247778 kubelet[2372]: E0130 13:46:03.246979 2372 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:46:03.345323 kubelet[2372]: I0130 13:46:03.343342 2372 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:46:03.356808 kubelet[2372]: E0130 13:46:03.356774 2372 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:03.457491 kubelet[2372]: E0130 13:46:03.457449 2372 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:03.558447 kubelet[2372]: E0130 13:46:03.558393 2372 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:03.659317 kubelet[2372]: E0130 13:46:03.659203 2372 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:03.759794 kubelet[2372]: E0130 13:46:03.759767 2372 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:03.860324 kubelet[2372]: E0130 13:46:03.860286 2372 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:03.960931 kubelet[2372]: E0130 13:46:03.960810 2372 kubelet_node_status.go:462] "Error getting the 
current node from lister" err="node \"localhost\" not found" Jan 30 13:46:04.031362 kubelet[2372]: E0130 13:46:04.031321 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:04.060922 kubelet[2372]: E0130 13:46:04.060894 2372 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:04.988557 kubelet[2372]: I0130 13:46:04.988520 2372 apiserver.go:52] "Watching apiserver" Jan 30 13:46:04.995824 kubelet[2372]: I0130 13:46:04.995787 2372 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:46:05.193464 kubelet[2372]: E0130 13:46:05.193422 2372 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:05.493686 systemd[1]: Reloading requested from client PID 2653 ('systemctl') (unit session-7.scope)... Jan 30 13:46:05.493706 systemd[1]: Reloading... Jan 30 13:46:05.571749 zram_generator::config[2695]: No configuration found. Jan 30 13:46:05.687708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:46:05.765587 systemd[1]: Reloading finished in 271 ms. Jan 30 13:46:05.800611 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:05.817203 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:46:05.817600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:05.829925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:05.977630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:05.982965 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:46:06.029254 kubelet[2747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:46:06.029254 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:46:06.029254 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
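[Editor's note] The three deprecation warnings printed by the restarted kubelet (PID 2747) all point at the same fix: move the flags into the KubeletConfiguration file. A hedged sketch of what such a file could look like for this node, rendered from the Go config types; the eviction thresholds mirror the NodeConfig dump in this log, while the runtime socket is a conventional containerd default, and the plugin dir matches the flexvolume path the kubelet recreated earlier:

```go
// Sketch only, not this node's actual config file.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Replaces --container-runtime-endpoint (assumed containerd socket).
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		// Replaces --volume-plugin-dir; path as recreated earlier in this log.
		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
		// Mirrors the HardEvictionThresholds in the NodeConfig dump above.
		EvictionHard: map[string]string{
			"memory.available":   "100Mi",
			"nodefs.available":   "10%",
			"nodefs.inodesFree":  "5%",
			"imagefs.available":  "15%",
			"imagefs.inodesFree": "5%",
		},
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```

Note that --pod-infra-container-image stays a flag: as the log itself says, the sandbox image is now owned by the remote runtime, not by this config file.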
Jan 30 13:46:06.029254 kubelet[2747]: I0130 13:46:06.029222 2747 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:46:06.034310 kubelet[2747]: I0130 13:46:06.034272 2747 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:46:06.034310 kubelet[2747]: I0130 13:46:06.034306 2747 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:46:06.034481 kubelet[2747]: I0130 13:46:06.034455 2747 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:46:06.036274 kubelet[2747]: I0130 13:46:06.036242 2747 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:46:06.037781 kubelet[2747]: I0130 13:46:06.037750 2747 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:46:06.045568 kubelet[2747]: I0130 13:46:06.045539 2747 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:46:06.047559 kubelet[2747]: I0130 13:46:06.047525 2747 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:46:06.047722 kubelet[2747]: I0130 13:46:06.047550 2747 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:46:06.047799 kubelet[2747]: I0130 13:46:06.047739 2747 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:46:06.047799 kubelet[2747]: I0130 13:46:06.047751 2747 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:46:06.047799 kubelet[2747]: I0130 13:46:06.047793 2747 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:06.047931 kubelet[2747]: I0130 13:46:06.047879 2747 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:46:06.047931 kubelet[2747]: I0130 13:46:06.047892 2747 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Jan 30 13:46:06.047931 kubelet[2747]: I0130 13:46:06.047912 2747 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:46:06.047991 kubelet[2747]: I0130 13:46:06.047935 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:46:06.048869 kubelet[2747]: I0130 13:46:06.048850 2747 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:46:06.049044 kubelet[2747]: I0130 13:46:06.049022 2747 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:46:06.049432 kubelet[2747]: I0130 13:46:06.049412 2747 server.go:1264] "Started kubelet" Jan 30 13:46:06.050748 kubelet[2747]: I0130 13:46:06.050615 2747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:46:06.050949 kubelet[2747]: I0130 13:46:06.050926 2747 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:46:06.051013 kubelet[2747]: I0130 13:46:06.050965 2747 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:46:06.052062 kubelet[2747]: I0130 13:46:06.052040 2747 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:46:06.058821 kubelet[2747]: I0130 13:46:06.058499 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:46:06.059193 kubelet[2747]: E0130 13:46:06.059168 2747 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:46:06.061340 kubelet[2747]: I0130 13:46:06.060988 2747 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:46:06.061340 kubelet[2747]: I0130 13:46:06.061108 2747 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:46:06.061340 kubelet[2747]: I0130 13:46:06.061241 2747 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:46:06.063999 kubelet[2747]: I0130 13:46:06.063978 2747 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:46:06.064211 kubelet[2747]: I0130 13:46:06.064181 2747 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:46:06.066113 kubelet[2747]: I0130 13:46:06.066093 2747 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:46:06.070032 kubelet[2747]: I0130 13:46:06.069988 2747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:46:06.071143 kubelet[2747]: I0130 13:46:06.071111 2747 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:46:06.071143 kubelet[2747]: I0130 13:46:06.071142 2747 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:46:06.071206 kubelet[2747]: I0130 13:46:06.071165 2747 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:46:06.071234 kubelet[2747]: E0130 13:46:06.071210 2747 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:46:06.113210 kubelet[2747]: I0130 13:46:06.113173 2747 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:46:06.113210 kubelet[2747]: I0130 13:46:06.113198 2747 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:46:06.113210 kubelet[2747]: I0130 13:46:06.113219 2747 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:06.113388 kubelet[2747]: I0130 13:46:06.113363 2747 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:46:06.113388 kubelet[2747]: I0130 13:46:06.113374 2747 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:46:06.113440 kubelet[2747]: I0130 13:46:06.113392 2747 policy_none.go:49] "None policy: Start" Jan 30 13:46:06.113925 kubelet[2747]: I0130 13:46:06.113908 2747 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:46:06.113960 kubelet[2747]: I0130 13:46:06.113929 2747 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:46:06.114047 kubelet[2747]: I0130 13:46:06.114033 2747 state_mem.go:75] "Updated machine memory state" Jan 30 13:46:06.115546 kubelet[2747]: I0130 13:46:06.115523 2747 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:46:06.115743 kubelet[2747]: I0130 13:46:06.115691 2747 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:46:06.116287 kubelet[2747]: I0130 13:46:06.115975 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:46:06.171840 kubelet[2747]: I0130 13:46:06.171791 2747 topology_manager.go:215] "Topology Admit Handler" podUID="a809a2cbd093d258a3490b18720e4f7b" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:46:06.171920 kubelet[2747]: I0130 13:46:06.171876 2747 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:46:06.171943 kubelet[2747]: I0130 13:46:06.171921 2747 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:46:06.221144 kubelet[2747]: I0130 13:46:06.221098 2747 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:46:06.254354 kubelet[2747]: E0130 13:46:06.254279 2747 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:06.265516 kubelet[2747]: I0130 13:46:06.265488 2747 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 30 13:46:06.265672 kubelet[2747]: I0130 13:46:06.265594 2747 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:46:06.362638 kubelet[2747]: I0130 13:46:06.362444 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:06.362638 kubelet[2747]: I0130 13:46:06.362492 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a809a2cbd093d258a3490b18720e4f7b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a809a2cbd093d258a3490b18720e4f7b\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:06.362638 kubelet[2747]: I0130 13:46:06.362520 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a809a2cbd093d258a3490b18720e4f7b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a809a2cbd093d258a3490b18720e4f7b\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:06.362638 kubelet[2747]: I0130 13:46:06.362543 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:06.362638 kubelet[2747]: I0130 13:46:06.362568 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:06.362899 kubelet[2747]: I0130 13:46:06.362588 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:06.362899 kubelet[2747]: I0130 13:46:06.362607 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a809a2cbd093d258a3490b18720e4f7b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a809a2cbd093d258a3490b18720e4f7b\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:06.362899 kubelet[2747]: I0130 13:46:06.362647 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:06.362899 kubelet[2747]: I0130 13:46:06.362673 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:06.506389 sudo[2781]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 
30 13:46:06.506767 sudo[2781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 30 13:46:06.536506 kubelet[2747]: E0130 13:46:06.536430 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:06.536506 kubelet[2747]: E0130 13:46:06.536441 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:06.555215 kubelet[2747]: E0130 13:46:06.555191 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:06.962313 sudo[2781]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:07.049211 kubelet[2747]: I0130 13:46:07.049162 2747 apiserver.go:52] "Watching apiserver" Jan 30 13:46:07.061246 kubelet[2747]: I0130 13:46:07.061225 2747 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:46:07.088469 kubelet[2747]: E0130 13:46:07.088074 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:07.088469 kubelet[2747]: E0130 13:46:07.088289 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:07.093069 kubelet[2747]: E0130 13:46:07.093051 2747 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:07.093570 kubelet[2747]: E0130 13:46:07.093504 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:07.111340 kubelet[2747]: I0130 13:46:07.111215 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.111198023 podStartE2EDuration="2.111198023s" podCreationTimestamp="2025-01-30 13:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:07.105682251 +0000 UTC m=+1.118790293" watchObservedRunningTime="2025-01-30 13:46:07.111198023 +0000 UTC m=+1.124306055" Jan 30 13:46:07.111340 kubelet[2747]: I0130 13:46:07.111319 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.1113153630000001 podStartE2EDuration="1.111315363s" podCreationTimestamp="2025-01-30 13:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:07.110627323 +0000 UTC m=+1.123735365" watchObservedRunningTime="2025-01-30 13:46:07.111315363 +0000 UTC m=+1.124423405" Jan 30 13:46:08.089027 kubelet[2747]: E0130 13:46:08.088994 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:08.195204 sudo[1771]: pam_unix(sudo:session): session closed for user root 
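[Editor's note] The recurring dns.go:153 warning means /etc/resolv.conf on this host lists more nameservers than the kubelet will propagate to pods: it applies the first three (1.1.1.1 1.0.0.1 8.8.8.8) and omits the rest. A standalone sketch of that truncation, assuming a cap of three as shown by the applied line in this log:

```go
// Sketch only: mimic the kubelet's nameserver truncation behind
// "Nameserver limits exceeded".
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Assumed limit; three servers are what this log shows being applied.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	if len(servers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s); applied line: %s\n",
			len(servers)-maxNameservers,
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Println("applied line:", strings.Join(servers, " "))
}
```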
Jan 30 13:46:08.197207 sshd[1764]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:08.201978 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:52456.service: Deactivated successfully. Jan 30 13:46:08.204244 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:46:08.204882 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:46:08.205863 systemd-logind[1550]: Removed session 7. Jan 30 13:46:08.431054 kubelet[2747]: E0130 13:46:08.430911 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:08.829559 kubelet[2747]: E0130 13:46:08.829444 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:09.090697 kubelet[2747]: E0130 13:46:09.090586 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:18.364423 kubelet[2747]: E0130 13:46:18.364383 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:18.373408 kubelet[2747]: I0130 13:46:18.373340 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=12.373323315 podStartE2EDuration="12.373323315s" podCreationTimestamp="2025-01-30 13:46:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:07.115678253 +0000 UTC m=+1.128786295" watchObservedRunningTime="2025-01-30 13:46:18.373323315 +0000 UTC m=+12.386431357" Jan 30 13:46:18.433915 kubelet[2747]: E0130 13:46:18.433873 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:18.832869 kubelet[2747]: E0130 13:46:18.832491 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:20.307597 update_engine[1557]: I20250130 13:46:20.307538 1557 update_attempter.cc:509] Updating boot flags... Jan 30 13:46:20.332741 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2832) Jan 30 13:46:20.364750 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2830) Jan 30 13:46:20.397458 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2830) Jan 30 13:46:20.962428 kubelet[2747]: I0130 13:46:20.962386 2747 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:46:20.962910 containerd[1578]: time="2025-01-30T13:46:20.962783048Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
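[Editor's note] The kuberuntime_manager line above is the kubelet pushing the freshly assigned pod CIDR to the runtime over CRI, and containerd's "No cni config template is specified" reply is the expected answer until a CNI plugin (Cilium here) drops its config. A hedged sketch of that UpdateRuntimeConfig call; the socket path is the conventional containerd location, not read from this log:

```go
// Sketch only: push a pod CIDR to a CRI runtime the way the kubelet does.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path (containerd default).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimev1.NewRuntimeServiceClient(conn)
	_, err = rt.UpdateRuntimeConfig(context.TODO(), &runtimev1.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimev1.RuntimeConfig{
			NetworkConfig: &runtimev1.NetworkConfig{
				PodCidr: "192.168.0.0/24", // value from the log line above
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod CIDR pushed to the runtime")
}
```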
Jan 30 13:46:20.963219 kubelet[2747]: I0130 13:46:20.962957 2747 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:46:21.528653 kubelet[2747]: I0130 13:46:21.528287 2747 topology_manager.go:215] "Topology Admit Handler" podUID="b1911d59-d0d1-428a-a76f-a9ca1494f8b3" podNamespace="kube-system" podName="kube-proxy-x2l6z" Jan 30 13:46:21.533326 kubelet[2747]: I0130 13:46:21.533277 2747 topology_manager.go:215] "Topology Admit Handler" podUID="86c18653-e014-495b-998c-3a522d5a8eeb" podNamespace="kube-system" podName="cilium-l4jd7" Jan 30 13:46:21.558991 kubelet[2747]: I0130 13:46:21.558946 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86c18653-e014-495b-998c-3a522d5a8eeb-hubble-tls\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.558991 kubelet[2747]: I0130 13:46:21.558984 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b1911d59-d0d1-428a-a76f-a9ca1494f8b3-kube-proxy\") pod \"kube-proxy-x2l6z\" (UID: \"b1911d59-d0d1-428a-a76f-a9ca1494f8b3\") " pod="kube-system/kube-proxy-x2l6z" Jan 30 13:46:21.558991 kubelet[2747]: I0130 13:46:21.559002 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-etc-cni-netd\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559187 kubelet[2747]: I0130 13:46:21.559017 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86c18653-e014-495b-998c-3a522d5a8eeb-clustermesh-secrets\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559187 kubelet[2747]: I0130 13:46:21.559032 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-host-proc-sys-net\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559187 kubelet[2747]: I0130 13:46:21.559048 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1911d59-d0d1-428a-a76f-a9ca1494f8b3-lib-modules\") pod \"kube-proxy-x2l6z\" (UID: \"b1911d59-d0d1-428a-a76f-a9ca1494f8b3\") " pod="kube-system/kube-proxy-x2l6z" Jan 30 13:46:21.559187 kubelet[2747]: I0130 13:46:21.559064 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cni-path\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559187 kubelet[2747]: I0130 13:46:21.559108 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-xtables-lock\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " 
pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559187 kubelet[2747]: I0130 13:46:21.559133 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-cgroup\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559339 kubelet[2747]: I0130 13:46:21.559152 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-host-proc-sys-kernel\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559339 kubelet[2747]: I0130 13:46:21.559165 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-bpf-maps\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559339 kubelet[2747]: I0130 13:46:21.559181 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-hostproc\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559339 kubelet[2747]: I0130 13:46:21.559199 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1911d59-d0d1-428a-a76f-a9ca1494f8b3-xtables-lock\") pod \"kube-proxy-x2l6z\" (UID: \"b1911d59-d0d1-428a-a76f-a9ca1494f8b3\") " pod="kube-system/kube-proxy-x2l6z" Jan 30 13:46:21.559339 kubelet[2747]: I0130 13:46:21.559220 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb8p7\" (UniqueName: \"kubernetes.io/projected/b1911d59-d0d1-428a-a76f-a9ca1494f8b3-kube-api-access-hb8p7\") pod \"kube-proxy-x2l6z\" (UID: \"b1911d59-d0d1-428a-a76f-a9ca1494f8b3\") " pod="kube-system/kube-proxy-x2l6z" Jan 30 13:46:21.559339 kubelet[2747]: I0130 13:46:21.559234 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-lib-modules\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559460 kubelet[2747]: I0130 13:46:21.559275 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdl9b\" (UniqueName: \"kubernetes.io/projected/86c18653-e014-495b-998c-3a522d5a8eeb-kube-api-access-jdl9b\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559460 kubelet[2747]: I0130 13:46:21.559290 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-run\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.559460 kubelet[2747]: I0130 13:46:21.559332 2747 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-config-path\") pod \"cilium-l4jd7\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " pod="kube-system/cilium-l4jd7" Jan 30 13:46:21.832200 kubelet[2747]: E0130 13:46:21.832080 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:21.832926 containerd[1578]: time="2025-01-30T13:46:21.832775234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2l6z,Uid:b1911d59-d0d1-428a-a76f-a9ca1494f8b3,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:21.838549 kubelet[2747]: E0130 13:46:21.838472 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:21.838931 containerd[1578]: time="2025-01-30T13:46:21.838887616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l4jd7,Uid:86c18653-e014-495b-998c-3a522d5a8eeb,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:21.861013 containerd[1578]: time="2025-01-30T13:46:21.860921185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:21.861325 containerd[1578]: time="2025-01-30T13:46:21.860988021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:21.861325 containerd[1578]: time="2025-01-30T13:46:21.861137855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:21.861325 containerd[1578]: time="2025-01-30T13:46:21.861274062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:21.867673 containerd[1578]: time="2025-01-30T13:46:21.867583938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:21.867848 containerd[1578]: time="2025-01-30T13:46:21.867640495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:21.867961 containerd[1578]: time="2025-01-30T13:46:21.867832007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:21.868168 containerd[1578]: time="2025-01-30T13:46:21.868109042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:21.917802 containerd[1578]: time="2025-01-30T13:46:21.916355965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2l6z,Uid:b1911d59-d0d1-428a-a76f-a9ca1494f8b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"727676f4e22c3138eb31c001f353c46cdb53dc41e449a8e60170e7c9089b8427\"" Jan 30 13:46:21.918943 kubelet[2747]: E0130 13:46:21.918773 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:21.928101 containerd[1578]: time="2025-01-30T13:46:21.927901254Z" level=info msg="CreateContainer within sandbox \"727676f4e22c3138eb31c001f353c46cdb53dc41e449a8e60170e7c9089b8427\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:46:21.933241 containerd[1578]: time="2025-01-30T13:46:21.933198103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l4jd7,Uid:86c18653-e014-495b-998c-3a522d5a8eeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\"" Jan 30 13:46:21.941925 kubelet[2747]: I0130 13:46:21.941638 2747 topology_manager.go:215] "Topology Admit Handler" podUID="d699227b-dde2-440c-8aa5-1301fce7f0cb" podNamespace="kube-system" podName="cilium-operator-599987898-f45dk" Jan 30 13:46:21.943611 kubelet[2747]: E0130 13:46:21.943140 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:21.946478 containerd[1578]: time="2025-01-30T13:46:21.946264760Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:46:21.962491 kubelet[2747]: I0130 13:46:21.962092 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d699227b-dde2-440c-8aa5-1301fce7f0cb-cilium-config-path\") pod \"cilium-operator-599987898-f45dk\" (UID: \"d699227b-dde2-440c-8aa5-1301fce7f0cb\") " pod="kube-system/cilium-operator-599987898-f45dk" Jan 30 13:46:21.962491 kubelet[2747]: I0130 13:46:21.962143 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c47zt\" (UniqueName: \"kubernetes.io/projected/d699227b-dde2-440c-8aa5-1301fce7f0cb-kube-api-access-c47zt\") pod \"cilium-operator-599987898-f45dk\" (UID: \"d699227b-dde2-440c-8aa5-1301fce7f0cb\") " pod="kube-system/cilium-operator-599987898-f45dk" Jan 30 13:46:21.972694 containerd[1578]: time="2025-01-30T13:46:21.972642426Z" level=info msg="CreateContainer within sandbox \"727676f4e22c3138eb31c001f353c46cdb53dc41e449a8e60170e7c9089b8427\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"44170d1333d8e69a83b1d5d9bd1079b95ffe698d4a9a2cc2be325f4964219170\"" Jan 30 13:46:21.973531 containerd[1578]: time="2025-01-30T13:46:21.973501020Z" level=info msg="StartContainer for \"44170d1333d8e69a83b1d5d9bd1079b95ffe698d4a9a2cc2be325f4964219170\"" Jan 30 13:46:22.038397 containerd[1578]: time="2025-01-30T13:46:22.038345261Z" level=info msg="StartContainer for \"44170d1333d8e69a83b1d5d9bd1079b95ffe698d4a9a2cc2be325f4964219170\" returns successfully" Jan 30 13:46:22.110965 kubelet[2747]: E0130 13:46:22.110860 2747 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:22.123333 kubelet[2747]: I0130 13:46:22.123224 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x2l6z" podStartSLOduration=1.122988926 podStartE2EDuration="1.122988926s" podCreationTimestamp="2025-01-30 13:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:22.12288579 +0000 UTC m=+16.135993822" watchObservedRunningTime="2025-01-30 13:46:22.122988926 +0000 UTC m=+16.136096968" Jan 30 13:46:22.251266 kubelet[2747]: E0130 13:46:22.251221 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:22.251967 containerd[1578]: time="2025-01-30T13:46:22.251855875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-f45dk,Uid:d699227b-dde2-440c-8aa5-1301fce7f0cb,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:22.283333 containerd[1578]: time="2025-01-30T13:46:22.283212637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:22.283333 containerd[1578]: time="2025-01-30T13:46:22.283263063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:22.283333 containerd[1578]: time="2025-01-30T13:46:22.283275566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:22.283609 containerd[1578]: time="2025-01-30T13:46:22.283374384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:22.340168 containerd[1578]: time="2025-01-30T13:46:22.340126817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-f45dk,Uid:d699227b-dde2-440c-8aa5-1301fce7f0cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb5b678de1420057e3eb1258bf45c63c92a057225aca56e2310e6ffe785c48d8\"" Jan 30 13:46:22.340962 kubelet[2747]: E0130 13:46:22.340913 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:25.659397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034168297.mount: Deactivated successfully. 
Jan 30 13:46:27.714192 containerd[1578]: time="2025-01-30T13:46:27.714144877Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:27.715366 containerd[1578]: time="2025-01-30T13:46:27.715067568Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:46:27.717198 containerd[1578]: time="2025-01-30T13:46:27.716685520Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:27.718282 containerd[1578]: time="2025-01-30T13:46:27.718224743Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.771896413s" Jan 30 13:46:27.718330 containerd[1578]: time="2025-01-30T13:46:27.718289665Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:46:27.719939 containerd[1578]: time="2025-01-30T13:46:27.719903339Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:46:27.721216 containerd[1578]: time="2025-01-30T13:46:27.721175528Z" level=info msg="CreateContainer within sandbox \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:46:27.733692 containerd[1578]: time="2025-01-30T13:46:27.733630081Z" level=info msg="CreateContainer within sandbox \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada\"" Jan 30 13:46:27.733983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount120401050.mount: Deactivated successfully. 
Jan 30 13:46:27.734928 containerd[1578]: time="2025-01-30T13:46:27.734506204Z" level=info msg="StartContainer for \"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada\"" Jan 30 13:46:27.789933 containerd[1578]: time="2025-01-30T13:46:27.789873221Z" level=info msg="StartContainer for \"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada\" returns successfully" Jan 30 13:46:28.131593 kubelet[2747]: E0130 13:46:28.131485 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:28.361741 containerd[1578]: time="2025-01-30T13:46:28.359518612Z" level=info msg="shim disconnected" id=e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada namespace=k8s.io Jan 30 13:46:28.361889 containerd[1578]: time="2025-01-30T13:46:28.361750710Z" level=warning msg="cleaning up after shim disconnected" id=e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada namespace=k8s.io Jan 30 13:46:28.361889 containerd[1578]: time="2025-01-30T13:46:28.361775308Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:46:28.731544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada-rootfs.mount: Deactivated successfully. Jan 30 13:46:29.133790 kubelet[2747]: E0130 13:46:29.133759 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:29.136192 containerd[1578]: time="2025-01-30T13:46:29.136155884Z" level=info msg="CreateContainer within sandbox \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:46:29.151138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082753558.mount: Deactivated successfully. Jan 30 13:46:29.154209 containerd[1578]: time="2025-01-30T13:46:29.154171831Z" level=info msg="CreateContainer within sandbox \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f\"" Jan 30 13:46:29.154814 containerd[1578]: time="2025-01-30T13:46:29.154604548Z" level=info msg="StartContainer for \"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f\"" Jan 30 13:46:29.205445 containerd[1578]: time="2025-01-30T13:46:29.205409350Z" level=info msg="StartContainer for \"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f\" returns successfully" Jan 30 13:46:29.217920 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:46:29.218603 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:46:29.218675 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:46:29.228230 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 30 13:46:29.241486 containerd[1578]: time="2025-01-30T13:46:29.241429574Z" level=info msg="shim disconnected" id=800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f namespace=k8s.io Jan 30 13:46:29.241693 containerd[1578]: time="2025-01-30T13:46:29.241487844Z" level=warning msg="cleaning up after shim disconnected" id=800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f namespace=k8s.io Jan 30 13:46:29.241693 containerd[1578]: time="2025-01-30T13:46:29.241498665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:46:29.245005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:46:29.297861 systemd-resolved[1464]: Under memory pressure, flushing caches. Jan 30 13:46:29.297909 systemd-resolved[1464]: Flushed all caches. Jan 30 13:46:29.332742 systemd-journald[1150]: Under memory pressure, flushing caches. Jan 30 13:46:29.730844 systemd[1]: run-containerd-runc-k8s.io-800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f-runc.3p7dKP.mount: Deactivated successfully. Jan 30 13:46:29.731039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f-rootfs.mount: Deactivated successfully. Jan 30 13:46:29.805037 containerd[1578]: time="2025-01-30T13:46:29.804987216Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:29.805743 containerd[1578]: time="2025-01-30T13:46:29.805665594Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:46:29.806841 containerd[1578]: time="2025-01-30T13:46:29.806815281Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:29.808179 containerd[1578]: time="2025-01-30T13:46:29.808145899Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.088208547s" Jan 30 13:46:29.808207 containerd[1578]: time="2025-01-30T13:46:29.808179092Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:46:29.811138 containerd[1578]: time="2025-01-30T13:46:29.811098645Z" level=info msg="CreateContainer within sandbox \"eb5b678de1420057e3eb1258bf45c63c92a057225aca56e2310e6ffe785c48d8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:46:29.821485 containerd[1578]: time="2025-01-30T13:46:29.821459524Z" level=info msg="CreateContainer within sandbox \"eb5b678de1420057e3eb1258bf45c63c92a057225aca56e2310e6ffe785c48d8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\"" Jan 30 13:46:29.822138 containerd[1578]: time="2025-01-30T13:46:29.821890456Z" level=info 
msg="StartContainer for \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\"" Jan 30 13:46:29.874643 containerd[1578]: time="2025-01-30T13:46:29.874600459Z" level=info msg="StartContainer for \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\" returns successfully" Jan 30 13:46:30.136825 kubelet[2747]: E0130 13:46:30.136788 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:30.139567 kubelet[2747]: E0130 13:46:30.138896 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:30.140098 containerd[1578]: time="2025-01-30T13:46:30.140049561Z" level=info msg="CreateContainer within sandbox \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:46:30.158888 containerd[1578]: time="2025-01-30T13:46:30.158837152Z" level=info msg="CreateContainer within sandbox \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4\"" Jan 30 13:46:30.159994 kubelet[2747]: I0130 13:46:30.159588 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-f45dk" podStartSLOduration=1.692108629 podStartE2EDuration="9.159571225s" podCreationTimestamp="2025-01-30 13:46:21 +0000 UTC" firstStartedPulling="2025-01-30 13:46:22.341411876 +0000 UTC m=+16.354519918" lastFinishedPulling="2025-01-30 13:46:29.808874472 +0000 UTC m=+23.821982514" observedRunningTime="2025-01-30 13:46:30.159315072 +0000 UTC m=+24.172423114" watchObservedRunningTime="2025-01-30 13:46:30.159571225 +0000 UTC m=+24.172679267" Jan 30 13:46:30.160392 containerd[1578]: time="2025-01-30T13:46:30.160349191Z" level=info msg="StartContainer for \"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4\"" Jan 30 13:46:30.260064 containerd[1578]: time="2025-01-30T13:46:30.259880617Z" level=info msg="StartContainer for \"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4\" returns successfully" Jan 30 13:46:30.283790 containerd[1578]: time="2025-01-30T13:46:30.283731348Z" level=info msg="shim disconnected" id=8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4 namespace=k8s.io Jan 30 13:46:30.283949 containerd[1578]: time="2025-01-30T13:46:30.283792303Z" level=warning msg="cleaning up after shim disconnected" id=8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4 namespace=k8s.io Jan 30 13:46:30.283949 containerd[1578]: time="2025-01-30T13:46:30.283802061Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:46:31.143370 kubelet[2747]: E0130 13:46:31.142936 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:31.143370 kubelet[2747]: E0130 13:46:31.143014 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:31.147047 containerd[1578]: time="2025-01-30T13:46:31.146975500Z" level=info msg="CreateContainer within sandbox 
\"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:46:31.164418 containerd[1578]: time="2025-01-30T13:46:31.164368023Z" level=info msg="CreateContainer within sandbox \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c\"" Jan 30 13:46:31.164922 containerd[1578]: time="2025-01-30T13:46:31.164891689Z" level=info msg="StartContainer for \"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c\"" Jan 30 13:46:31.250025 containerd[1578]: time="2025-01-30T13:46:31.249980521Z" level=info msg="StartContainer for \"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c\" returns successfully" Jan 30 13:46:31.273953 containerd[1578]: time="2025-01-30T13:46:31.273886924Z" level=info msg="shim disconnected" id=ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c namespace=k8s.io Jan 30 13:46:31.273953 containerd[1578]: time="2025-01-30T13:46:31.273947558Z" level=warning msg="cleaning up after shim disconnected" id=ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c namespace=k8s.io Jan 30 13:46:31.273953 containerd[1578]: time="2025-01-30T13:46:31.273956374Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:46:31.730913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c-rootfs.mount: Deactivated successfully. Jan 30 13:46:32.146438 kubelet[2747]: E0130 13:46:32.146414 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:32.148247 containerd[1578]: time="2025-01-30T13:46:32.148202237Z" level=info msg="CreateContainer within sandbox \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:46:32.214336 containerd[1578]: time="2025-01-30T13:46:32.214261797Z" level=info msg="CreateContainer within sandbox \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\"" Jan 30 13:46:32.214885 containerd[1578]: time="2025-01-30T13:46:32.214845998Z" level=info msg="StartContainer for \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\"" Jan 30 13:46:32.300842 containerd[1578]: time="2025-01-30T13:46:32.300787079Z" level=info msg="StartContainer for \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\" returns successfully" Jan 30 13:46:32.463551 kubelet[2747]: I0130 13:46:32.463455 2747 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:46:32.653936 kubelet[2747]: I0130 13:46:32.653348 2747 topology_manager.go:215] "Topology Admit Handler" podUID="c8184349-e3fc-46a1-8304-e2aac81970c2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7jfmr" Jan 30 13:46:32.682231 kubelet[2747]: I0130 13:46:32.681259 2747 topology_manager.go:215] "Topology Admit Handler" podUID="d1a5c819-2805-475c-bbc9-bd1ac5386347" podNamespace="kube-system" podName="coredns-7db6d8ff4d-26bxv" Jan 30 13:46:32.785602 kubelet[2747]: I0130 13:46:32.785444 2747 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfcjj\" (UniqueName: \"kubernetes.io/projected/d1a5c819-2805-475c-bbc9-bd1ac5386347-kube-api-access-kfcjj\") pod \"coredns-7db6d8ff4d-26bxv\" (UID: \"d1a5c819-2805-475c-bbc9-bd1ac5386347\") " pod="kube-system/coredns-7db6d8ff4d-26bxv" Jan 30 13:46:32.785602 kubelet[2747]: I0130 13:46:32.785481 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26btr\" (UniqueName: \"kubernetes.io/projected/c8184349-e3fc-46a1-8304-e2aac81970c2-kube-api-access-26btr\") pod \"coredns-7db6d8ff4d-7jfmr\" (UID: \"c8184349-e3fc-46a1-8304-e2aac81970c2\") " pod="kube-system/coredns-7db6d8ff4d-7jfmr" Jan 30 13:46:32.785602 kubelet[2747]: I0130 13:46:32.785504 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1a5c819-2805-475c-bbc9-bd1ac5386347-config-volume\") pod \"coredns-7db6d8ff4d-26bxv\" (UID: \"d1a5c819-2805-475c-bbc9-bd1ac5386347\") " pod="kube-system/coredns-7db6d8ff4d-26bxv" Jan 30 13:46:32.785602 kubelet[2747]: I0130 13:46:32.785525 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8184349-e3fc-46a1-8304-e2aac81970c2-config-volume\") pod \"coredns-7db6d8ff4d-7jfmr\" (UID: \"c8184349-e3fc-46a1-8304-e2aac81970c2\") " pod="kube-system/coredns-7db6d8ff4d-7jfmr" Jan 30 13:46:32.959780 kubelet[2747]: E0130 13:46:32.959741 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:32.960326 containerd[1578]: time="2025-01-30T13:46:32.960276552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7jfmr,Uid:c8184349-e3fc-46a1-8304-e2aac81970c2,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:32.986935 kubelet[2747]: E0130 13:46:32.986888 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:32.987463 containerd[1578]: time="2025-01-30T13:46:32.987411552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-26bxv,Uid:d1a5c819-2805-475c-bbc9-bd1ac5386347,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:33.151600 kubelet[2747]: E0130 13:46:33.151536 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:33.227725 kubelet[2747]: I0130 13:46:33.227627 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l4jd7" podStartSLOduration=6.453752885 podStartE2EDuration="12.227607493s" podCreationTimestamp="2025-01-30 13:46:21 +0000 UTC" firstStartedPulling="2025-01-30 13:46:21.945882467 +0000 UTC m=+15.958990519" lastFinishedPulling="2025-01-30 13:46:27.719737085 +0000 UTC m=+21.732845127" observedRunningTime="2025-01-30 13:46:33.227353796 +0000 UTC m=+27.240461838" watchObservedRunningTime="2025-01-30 13:46:33.227607493 +0000 UTC m=+27.240715535" Jan 30 13:46:33.231043 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:34958.service - OpenSSH per-connection server daemon (10.0.0.1:34958). 
Jan 30 13:46:33.269270 sshd[3550]: Accepted publickey for core from 10.0.0.1 port 34958 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:33.271257 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:33.275756 systemd-logind[1550]: New session 8 of user core. Jan 30 13:46:33.284203 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:46:33.409940 sshd[3550]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:33.414235 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:34958.service: Deactivated successfully. Jan 30 13:46:33.416631 systemd-logind[1550]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:46:33.416657 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:46:33.417834 systemd-logind[1550]: Removed session 8. Jan 30 13:46:34.152899 kubelet[2747]: E0130 13:46:34.152860 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:34.393114 systemd-networkd[1242]: cilium_host: Link UP Jan 30 13:46:34.393291 systemd-networkd[1242]: cilium_net: Link UP Jan 30 13:46:34.395266 systemd-networkd[1242]: cilium_net: Gained carrier Jan 30 13:46:34.395504 systemd-networkd[1242]: cilium_host: Gained carrier Jan 30 13:46:34.395650 systemd-networkd[1242]: cilium_net: Gained IPv6LL Jan 30 13:46:34.395845 systemd-networkd[1242]: cilium_host: Gained IPv6LL Jan 30 13:46:34.498536 systemd-networkd[1242]: cilium_vxlan: Link UP Jan 30 13:46:34.498543 systemd-networkd[1242]: cilium_vxlan: Gained carrier Jan 30 13:46:34.699743 kernel: NET: Registered PF_ALG protocol family Jan 30 13:46:35.155119 kubelet[2747]: E0130 13:46:35.155078 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:35.337615 systemd-networkd[1242]: lxc_health: Link UP Jan 30 13:46:35.348603 systemd-networkd[1242]: lxc_health: Gained carrier Jan 30 13:46:35.772464 systemd-networkd[1242]: lxc01c0c5e8ebe1: Link UP Jan 30 13:46:35.778495 systemd-networkd[1242]: lxc92da2a75f4c3: Link UP Jan 30 13:46:35.786742 kernel: eth0: renamed from tmp82ea7 Jan 30 13:46:35.796031 systemd-networkd[1242]: lxc92da2a75f4c3: Gained carrier Jan 30 13:46:35.798858 kernel: eth0: renamed from tmp047cf Jan 30 13:46:35.804853 systemd-networkd[1242]: lxc01c0c5e8ebe1: Gained carrier Jan 30 13:46:36.017869 systemd-networkd[1242]: cilium_vxlan: Gained IPv6LL Jan 30 13:46:36.156274 kubelet[2747]: E0130 13:46:36.156231 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:36.465922 systemd-networkd[1242]: lxc_health: Gained IPv6LL Jan 30 13:46:36.978867 systemd-networkd[1242]: lxc01c0c5e8ebe1: Gained IPv6LL Jan 30 13:46:37.169849 systemd-networkd[1242]: lxc92da2a75f4c3: Gained IPv6LL Jan 30 13:46:38.417941 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:34970.service - OpenSSH per-connection server daemon (10.0.0.1:34970). Jan 30 13:46:38.455210 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 34970 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:38.456781 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:38.460754 systemd-logind[1550]: New session 9 of user core. 
Jan 30 13:46:38.464021 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:46:38.576553 sshd[3969]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:38.581164 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:46:38.582060 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:34970.service: Deactivated successfully. Jan 30 13:46:38.585043 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:46:38.585943 systemd-logind[1550]: Removed session 9. Jan 30 13:46:39.065981 containerd[1578]: time="2025-01-30T13:46:39.065889281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:39.065981 containerd[1578]: time="2025-01-30T13:46:39.065941949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:39.065981 containerd[1578]: time="2025-01-30T13:46:39.065954082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:39.066676 containerd[1578]: time="2025-01-30T13:46:39.066513795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:39.095766 containerd[1578]: time="2025-01-30T13:46:39.095435774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:39.095766 containerd[1578]: time="2025-01-30T13:46:39.095481580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:39.095766 containerd[1578]: time="2025-01-30T13:46:39.095495586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:39.095766 containerd[1578]: time="2025-01-30T13:46:39.095579483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:39.102097 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:46:39.120727 systemd-resolved[1464]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:46:39.129206 containerd[1578]: time="2025-01-30T13:46:39.129177595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7jfmr,Uid:c8184349-e3fc-46a1-8304-e2aac81970c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"82ea712a56cec8cbe3933b393c7231623f7060824eb8a03a1997d49febbfd945\"" Jan 30 13:46:39.130328 kubelet[2747]: E0130 13:46:39.130299 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:39.132307 containerd[1578]: time="2025-01-30T13:46:39.132204596Z" level=info msg="CreateContainer within sandbox \"82ea712a56cec8cbe3933b393c7231623f7060824eb8a03a1997d49febbfd945\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:46:39.144622 containerd[1578]: time="2025-01-30T13:46:39.144593068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-26bxv,Uid:d1a5c819-2805-475c-bbc9-bd1ac5386347,Namespace:kube-system,Attempt:0,} returns sandbox id \"047cf39d4fbf8a340d0075f4a8b1a475a68e8043289262aa58aee411da0aaeba\"" Jan 30 13:46:39.145461 kubelet[2747]: E0130 13:46:39.145440 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:39.147142 containerd[1578]: time="2025-01-30T13:46:39.147104731Z" level=info msg="CreateContainer within sandbox \"047cf39d4fbf8a340d0075f4a8b1a475a68e8043289262aa58aee411da0aaeba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:46:39.450351 containerd[1578]: time="2025-01-30T13:46:39.450308583Z" level=info msg="CreateContainer within sandbox \"82ea712a56cec8cbe3933b393c7231623f7060824eb8a03a1997d49febbfd945\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21975d1483c13537675871ec0e9a71dc4d16f69746943067a38f5bb53caf0c3e\"" Jan 30 13:46:39.450944 containerd[1578]: time="2025-01-30T13:46:39.450781602Z" level=info msg="StartContainer for \"21975d1483c13537675871ec0e9a71dc4d16f69746943067a38f5bb53caf0c3e\"" Jan 30 13:46:39.456635 containerd[1578]: time="2025-01-30T13:46:39.456578282Z" level=info msg="CreateContainer within sandbox \"047cf39d4fbf8a340d0075f4a8b1a475a68e8043289262aa58aee411da0aaeba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdc92461da7f16c34bc15c55323d9dc28b080bdb39aecacf73740f4689a48be6\"" Jan 30 13:46:39.457293 containerd[1578]: time="2025-01-30T13:46:39.457129437Z" level=info msg="StartContainer for \"fdc92461da7f16c34bc15c55323d9dc28b080bdb39aecacf73740f4689a48be6\"" Jan 30 13:46:39.508844 containerd[1578]: time="2025-01-30T13:46:39.508799757Z" level=info msg="StartContainer for \"21975d1483c13537675871ec0e9a71dc4d16f69746943067a38f5bb53caf0c3e\" returns successfully" Jan 30 13:46:39.508979 containerd[1578]: time="2025-01-30T13:46:39.508812651Z" level=info msg="StartContainer for \"fdc92461da7f16c34bc15c55323d9dc28b080bdb39aecacf73740f4689a48be6\" returns successfully" Jan 30 13:46:40.167354 kubelet[2747]: E0130 13:46:40.167319 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:40.169449 kubelet[2747]: E0130 13:46:40.169219 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:40.175745 kubelet[2747]: I0130 13:46:40.175679 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-26bxv" podStartSLOduration=19.175661149 podStartE2EDuration="19.175661149s" podCreationTimestamp="2025-01-30 13:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:40.175483004 +0000 UTC m=+34.188591056" watchObservedRunningTime="2025-01-30 13:46:40.175661149 +0000 UTC m=+34.188769191" Jan 30 13:46:40.194645 kubelet[2747]: I0130 13:46:40.194572 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7jfmr" podStartSLOduration=19.194550279 podStartE2EDuration="19.194550279s" podCreationTimestamp="2025-01-30 13:46:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:40.194387603 +0000 UTC m=+34.207495645" watchObservedRunningTime="2025-01-30 13:46:40.194550279 +0000 UTC m=+34.207658321" Jan 30 13:46:41.171368 kubelet[2747]: E0130 13:46:41.171331 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:41.171881 kubelet[2747]: E0130 13:46:41.171509 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:42.172914 kubelet[2747]: E0130 13:46:42.172889 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:42.173354 kubelet[2747]: E0130 13:46:42.172889 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:43.585991 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:33064.service - OpenSSH per-connection server daemon (10.0.0.1:33064). Jan 30 13:46:43.623605 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 33064 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:43.625783 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:43.630037 systemd-logind[1550]: New session 10 of user core. Jan 30 13:46:43.639970 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:46:43.761667 sshd[4158]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:43.765774 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:33064.service: Deactivated successfully. Jan 30 13:46:43.768484 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:46:43.768566 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:46:43.769815 systemd-logind[1550]: Removed session 10. 
Jan 30 13:46:44.001572 kubelet[2747]: I0130 13:46:44.001542 2747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:46:44.002390 kubelet[2747]: E0130 13:46:44.002272 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:44.177237 kubelet[2747]: E0130 13:46:44.177201 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:48.772000 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:33074.service - OpenSSH per-connection server daemon (10.0.0.1:33074). Jan 30 13:46:48.805185 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 33074 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:48.806957 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:48.811106 systemd-logind[1550]: New session 11 of user core. Jan 30 13:46:48.821067 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:46:48.924611 sshd[4174]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:48.933954 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:33082.service - OpenSSH per-connection server daemon (10.0.0.1:33082). Jan 30 13:46:48.934697 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:33074.service: Deactivated successfully. Jan 30 13:46:48.938095 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:46:48.939138 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:46:48.940104 systemd-logind[1550]: Removed session 11. Jan 30 13:46:48.969439 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 33082 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:48.970958 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:48.974651 systemd-logind[1550]: New session 12 of user core. Jan 30 13:46:48.984984 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:46:49.139647 sshd[4188]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:49.146048 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:33088.service - OpenSSH per-connection server daemon (10.0.0.1:33088). Jan 30 13:46:49.147896 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:33082.service: Deactivated successfully. Jan 30 13:46:49.154249 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:46:49.156960 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:46:49.158913 systemd-logind[1550]: Removed session 12. Jan 30 13:46:49.180102 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 33088 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:49.181636 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:49.186057 systemd-logind[1550]: New session 13 of user core. Jan 30 13:46:49.194991 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:46:49.301435 sshd[4201]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:49.304647 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:33088.service: Deactivated successfully. Jan 30 13:46:49.308790 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit. 
Jan 30 13:46:49.309040 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:46:49.310171 systemd-logind[1550]: Removed session 13. Jan 30 13:46:54.317938 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:35872.service - OpenSSH per-connection server daemon (10.0.0.1:35872). Jan 30 13:46:54.347142 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 35872 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:54.348554 sshd[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:54.352586 systemd-logind[1550]: New session 14 of user core. Jan 30 13:46:54.362006 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:46:54.468844 sshd[4223]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:54.472903 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:35872.service: Deactivated successfully. Jan 30 13:46:54.475462 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:46:54.476494 systemd-logind[1550]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:46:54.477442 systemd-logind[1550]: Removed session 14. Jan 30 13:46:59.476973 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:35880.service - OpenSSH per-connection server daemon (10.0.0.1:35880). Jan 30 13:46:59.509561 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 35880 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:59.511358 sshd[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:59.515271 systemd-logind[1550]: New session 15 of user core. Jan 30 13:46:59.524976 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:46:59.632838 sshd[4239]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:59.639945 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:35894.service - OpenSSH per-connection server daemon (10.0.0.1:35894). Jan 30 13:46:59.640517 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:35880.service: Deactivated successfully. Jan 30 13:46:59.642880 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:46:59.644859 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:46:59.646154 systemd-logind[1550]: Removed session 15. Jan 30 13:46:59.673456 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 35894 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:59.675065 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:59.678922 systemd-logind[1550]: New session 16 of user core. Jan 30 13:46:59.686003 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:46:59.857474 sshd[4251]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:59.873053 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:35896.service - OpenSSH per-connection server daemon (10.0.0.1:35896). Jan 30 13:46:59.873597 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:35894.service: Deactivated successfully. Jan 30 13:46:59.877446 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:46:59.878638 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:46:59.880151 systemd-logind[1550]: Removed session 16. 
Jan 30 13:46:59.914322 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 35896 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:46:59.916204 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:59.920377 systemd-logind[1550]: New session 17 of user core. Jan 30 13:46:59.925092 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:47:01.236356 sshd[4265]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:01.252074 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:43600.service - OpenSSH per-connection server daemon (10.0.0.1:43600). Jan 30 13:47:01.252847 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:35896.service: Deactivated successfully. Jan 30 13:47:01.256385 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:47:01.261439 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:47:01.262847 systemd-logind[1550]: Removed session 17. Jan 30 13:47:01.289934 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 43600 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:47:01.291637 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:01.296190 systemd-logind[1550]: New session 18 of user core. Jan 30 13:47:01.305978 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:47:01.537737 sshd[4288]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:01.546987 systemd[1]: Started sshd@18-10.0.0.108:22-10.0.0.1:43608.service - OpenSSH per-connection server daemon (10.0.0.1:43608). Jan 30 13:47:01.547662 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:43600.service: Deactivated successfully. Jan 30 13:47:01.549863 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:47:01.551501 systemd-logind[1550]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:47:01.553238 systemd-logind[1550]: Removed session 18. Jan 30 13:47:01.576644 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 43608 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:47:01.578591 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:01.583381 systemd-logind[1550]: New session 19 of user core. Jan 30 13:47:01.592042 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:47:01.700361 sshd[4302]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:01.703924 systemd[1]: sshd@18-10.0.0.108:22-10.0.0.1:43608.service: Deactivated successfully. Jan 30 13:47:01.706022 systemd-logind[1550]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:47:01.706158 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:47:01.707009 systemd-logind[1550]: Removed session 19. Jan 30 13:47:06.713925 systemd[1]: Started sshd@19-10.0.0.108:22-10.0.0.1:43622.service - OpenSSH per-connection server daemon (10.0.0.1:43622). Jan 30 13:47:06.743864 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 43622 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:47:06.745342 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:06.749097 systemd-logind[1550]: New session 20 of user core. Jan 30 13:47:06.758972 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 30 13:47:06.859556 sshd[4321]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:06.863646 systemd[1]: sshd@19-10.0.0.108:22-10.0.0.1:43622.service: Deactivated successfully. Jan 30 13:47:06.866083 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:47:06.866140 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:47:06.867119 systemd-logind[1550]: Removed session 20. Jan 30 13:47:11.870059 systemd[1]: Started sshd@20-10.0.0.108:22-10.0.0.1:50414.service - OpenSSH per-connection server daemon (10.0.0.1:50414). Jan 30 13:47:11.899699 sshd[4339]: Accepted publickey for core from 10.0.0.1 port 50414 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:47:11.901109 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:11.904936 systemd-logind[1550]: New session 21 of user core. Jan 30 13:47:11.914963 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:47:12.019997 sshd[4339]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:12.023334 systemd[1]: sshd@20-10.0.0.108:22-10.0.0.1:50414.service: Deactivated successfully. Jan 30 13:47:12.025464 systemd-logind[1550]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:47:12.025550 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:47:12.026349 systemd-logind[1550]: Removed session 21. Jan 30 13:47:17.030924 systemd[1]: Started sshd@21-10.0.0.108:22-10.0.0.1:50416.service - OpenSSH per-connection server daemon (10.0.0.1:50416). Jan 30 13:47:17.060373 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 50416 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:47:17.061728 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:17.065081 systemd-logind[1550]: New session 22 of user core. Jan 30 13:47:17.076937 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:47:17.175920 sshd[4355]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:17.179474 systemd[1]: sshd@21-10.0.0.108:22-10.0.0.1:50416.service: Deactivated successfully. Jan 30 13:47:17.181696 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:47:17.182456 systemd-logind[1550]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:47:17.183367 systemd-logind[1550]: Removed session 22. Jan 30 13:47:22.193014 systemd[1]: Started sshd@22-10.0.0.108:22-10.0.0.1:33668.service - OpenSSH per-connection server daemon (10.0.0.1:33668). Jan 30 13:47:22.223734 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 33668 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:47:22.225142 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:22.228872 systemd-logind[1550]: New session 23 of user core. Jan 30 13:47:22.239982 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:47:22.337102 sshd[4372]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:22.346924 systemd[1]: Started sshd@23-10.0.0.108:22-10.0.0.1:33678.service - OpenSSH per-connection server daemon (10.0.0.1:33678). Jan 30 13:47:22.347360 systemd[1]: sshd@22-10.0.0.108:22-10.0.0.1:33668.service: Deactivated successfully. Jan 30 13:47:22.350946 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:47:22.351854 systemd-logind[1550]: Session 23 logged out. Waiting for processes to exit. 
Jan 30 13:47:22.352799 systemd-logind[1550]: Removed session 23. Jan 30 13:47:22.377161 sshd[4384]: Accepted publickey for core from 10.0.0.1 port 33678 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:47:22.378476 sshd[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:22.381926 systemd-logind[1550]: New session 24 of user core. Jan 30 13:47:22.390966 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:47:23.797007 containerd[1578]: time="2025-01-30T13:47:23.796949650Z" level=info msg="StopContainer for \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\" with timeout 30 (s)" Jan 30 13:47:23.801749 containerd[1578]: time="2025-01-30T13:47:23.801014467Z" level=info msg="Stop container \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\" with signal terminated" Jan 30 13:47:23.851902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9-rootfs.mount: Deactivated successfully. Jan 30 13:47:23.854413 containerd[1578]: time="2025-01-30T13:47:23.854316432Z" level=info msg="StopContainer for \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\" with timeout 2 (s)" Jan 30 13:47:23.854522 containerd[1578]: time="2025-01-30T13:47:23.854504913Z" level=info msg="Stop container \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\" with signal terminated" Jan 30 13:47:23.857148 containerd[1578]: time="2025-01-30T13:47:23.857096041Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:47:23.861959 systemd-networkd[1242]: lxc_health: Link DOWN Jan 30 13:47:23.861969 systemd-networkd[1242]: lxc_health: Lost carrier Jan 30 13:47:23.863751 containerd[1578]: time="2025-01-30T13:47:23.863588252Z" level=info msg="shim disconnected" id=ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9 namespace=k8s.io Jan 30 13:47:23.863817 containerd[1578]: time="2025-01-30T13:47:23.863797873Z" level=warning msg="cleaning up after shim disconnected" id=ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9 namespace=k8s.io Jan 30 13:47:23.863861 containerd[1578]: time="2025-01-30T13:47:23.863813313Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:23.881361 containerd[1578]: time="2025-01-30T13:47:23.881307696Z" level=info msg="StopContainer for \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\" returns successfully" Jan 30 13:47:23.882013 containerd[1578]: time="2025-01-30T13:47:23.881986905Z" level=info msg="StopPodSandbox for \"eb5b678de1420057e3eb1258bf45c63c92a057225aca56e2310e6ffe785c48d8\"" Jan 30 13:47:23.882080 containerd[1578]: time="2025-01-30T13:47:23.882022915Z" level=info msg="Container to stop \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:47:23.884394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb5b678de1420057e3eb1258bf45c63c92a057225aca56e2310e6ffe785c48d8-shm.mount: Deactivated successfully. Jan 30 13:47:23.912024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb5b678de1420057e3eb1258bf45c63c92a057225aca56e2310e6ffe785c48d8-rootfs.mount: Deactivated successfully. 
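The StopContainer entries above spell out containerd's stop contract: send SIGTERM ("with signal terminated"), wait out the per-call timeout (30 s for the cilium-operator container ce85e30e..., 2 s for the cilium-agent 64221c6f...), and escalate only if the process is still alive. The lxc_health "Link DOWN" / "Lost carrier" events are a side effect of the agent going away, and the "shim disconnected" plus "...mount: Deactivated successfully" entries are the runtime and systemd reaping the exited task's shim, rootfs and shm mounts. A generic term-then-kill sketch of that pattern (illustrative Python, not containerd's implementation):

    import os, signal, time

    def stop_gracefully(pid, timeout_s):
        """SIGTERM, wait up to timeout_s, then SIGKILL: the contract the
        StopContainer entries above describe."""
        try:
            os.kill(pid, signal.SIGTERM)
        except ProcessLookupError:
            return                        # already gone
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            try:
                os.kill(pid, 0)           # signal 0 only probes for existence
            except ProcessLookupError:
                return                    # exited within the grace period
            time.sleep(0.05)
        try:
            os.kill(pid, signal.SIGKILL)  # grace period expired: force it
        except ProcessLookupError:
            pass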
Jan 30 13:47:23.915106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931-rootfs.mount: Deactivated successfully. Jan 30 13:47:23.926728 containerd[1578]: time="2025-01-30T13:47:23.926663357Z" level=info msg="shim disconnected" id=eb5b678de1420057e3eb1258bf45c63c92a057225aca56e2310e6ffe785c48d8 namespace=k8s.io Jan 30 13:47:23.926728 containerd[1578]: time="2025-01-30T13:47:23.926728441Z" level=warning msg="cleaning up after shim disconnected" id=eb5b678de1420057e3eb1258bf45c63c92a057225aca56e2310e6ffe785c48d8 namespace=k8s.io Jan 30 13:47:23.926850 containerd[1578]: time="2025-01-30T13:47:23.926737630Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:23.926926 containerd[1578]: time="2025-01-30T13:47:23.926850876Z" level=info msg="shim disconnected" id=64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931 namespace=k8s.io Jan 30 13:47:23.926992 containerd[1578]: time="2025-01-30T13:47:23.926926841Z" level=warning msg="cleaning up after shim disconnected" id=64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931 namespace=k8s.io Jan 30 13:47:23.926992 containerd[1578]: time="2025-01-30T13:47:23.926937521Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:23.940344 containerd[1578]: time="2025-01-30T13:47:23.940302173Z" level=info msg="TearDown network for sandbox \"eb5b678de1420057e3eb1258bf45c63c92a057225aca56e2310e6ffe785c48d8\" successfully" Jan 30 13:47:23.940344 containerd[1578]: time="2025-01-30T13:47:23.940334825Z" level=info msg="StopPodSandbox for \"eb5b678de1420057e3eb1258bf45c63c92a057225aca56e2310e6ffe785c48d8\" returns successfully" Jan 30 13:47:23.993840 containerd[1578]: time="2025-01-30T13:47:23.993771107Z" level=info msg="StopContainer for \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\" returns successfully" Jan 30 13:47:23.994271 containerd[1578]: time="2025-01-30T13:47:23.994085588Z" level=info msg="StopPodSandbox for \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\"" Jan 30 13:47:23.994271 containerd[1578]: time="2025-01-30T13:47:23.994117220Z" level=info msg="Container to stop \"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:47:23.994271 containerd[1578]: time="2025-01-30T13:47:23.994132699Z" level=info msg="Container to stop \"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:47:23.994271 containerd[1578]: time="2025-01-30T13:47:23.994142318Z" level=info msg="Container to stop \"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:47:23.994271 containerd[1578]: time="2025-01-30T13:47:23.994151976Z" level=info msg="Container to stop \"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:47:23.994271 containerd[1578]: time="2025-01-30T13:47:23.994160682Z" level=info msg="Container to stop \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:47:24.023238 containerd[1578]: time="2025-01-30T13:47:24.023179273Z" level=info msg="shim disconnected" 
id=4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14 namespace=k8s.io Jan 30 13:47:24.023238 containerd[1578]: time="2025-01-30T13:47:24.023230371Z" level=warning msg="cleaning up after shim disconnected" id=4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14 namespace=k8s.io Jan 30 13:47:24.023238 containerd[1578]: time="2025-01-30T13:47:24.023239418Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:24.036869 containerd[1578]: time="2025-01-30T13:47:24.036813276Z" level=info msg="TearDown network for sandbox \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" successfully" Jan 30 13:47:24.036869 containerd[1578]: time="2025-01-30T13:47:24.036846940Z" level=info msg="StopPodSandbox for \"4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14\" returns successfully" Jan 30 13:47:24.168364 kubelet[2747]: I0130 13:47:24.168334 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cni-path\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.168364 kubelet[2747]: I0130 13:47:24.168367 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-xtables-lock\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.168905 kubelet[2747]: I0130 13:47:24.168389 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86c18653-e014-495b-998c-3a522d5a8eeb-hubble-tls\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.168905 kubelet[2747]: I0130 13:47:24.168402 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-host-proc-sys-net\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.168905 kubelet[2747]: I0130 13:47:24.168418 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdl9b\" (UniqueName: \"kubernetes.io/projected/86c18653-e014-495b-998c-3a522d5a8eeb-kube-api-access-jdl9b\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.168905 kubelet[2747]: I0130 13:47:24.168436 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-config-path\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.168905 kubelet[2747]: I0130 13:47:24.168450 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-cgroup\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.168905 kubelet[2747]: I0130 13:47:24.168465 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-hostproc\") 
pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.169048 kubelet[2747]: I0130 13:47:24.168478 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-lib-modules\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.169048 kubelet[2747]: I0130 13:47:24.168492 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-host-proc-sys-kernel\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.169048 kubelet[2747]: I0130 13:47:24.168505 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-bpf-maps\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.169048 kubelet[2747]: I0130 13:47:24.168524 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86c18653-e014-495b-998c-3a522d5a8eeb-clustermesh-secrets\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.169048 kubelet[2747]: I0130 13:47:24.168542 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d699227b-dde2-440c-8aa5-1301fce7f0cb-cilium-config-path\") pod \"d699227b-dde2-440c-8aa5-1301fce7f0cb\" (UID: \"d699227b-dde2-440c-8aa5-1301fce7f0cb\") " Jan 30 13:47:24.169048 kubelet[2747]: I0130 13:47:24.168558 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-run\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.169188 kubelet[2747]: I0130 13:47:24.168572 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-etc-cni-netd\") pod \"86c18653-e014-495b-998c-3a522d5a8eeb\" (UID: \"86c18653-e014-495b-998c-3a522d5a8eeb\") " Jan 30 13:47:24.169188 kubelet[2747]: I0130 13:47:24.168588 2747 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c47zt\" (UniqueName: \"kubernetes.io/projected/d699227b-dde2-440c-8aa5-1301fce7f0cb-kube-api-access-c47zt\") pod \"d699227b-dde2-440c-8aa5-1301fce7f0cb\" (UID: \"d699227b-dde2-440c-8aa5-1301fce7f0cb\") " Jan 30 13:47:24.169429 kubelet[2747]: I0130 13:47:24.168485 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.169429 kubelet[2747]: I0130 13:47:24.169416 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.169429 kubelet[2747]: I0130 13:47:24.168519 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cni-path" (OuterVolumeSpecName: "cni-path") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.169429 kubelet[2747]: I0130 13:47:24.168539 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.169429 kubelet[2747]: I0130 13:47:24.169390 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.169570 kubelet[2747]: I0130 13:47:24.169395 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.169570 kubelet[2747]: I0130 13:47:24.169405 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.169570 kubelet[2747]: I0130 13:47:24.169463 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.172099 kubelet[2747]: I0130 13:47:24.172025 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.172147 kubelet[2747]: I0130 13:47:24.172130 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:47:24.172251 kubelet[2747]: I0130 13:47:24.172176 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-hostproc" (OuterVolumeSpecName: "hostproc") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:47:24.173115 kubelet[2747]: I0130 13:47:24.173095 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c18653-e014-495b-998c-3a522d5a8eeb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:47:24.173220 kubelet[2747]: I0130 13:47:24.173198 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c18653-e014-495b-998c-3a522d5a8eeb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:47:24.173735 kubelet[2747]: I0130 13:47:24.173685 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d699227b-dde2-440c-8aa5-1301fce7f0cb-kube-api-access-c47zt" (OuterVolumeSpecName: "kube-api-access-c47zt") pod "d699227b-dde2-440c-8aa5-1301fce7f0cb" (UID: "d699227b-dde2-440c-8aa5-1301fce7f0cb"). InnerVolumeSpecName "kube-api-access-c47zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:47:24.174161 kubelet[2747]: I0130 13:47:24.174112 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c18653-e014-495b-998c-3a522d5a8eeb-kube-api-access-jdl9b" (OuterVolumeSpecName: "kube-api-access-jdl9b") pod "86c18653-e014-495b-998c-3a522d5a8eeb" (UID: "86c18653-e014-495b-998c-3a522d5a8eeb"). InnerVolumeSpecName "kube-api-access-jdl9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:47:24.174869 kubelet[2747]: I0130 13:47:24.174844 2747 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d699227b-dde2-440c-8aa5-1301fce7f0cb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d699227b-dde2-440c-8aa5-1301fce7f0cb" (UID: "d699227b-dde2-440c-8aa5-1301fce7f0cb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:47:24.245551 kubelet[2747]: I0130 13:47:24.245525 2747 scope.go:117] "RemoveContainer" containerID="64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931" Jan 30 13:47:24.247277 containerd[1578]: time="2025-01-30T13:47:24.246597902Z" level=info msg="RemoveContainer for \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\"" Jan 30 13:47:24.256785 containerd[1578]: time="2025-01-30T13:47:24.256708818Z" level=info msg="RemoveContainer for \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\" returns successfully" Jan 30 13:47:24.257043 kubelet[2747]: I0130 13:47:24.257007 2747 scope.go:117] "RemoveContainer" containerID="ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c" Jan 30 13:47:24.258174 containerd[1578]: time="2025-01-30T13:47:24.258138472Z" level=info msg="RemoveContainer for \"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c\"" Jan 30 13:47:24.262934 containerd[1578]: time="2025-01-30T13:47:24.262612678Z" level=info msg="RemoveContainer for \"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c\" returns successfully" Jan 30 13:47:24.263064 kubelet[2747]: I0130 13:47:24.262806 2747 scope.go:117] "RemoveContainer" containerID="8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4" Jan 30 13:47:24.264428 containerd[1578]: time="2025-01-30T13:47:24.264394405Z" level=info msg="RemoveContainer for \"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4\"" Jan 30 13:47:24.268823 kubelet[2747]: I0130 13:47:24.268777 2747 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d699227b-dde2-440c-8aa5-1301fce7f0cb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.268823 kubelet[2747]: I0130 13:47:24.268799 2747 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.268823 kubelet[2747]: I0130 13:47:24.268807 2747 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.268823 kubelet[2747]: I0130 13:47:24.268814 2747 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c47zt\" (UniqueName: \"kubernetes.io/projected/d699227b-dde2-440c-8aa5-1301fce7f0cb-kube-api-access-c47zt\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.268823 kubelet[2747]: I0130 13:47:24.268824 2747 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.268823 kubelet[2747]: I0130 13:47:24.268832 2747 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.268823 kubelet[2747]: I0130 13:47:24.268839 2747 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.269078 kubelet[2747]: I0130 13:47:24.268848 2747 reconciler_common.go:289] "Volume 
detached for volume \"kube-api-access-jdl9b\" (UniqueName: \"kubernetes.io/projected/86c18653-e014-495b-998c-3a522d5a8eeb-kube-api-access-jdl9b\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.269078 kubelet[2747]: I0130 13:47:24.268856 2747 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.269078 kubelet[2747]: I0130 13:47:24.268863 2747 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86c18653-e014-495b-998c-3a522d5a8eeb-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.269078 kubelet[2747]: I0130 13:47:24.268871 2747 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.269078 kubelet[2747]: I0130 13:47:24.268878 2747 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.269078 kubelet[2747]: I0130 13:47:24.268886 2747 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.269078 kubelet[2747]: I0130 13:47:24.268900 2747 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.269078 kubelet[2747]: I0130 13:47:24.268907 2747 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86c18653-e014-495b-998c-3a522d5a8eeb-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.269326 kubelet[2747]: I0130 13:47:24.268916 2747 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86c18653-e014-495b-998c-3a522d5a8eeb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 30 13:47:24.269353 containerd[1578]: time="2025-01-30T13:47:24.269222808Z" level=info msg="RemoveContainer for \"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4\" returns successfully" Jan 30 13:47:24.269387 kubelet[2747]: I0130 13:47:24.269372 2747 scope.go:117] "RemoveContainer" containerID="800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f" Jan 30 13:47:24.270387 containerd[1578]: time="2025-01-30T13:47:24.270348732Z" level=info msg="RemoveContainer for \"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f\"" Jan 30 13:47:24.274324 containerd[1578]: time="2025-01-30T13:47:24.274286913Z" level=info msg="RemoveContainer for \"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f\" returns successfully" Jan 30 13:47:24.274464 kubelet[2747]: I0130 13:47:24.274429 2747 scope.go:117] "RemoveContainer" containerID="e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada" Jan 30 13:47:24.275476 containerd[1578]: time="2025-01-30T13:47:24.275446510Z" level=info msg="RemoveContainer for \"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada\"" Jan 30 13:47:24.278933 containerd[1578]: time="2025-01-30T13:47:24.278904333Z" 
level=info msg="RemoveContainer for \"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada\" returns successfully" Jan 30 13:47:24.279105 kubelet[2747]: I0130 13:47:24.279064 2747 scope.go:117] "RemoveContainer" containerID="64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931" Jan 30 13:47:24.279267 containerd[1578]: time="2025-01-30T13:47:24.279227912Z" level=error msg="ContainerStatus for \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\": not found" Jan 30 13:47:24.279437 kubelet[2747]: E0130 13:47:24.279410 2747 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\": not found" containerID="64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931" Jan 30 13:47:24.279531 kubelet[2747]: I0130 13:47:24.279444 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931"} err="failed to get container status \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\": rpc error: code = NotFound desc = an error occurred when try to find container \"64221c6f6540d546e03834f4f66b81fa8a5d5dd32d4b5cc95a67555521fe0931\": not found" Jan 30 13:47:24.279572 kubelet[2747]: I0130 13:47:24.279535 2747 scope.go:117] "RemoveContainer" containerID="ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c" Jan 30 13:47:24.279772 containerd[1578]: time="2025-01-30T13:47:24.279734381Z" level=error msg="ContainerStatus for \"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c\": not found" Jan 30 13:47:24.279875 kubelet[2747]: E0130 13:47:24.279848 2747 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c\": not found" containerID="ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c" Jan 30 13:47:24.279946 kubelet[2747]: I0130 13:47:24.279878 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c"} err="failed to get container status \"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee7833b702d1992c892b4dcd8f2b45c4d6dfa2c5b6d7bf2f7de3f35bdf76453c\": not found" Jan 30 13:47:24.279946 kubelet[2747]: I0130 13:47:24.279900 2747 scope.go:117] "RemoveContainer" containerID="8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4" Jan 30 13:47:24.280098 containerd[1578]: time="2025-01-30T13:47:24.280062357Z" level=error msg="ContainerStatus for \"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4\": not found" Jan 30 13:47:24.280199 kubelet[2747]: E0130 13:47:24.280181 2747 
remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4\": not found" containerID="8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4" Jan 30 13:47:24.280239 kubelet[2747]: I0130 13:47:24.280198 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4"} err="failed to get container status \"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"8329689a606f6c3ab86c1915fa3cfd8443eeb70a5f4b7d7fc429c8301dbe63a4\": not found" Jan 30 13:47:24.280239 kubelet[2747]: I0130 13:47:24.280212 2747 scope.go:117] "RemoveContainer" containerID="800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f" Jan 30 13:47:24.280364 containerd[1578]: time="2025-01-30T13:47:24.280338615Z" level=error msg="ContainerStatus for \"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f\": not found" Jan 30 13:47:24.280465 kubelet[2747]: E0130 13:47:24.280443 2747 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f\": not found" containerID="800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f" Jan 30 13:47:24.280496 kubelet[2747]: I0130 13:47:24.280470 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f"} err="failed to get container status \"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f\": rpc error: code = NotFound desc = an error occurred when try to find container \"800f5850eb7ff1f8518aa49448502b1a7f9d76219f4901179d02e1c045b4a19f\": not found" Jan 30 13:47:24.280496 kubelet[2747]: I0130 13:47:24.280490 2747 scope.go:117] "RemoveContainer" containerID="e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada" Jan 30 13:47:24.280728 containerd[1578]: time="2025-01-30T13:47:24.280679518Z" level=error msg="ContainerStatus for \"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada\": not found" Jan 30 13:47:24.280915 kubelet[2747]: E0130 13:47:24.280880 2747 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada\": not found" containerID="e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada" Jan 30 13:47:24.280950 kubelet[2747]: I0130 13:47:24.280919 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada"} err="failed to get container status \"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"e4773430117f15195a03f5a43de08a30b92075c28edba3b934c90b86203adada\": not found" Jan 30 13:47:24.280950 kubelet[2747]: I0130 13:47:24.280947 2747 scope.go:117] "RemoveContainer" containerID="ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9" Jan 30 13:47:24.281861 containerd[1578]: time="2025-01-30T13:47:24.281838644Z" level=info msg="RemoveContainer for \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\"" Jan 30 13:47:24.284698 containerd[1578]: time="2025-01-30T13:47:24.284670920Z" level=info msg="RemoveContainer for \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\" returns successfully" Jan 30 13:47:24.284840 kubelet[2747]: I0130 13:47:24.284817 2747 scope.go:117] "RemoveContainer" containerID="ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9" Jan 30 13:47:24.285026 containerd[1578]: time="2025-01-30T13:47:24.284983318Z" level=error msg="ContainerStatus for \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\": not found" Jan 30 13:47:24.285101 kubelet[2747]: E0130 13:47:24.285088 2747 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\": not found" containerID="ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9" Jan 30 13:47:24.285137 kubelet[2747]: I0130 13:47:24.285108 2747 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9"} err="failed to get container status \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce85e30e67c34cd825f6b023f67ef91b2f7eba16f338383c779f5eb6c272d2f9\": not found" Jan 30 13:47:24.832069 systemd[1]: var-lib-kubelet-pods-d699227b\x2ddde2\x2d440c\x2d8aa5\x2d1301fce7f0cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc47zt.mount: Deactivated successfully. Jan 30 13:47:24.832289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14-rootfs.mount: Deactivated successfully. Jan 30 13:47:24.832428 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b6ce60d8671c9c8d26c5d5872d45014494d39bb8fded64668c34f11f34e0b14-shm.mount: Deactivated successfully. Jan 30 13:47:24.832574 systemd[1]: var-lib-kubelet-pods-86c18653\x2de014\x2d495b\x2d998c\x2d3a522d5a8eeb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djdl9b.mount: Deactivated successfully. Jan 30 13:47:24.832734 systemd[1]: var-lib-kubelet-pods-86c18653\x2de014\x2d495b\x2d998c\x2d3a522d5a8eeb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 13:47:24.832883 systemd[1]: var-lib-kubelet-pods-86c18653\x2de014\x2d495b\x2d998c\x2d3a522d5a8eeb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 30 13:47:25.072967 kubelet[2747]: E0130 13:47:25.072903 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:25.745559 sshd[4384]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:25.752950 systemd[1]: Started sshd@24-10.0.0.108:22-10.0.0.1:33688.service - OpenSSH per-connection server daemon (10.0.0.1:33688). Jan 30 13:47:25.753407 systemd[1]: sshd@23-10.0.0.108:22-10.0.0.1:33678.service: Deactivated successfully. Jan 30 13:47:25.756165 systemd-logind[1550]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:47:25.757189 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:47:25.758465 systemd-logind[1550]: Removed session 24. Jan 30 13:47:25.787390 sshd[4557]: Accepted publickey for core from 10.0.0.1 port 33688 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:47:25.788902 sshd[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:25.792940 systemd-logind[1550]: New session 25 of user core. Jan 30 13:47:25.806989 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 13:47:26.072509 kubelet[2747]: E0130 13:47:26.072395 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:26.074391 kubelet[2747]: I0130 13:47:26.074353 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86c18653-e014-495b-998c-3a522d5a8eeb" path="/var/lib/kubelet/pods/86c18653-e014-495b-998c-3a522d5a8eeb/volumes" Jan 30 13:47:26.075260 kubelet[2747]: I0130 13:47:26.075232 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d699227b-dde2-440c-8aa5-1301fce7f0cb" path="/var/lib/kubelet/pods/d699227b-dde2-440c-8aa5-1301fce7f0cb/volumes" Jan 30 13:47:26.130912 kubelet[2747]: E0130 13:47:26.130880 2747 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:47:26.290944 sshd[4557]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:26.304384 kubelet[2747]: I0130 13:47:26.304303 2747 topology_manager.go:215] "Topology Admit Handler" podUID="5a20a07f-9074-4f0d-8461-c162abce6121" podNamespace="kube-system" podName="cilium-tbxpc" Jan 30 13:47:26.304384 kubelet[2747]: E0130 13:47:26.304358 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86c18653-e014-495b-998c-3a522d5a8eeb" containerName="mount-cgroup" Jan 30 13:47:26.304384 kubelet[2747]: E0130 13:47:26.304366 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86c18653-e014-495b-998c-3a522d5a8eeb" containerName="apply-sysctl-overwrites" Jan 30 13:47:26.304384 kubelet[2747]: E0130 13:47:26.304374 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86c18653-e014-495b-998c-3a522d5a8eeb" containerName="clean-cilium-state" Jan 30 13:47:26.304384 kubelet[2747]: E0130 13:47:26.304385 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86c18653-e014-495b-998c-3a522d5a8eeb" containerName="mount-bpf-fs" Jan 30 13:47:26.304384 kubelet[2747]: E0130 13:47:26.304390 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="86c18653-e014-495b-998c-3a522d5a8eeb" containerName="cilium-agent" Jan 30 13:47:26.304384 
kubelet[2747]: E0130 13:47:26.304397 2747 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d699227b-dde2-440c-8aa5-1301fce7f0cb" containerName="cilium-operator" Jan 30 13:47:26.304870 kubelet[2747]: I0130 13:47:26.304419 2747 memory_manager.go:354] "RemoveStaleState removing state" podUID="d699227b-dde2-440c-8aa5-1301fce7f0cb" containerName="cilium-operator" Jan 30 13:47:26.304870 kubelet[2747]: I0130 13:47:26.304425 2747 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c18653-e014-495b-998c-3a522d5a8eeb" containerName="cilium-agent" Jan 30 13:47:26.308504 systemd[1]: Started sshd@25-10.0.0.108:22-10.0.0.1:33692.service - OpenSSH per-connection server daemon (10.0.0.1:33692). Jan 30 13:47:26.309291 systemd[1]: sshd@24-10.0.0.108:22-10.0.0.1:33688.service: Deactivated successfully. Jan 30 13:47:26.321870 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:47:26.324550 systemd-logind[1550]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:47:26.330098 systemd-logind[1550]: Removed session 25. Jan 30 13:47:26.359908 sshd[4571]: Accepted publickey for core from 10.0.0.1 port 33692 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:47:26.361630 sshd[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:26.365957 systemd-logind[1550]: New session 26 of user core. Jan 30 13:47:26.372954 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:47:26.425178 sshd[4571]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:26.435931 systemd[1]: Started sshd@26-10.0.0.108:22-10.0.0.1:33702.service - OpenSSH per-connection server daemon (10.0.0.1:33702). Jan 30 13:47:26.436383 systemd[1]: sshd@25-10.0.0.108:22-10.0.0.1:33692.service: Deactivated successfully. Jan 30 13:47:26.439044 systemd-logind[1550]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:47:26.440411 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:47:26.441893 systemd-logind[1550]: Removed session 26. Jan 30 13:47:26.467543 sshd[4580]: Accepted publickey for core from 10.0.0.1 port 33702 ssh2: RSA SHA256:OHll324lP2j8WlAQ+hjipts1Kkp63o3797QuaW9/4NE Jan 30 13:47:26.469291 sshd[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:26.473358 systemd-logind[1550]: New session 27 of user core. 
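Two things worth decoding in the stretch above. First, the Topology Admit Handler entry marks the replacement pod cilium-tbxpc being admitted, and the RemoveStaleState messages, although logged at error level, are routine cleanup: the CPU and memory managers discarding accounting state for containers of the two pods just deleted. Second, the recurring "Nameserver limits exceeded" errors are kubelet's dns.go warning that the node's resolv.conf lists more nameservers than it will pass through; the applied line it reports keeps three (1.1.1.1 1.0.0.1 8.8.8.8) and the rest are omitted. A trivial check for that condition, assuming a stock resolv.conf format (illustrative Python):

    def omitted_nameservers(resolv_conf_text, limit=3):
        """Return the nameserver entries beyond the applied limit,
        i.e. the ones the dns.go warnings above say were omitted."""
        servers = [parts[1]
                   for line in resolv_conf_text.splitlines()
                   if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1]
        return servers[limit:]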
Jan 30 13:47:26.480870 kubelet[2747]: I0130 13:47:26.480831 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a20a07f-9074-4f0d-8461-c162abce6121-cilium-run\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.480939 kubelet[2747]: I0130 13:47:26.480869 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a20a07f-9074-4f0d-8461-c162abce6121-host-proc-sys-net\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.480939 kubelet[2747]: I0130 13:47:26.480893 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a20a07f-9074-4f0d-8461-c162abce6121-xtables-lock\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.480939 kubelet[2747]: I0130 13:47:26.480909 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a20a07f-9074-4f0d-8461-c162abce6121-hubble-tls\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.480939 kubelet[2747]: I0130 13:47:26.480924 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a20a07f-9074-4f0d-8461-c162abce6121-bpf-maps\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.480939 kubelet[2747]: I0130 13:47:26.480939 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a20a07f-9074-4f0d-8461-c162abce6121-cilium-config-path\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.481057 kubelet[2747]: I0130 13:47:26.480956 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a20a07f-9074-4f0d-8461-c162abce6121-clustermesh-secrets\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.481105 kubelet[2747]: I0130 13:47:26.481063 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gjnc\" (UniqueName: \"kubernetes.io/projected/5a20a07f-9074-4f0d-8461-c162abce6121-kube-api-access-2gjnc\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.481156 kubelet[2747]: I0130 13:47:26.481132 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a20a07f-9074-4f0d-8461-c162abce6121-cilium-cgroup\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.481187 kubelet[2747]: I0130 13:47:26.481161 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5a20a07f-9074-4f0d-8461-c162abce6121-cilium-ipsec-secrets\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.481210 kubelet[2747]: I0130 13:47:26.481183 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a20a07f-9074-4f0d-8461-c162abce6121-hostproc\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.481260 kubelet[2747]: I0130 13:47:26.481243 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a20a07f-9074-4f0d-8461-c162abce6121-lib-modules\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.481288 kubelet[2747]: I0130 13:47:26.481268 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a20a07f-9074-4f0d-8461-c162abce6121-host-proc-sys-kernel\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.481311 kubelet[2747]: I0130 13:47:26.481290 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a20a07f-9074-4f0d-8461-c162abce6121-cni-path\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.481334 kubelet[2747]: I0130 13:47:26.481315 2747 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a20a07f-9074-4f0d-8461-c162abce6121-etc-cni-netd\") pod \"cilium-tbxpc\" (UID: \"5a20a07f-9074-4f0d-8461-c162abce6121\") " pod="kube-system/cilium-tbxpc" Jan 30 13:47:26.481956 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 13:47:26.612989 kubelet[2747]: E0130 13:47:26.612872 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:26.614257 containerd[1578]: time="2025-01-30T13:47:26.614219390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tbxpc,Uid:5a20a07f-9074-4f0d-8461-c162abce6121,Namespace:kube-system,Attempt:0,}" Jan 30 13:47:26.640143 containerd[1578]: time="2025-01-30T13:47:26.639980894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:26.640143 containerd[1578]: time="2025-01-30T13:47:26.640040147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:26.640143 containerd[1578]: time="2025-01-30T13:47:26.640052040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:26.640337 containerd[1578]: time="2025-01-30T13:47:26.640157351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:26.678303 containerd[1578]: time="2025-01-30T13:47:26.678251616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tbxpc,Uid:5a20a07f-9074-4f0d-8461-c162abce6121,Namespace:kube-system,Attempt:0,} returns sandbox id \"139d50a6790c416bd3a54e430a86ae7efc153dcf3c07835a00b139e857e9db39\"" Jan 30 13:47:26.679216 kubelet[2747]: E0130 13:47:26.679185 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:26.690629 containerd[1578]: time="2025-01-30T13:47:26.690580399Z" level=info msg="CreateContainer within sandbox \"139d50a6790c416bd3a54e430a86ae7efc153dcf3c07835a00b139e857e9db39\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:47:26.704731 containerd[1578]: time="2025-01-30T13:47:26.704648715Z" level=info msg="CreateContainer within sandbox \"139d50a6790c416bd3a54e430a86ae7efc153dcf3c07835a00b139e857e9db39\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b44158a4161d8383212d4be68d546d680b0a1ead3c818d5d8d8fb0261d0dc321\"" Jan 30 13:47:26.705342 containerd[1578]: time="2025-01-30T13:47:26.705257537Z" level=info msg="StartContainer for \"b44158a4161d8383212d4be68d546d680b0a1ead3c818d5d8d8fb0261d0dc321\"" Jan 30 13:47:26.759906 containerd[1578]: time="2025-01-30T13:47:26.759867895Z" level=info msg="StartContainer for \"b44158a4161d8383212d4be68d546d680b0a1ead3c818d5d8d8fb0261d0dc321\" returns successfully" Jan 30 13:47:26.803300 containerd[1578]: time="2025-01-30T13:47:26.803238078Z" level=info msg="shim disconnected" id=b44158a4161d8383212d4be68d546d680b0a1ead3c818d5d8d8fb0261d0dc321 namespace=k8s.io Jan 30 13:47:26.803300 containerd[1578]: time="2025-01-30T13:47:26.803293123Z" level=warning msg="cleaning up after shim disconnected" id=b44158a4161d8383212d4be68d546d680b0a1ead3c818d5d8d8fb0261d0dc321 namespace=k8s.io Jan 30 13:47:26.803300 containerd[1578]: time="2025-01-30T13:47:26.803303312Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:27.254546 kubelet[2747]: E0130 13:47:27.254511 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:27.256349 containerd[1578]: time="2025-01-30T13:47:27.256291012Z" level=info msg="CreateContainer within sandbox \"139d50a6790c416bd3a54e430a86ae7efc153dcf3c07835a00b139e857e9db39\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:47:27.458726 containerd[1578]: time="2025-01-30T13:47:27.458651191Z" level=info msg="CreateContainer within sandbox \"139d50a6790c416bd3a54e430a86ae7efc153dcf3c07835a00b139e857e9db39\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d8c72925997491e0714bfa19d26ef906dce88d2538fc23ef494cbe958ce1d8bb\"" Jan 30 13:47:27.459257 containerd[1578]: time="2025-01-30T13:47:27.459209587Z" level=info msg="StartContainer for \"d8c72925997491e0714bfa19d26ef906dce88d2538fc23ef494cbe958ce1d8bb\"" Jan 30 13:47:27.518248 containerd[1578]: time="2025-01-30T13:47:27.518127658Z" level=info msg="StartContainer for \"d8c72925997491e0714bfa19d26ef906dce88d2538fc23ef494cbe958ce1d8bb\" returns successfully" Jan 30 13:47:27.544639 kubelet[2747]: I0130 13:47:27.544568 2747 setters.go:580] "Node became not ready" node="localhost" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:47:27Z","lastTransitionTime":"2025-01-30T13:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 13:47:27.547275 containerd[1578]: time="2025-01-30T13:47:27.547210461Z" level=info msg="shim disconnected" id=d8c72925997491e0714bfa19d26ef906dce88d2538fc23ef494cbe958ce1d8bb namespace=k8s.io Jan 30 13:47:27.547275 containerd[1578]: time="2025-01-30T13:47:27.547271016Z" level=warning msg="cleaning up after shim disconnected" id=d8c72925997491e0714bfa19d26ef906dce88d2538fc23ef494cbe958ce1d8bb namespace=k8s.io Jan 30 13:47:27.547275 containerd[1578]: time="2025-01-30T13:47:27.547283150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:28.257664 kubelet[2747]: E0130 13:47:28.257639 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:28.259623 containerd[1578]: time="2025-01-30T13:47:28.259518486Z" level=info msg="CreateContainer within sandbox \"139d50a6790c416bd3a54e430a86ae7efc153dcf3c07835a00b139e857e9db39\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:47:28.275217 containerd[1578]: time="2025-01-30T13:47:28.275175608Z" level=info msg="CreateContainer within sandbox \"139d50a6790c416bd3a54e430a86ae7efc153dcf3c07835a00b139e857e9db39\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"00012b788342800801ab4f9a39ab8e6ade541dcc6a1d4dba05b9f564b2d5409c\"" Jan 30 13:47:28.275665 containerd[1578]: time="2025-01-30T13:47:28.275642520Z" level=info msg="StartContainer for \"00012b788342800801ab4f9a39ab8e6ade541dcc6a1d4dba05b9f564b2d5409c\"" Jan 30 13:47:28.336511 containerd[1578]: time="2025-01-30T13:47:28.336473619Z" level=info msg="StartContainer for \"00012b788342800801ab4f9a39ab8e6ade541dcc6a1d4dba05b9f564b2d5409c\" returns successfully" Jan 30 13:47:28.360630 containerd[1578]: time="2025-01-30T13:47:28.360561863Z" level=info msg="shim disconnected" id=00012b788342800801ab4f9a39ab8e6ade541dcc6a1d4dba05b9f564b2d5409c namespace=k8s.io Jan 30 13:47:28.360630 containerd[1578]: time="2025-01-30T13:47:28.360620635Z" level=warning msg="cleaning up after shim disconnected" id=00012b788342800801ab4f9a39ab8e6ade541dcc6a1d4dba05b9f564b2d5409c namespace=k8s.io Jan 30 13:47:28.360630 containerd[1578]: time="2025-01-30T13:47:28.360631477Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:28.589015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00012b788342800801ab4f9a39ab8e6ade541dcc6a1d4dba05b9f564b2d5409c-rootfs.mount: Deactivated successfully. Jan 30 13:47:29.260962 kubelet[2747]: E0130 13:47:29.260938 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:29.262515 containerd[1578]: time="2025-01-30T13:47:29.262463334Z" level=info msg="CreateContainer within sandbox \"139d50a6790c416bd3a54e430a86ae7efc153dcf3c07835a00b139e857e9db39\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:47:29.275327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243442389.mount: Deactivated successfully. 
Jan 30 13:47:29.276346 containerd[1578]: time="2025-01-30T13:47:29.276310660Z" level=info msg="CreateContainer within sandbox \"139d50a6790c416bd3a54e430a86ae7efc153dcf3c07835a00b139e857e9db39\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"011e78453d15fa5300c2f45dbff52438b8073775eb2728cb3cfd52b3e5d9bd3f\"" Jan 30 13:47:29.276848 containerd[1578]: time="2025-01-30T13:47:29.276821434Z" level=info msg="StartContainer for \"011e78453d15fa5300c2f45dbff52438b8073775eb2728cb3cfd52b3e5d9bd3f\"" Jan 30 13:47:29.339838 containerd[1578]: time="2025-01-30T13:47:29.339782555Z" level=info msg="StartContainer for \"011e78453d15fa5300c2f45dbff52438b8073775eb2728cb3cfd52b3e5d9bd3f\" returns successfully" Jan 30 13:47:29.365873 containerd[1578]: time="2025-01-30T13:47:29.365812322Z" level=info msg="shim disconnected" id=011e78453d15fa5300c2f45dbff52438b8073775eb2728cb3cfd52b3e5d9bd3f namespace=k8s.io Jan 30 13:47:29.365873 containerd[1578]: time="2025-01-30T13:47:29.365865714Z" level=warning msg="cleaning up after shim disconnected" id=011e78453d15fa5300c2f45dbff52438b8073775eb2728cb3cfd52b3e5d9bd3f namespace=k8s.io Jan 30 13:47:29.365873 containerd[1578]: time="2025-01-30T13:47:29.365873729Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:29.588322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-011e78453d15fa5300c2f45dbff52438b8073775eb2728cb3cfd52b3e5d9bd3f-rootfs.mount: Deactivated successfully. Jan 30 13:47:30.269093 kubelet[2747]: E0130 13:47:30.269057 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:30.271513 containerd[1578]: time="2025-01-30T13:47:30.271468240Z" level=info msg="CreateContainer within sandbox \"139d50a6790c416bd3a54e430a86ae7efc153dcf3c07835a00b139e857e9db39\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:47:30.285500 containerd[1578]: time="2025-01-30T13:47:30.285357957Z" level=info msg="CreateContainer within sandbox \"139d50a6790c416bd3a54e430a86ae7efc153dcf3c07835a00b139e857e9db39\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3542a520866e8aaba482b1b1d328b22708e109da9c5950aa68f1e56a374874b6\"" Jan 30 13:47:30.286957 containerd[1578]: time="2025-01-30T13:47:30.286927349Z" level=info msg="StartContainer for \"3542a520866e8aaba482b1b1d328b22708e109da9c5950aa68f1e56a374874b6\"" Jan 30 13:47:30.347793 containerd[1578]: time="2025-01-30T13:47:30.347664115Z" level=info msg="StartContainer for \"3542a520866e8aaba482b1b1d328b22708e109da9c5950aa68f1e56a374874b6\" returns successfully" Jan 30 13:47:30.745750 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 30 13:47:31.072682 kubelet[2747]: E0130 13:47:31.072555 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:31.273637 kubelet[2747]: E0130 13:47:31.273520 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:32.613900 kubelet[2747]: E0130 13:47:32.613750 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:33.869134 
systemd-networkd[1242]: lxc_health: Link UP Jan 30 13:47:33.878226 systemd-networkd[1242]: lxc_health: Gained carrier Jan 30 13:47:34.614966 kubelet[2747]: E0130 13:47:34.614923 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:34.670593 kubelet[2747]: I0130 13:47:34.668809 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tbxpc" podStartSLOduration=8.668783637 podStartE2EDuration="8.668783637s" podCreationTimestamp="2025-01-30 13:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:47:31.300832124 +0000 UTC m=+85.313940166" watchObservedRunningTime="2025-01-30 13:47:34.668783637 +0000 UTC m=+88.681891679" Jan 30 13:47:35.153969 systemd-networkd[1242]: lxc_health: Gained IPv6LL Jan 30 13:47:35.279827 kubelet[2747]: E0130 13:47:35.279782 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:36.282147 kubelet[2747]: E0130 13:47:36.282111 2747 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:39.135527 sshd[4580]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:39.140101 systemd[1]: sshd@26-10.0.0.108:22-10.0.0.1:33702.service: Deactivated successfully. Jan 30 13:47:39.142832 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 13:47:39.143792 systemd-logind[1550]: Session 27 logged out. Waiting for processes to exit. Jan 30 13:47:39.144645 systemd-logind[1550]: Removed session 27.