Aug 13 01:17:15.559621 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025
Aug 13 01:17:15.559634 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 01:17:15.559642 kernel: BIOS-provided physical RAM map:
Aug 13 01:17:15.559646 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Aug 13 01:17:15.559650 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Aug 13 01:17:15.559654 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Aug 13 01:17:15.559658 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Aug 13 01:17:15.559662 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Aug 13 01:17:15.559666 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a70fff] usable
Aug 13 01:17:15.559670 kernel: BIOS-e820: [mem 0x0000000081a71000-0x0000000081a71fff] ACPI NVS
Aug 13 01:17:15.559675 kernel: BIOS-e820: [mem 0x0000000081a72000-0x0000000081a72fff] reserved
Aug 13 01:17:15.559679 kernel: BIOS-e820: [mem 0x0000000081a73000-0x000000008afcdfff] usable
Aug 13 01:17:15.559683 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Aug 13 01:17:15.559687 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Aug 13 01:17:15.559692 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Aug 13 01:17:15.559697 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Aug 13 01:17:15.559701 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Aug 13 01:17:15.559706 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Aug 13 01:17:15.559710 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Aug 13 01:17:15.559714 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Aug 13 01:17:15.559718 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Aug 13 01:17:15.559723 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Aug 13 01:17:15.559727 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Aug 13 01:17:15.559731 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Aug 13 01:17:15.559735 kernel: NX (Execute Disable) protection: active
Aug 13 01:17:15.559740 kernel: SMBIOS 3.2.1 present.
Aug 13 01:17:15.559745 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 2.6 12/05/2024
Aug 13 01:17:15.559749 kernel: tsc: Detected 3400.000 MHz processor
Aug 13 01:17:15.559753 kernel: tsc: Detected 3399.906 MHz TSC
Aug 13 01:17:15.559758 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:17:15.559763 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:17:15.559767 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Aug 13 01:17:15.559772 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:17:15.559776 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Aug 13 01:17:15.559781 kernel: Using GB pages for direct mapping
Aug 13 01:17:15.559785 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:17:15.559790 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Aug 13 01:17:15.559795 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Aug 13 01:17:15.559799 kernel: ACPI: FACP 0x000000008C58B5F0 000114 (v06 01072009 AMI 00010013)
Aug 13 01:17:15.559804 kernel: ACPI: DSDT 0x000000008C54F268 03C386 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Aug 13 01:17:15.559810 kernel: ACPI: FACS 0x000000008C66DF80 000040
Aug 13 01:17:15.559815 kernel: ACPI: APIC 0x000000008C58B708 00012C (v04 01072009 AMI 00010013)
Aug 13 01:17:15.559820 kernel: ACPI: FPDT 0x000000008C58B838 000044 (v01 01072009 AMI 00010013)
Aug 13 01:17:15.559825 kernel: ACPI: FIDT 0x000000008C58B880 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Aug 13 01:17:15.559830 kernel: ACPI: MCFG 0x000000008C58B920 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Aug 13 01:17:15.559835 kernel: ACPI: SPMI 0x000000008C58B960 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Aug 13 01:17:15.559840 kernel: ACPI: SSDT 0x000000008C58B9A8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Aug 13 01:17:15.559845 kernel: ACPI: SSDT 0x000000008C58D4C8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Aug 13 01:17:15.559850 kernel: ACPI: SSDT 0x000000008C590690 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Aug 13 01:17:15.559854 kernel: ACPI: HPET 0x000000008C5929C0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 01:17:15.559860 kernel: ACPI: SSDT 0x000000008C5929F8 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Aug 13 01:17:15.559865 kernel: ACPI: SSDT 0x000000008C5939A8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Aug 13 01:17:15.559870 kernel: ACPI: UEFI 0x000000008C5942A0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 01:17:15.559875 kernel: ACPI: LPIT 0x000000008C5942E8 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 01:17:15.559879 kernel: ACPI: SSDT 0x000000008C594380 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Aug 13 01:17:15.559884 kernel: ACPI: SSDT 0x000000008C596B60 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Aug 13 01:17:15.559889 kernel: ACPI: DBGP 0x000000008C598048 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Aug 13 01:17:15.559894 kernel: ACPI: DBG2 0x000000008C598080 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Aug 13 01:17:15.559899 kernel: ACPI: SSDT 0x000000008C5980D8 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Aug 13 01:17:15.559904 kernel: ACPI: DMAR 0x000000008C599C40 000070 (v01 INTEL EDK2 00000002 01000013)
Aug 13 01:17:15.559909 kernel: ACPI: SSDT 0x000000008C599CB0 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Aug 13 01:17:15.559914 kernel: ACPI: TPM2 0x000000008C599DF8 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Aug 13 01:17:15.559919 kernel: ACPI: SSDT 0x000000008C599E30 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Aug 13 01:17:15.559924 kernel: ACPI: WSMT 0x000000008C59ABC0 000028 (v01 SUPERM 01072009 AMI 00010013)
Aug 13 01:17:15.559928 kernel: ACPI: EINJ 0x000000008C59ABE8 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Aug 13 01:17:15.559933 kernel: ACPI: ERST 0x000000008C59AD18 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Aug 13 01:17:15.559938 kernel: ACPI: BERT 0x000000008C59AF48 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Aug 13 01:17:15.559944 kernel: ACPI: HEST 0x000000008C59AF78 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Aug 13 01:17:15.559948 kernel: ACPI: SSDT 0x000000008C59B1F8 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Aug 13 01:17:15.559953 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b5f0-0x8c58b703]
Aug 13 01:17:15.559958 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b5ed]
Aug 13 01:17:15.559963 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Aug 13 01:17:15.559968 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b708-0x8c58b833]
Aug 13 01:17:15.559973 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b838-0x8c58b87b]
Aug 13 01:17:15.559977 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b880-0x8c58b91b]
Aug 13 01:17:15.559983 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b920-0x8c58b95b]
Aug 13 01:17:15.559988 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b960-0x8c58b9a0]
Aug 13 01:17:15.559993 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b9a8-0x8c58d4c3]
Aug 13 01:17:15.559998 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d4c8-0x8c59068d]
Aug 13 01:17:15.560002 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590690-0x8c5929ba]
Aug 13 01:17:15.560007 kernel: ACPI: Reserving HPET table memory at [mem 0x8c5929c0-0x8c5929f7]
Aug 13 01:17:15.560012 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5929f8-0x8c5939a5]
Aug 13 01:17:15.560017 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5939a8-0x8c59429e]
Aug 13 01:17:15.560022 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c5942a0-0x8c5942e1]
Aug 13 01:17:15.560027 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c5942e8-0x8c59437b]
Aug 13 01:17:15.560032 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594380-0x8c596b5d]
Aug 13 01:17:15.560037 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596b60-0x8c598041]
Aug 13 01:17:15.560042 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c598048-0x8c59807b]
Aug 13 01:17:15.560046 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598080-0x8c5980d3]
Aug 13 01:17:15.560051 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5980d8-0x8c599c3e]
Aug 13 01:17:15.560056 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599c40-0x8c599caf]
Aug 13 01:17:15.560061 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599cb0-0x8c599df3]
Aug 13 01:17:15.560065 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599df8-0x8c599e2b]
Aug 13 01:17:15.560071 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599e30-0x8c59abbe]
Aug 13 01:17:15.560076 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59abc0-0x8c59abe7]
Aug 13 01:17:15.560081 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59abe8-0x8c59ad17]
Aug 13 01:17:15.560085 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad18-0x8c59af47]
Aug 13 01:17:15.560090 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59af48-0x8c59af77]
Aug 13 01:17:15.560095 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59af78-0x8c59b1f3]
Aug 13 01:17:15.560100 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b1f8-0x8c59b359]
Aug 13 01:17:15.560105 kernel: No NUMA configuration found
Aug 13 01:17:15.560110 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Aug 13 01:17:15.560115 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Aug 13 01:17:15.560120 kernel: Zone ranges:
Aug 13 01:17:15.560125 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:17:15.560130 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:17:15.560135 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Aug 13 01:17:15.560139 kernel: Movable zone start for each node
Aug 13 01:17:15.560144 kernel: Early memory node ranges
Aug 13 01:17:15.560149 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Aug 13 01:17:15.560154 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Aug 13 01:17:15.560159 kernel: node 0: [mem 0x0000000040400000-0x0000000081a70fff]
Aug 13 01:17:15.560164 kernel: node 0: [mem 0x0000000081a73000-0x000000008afcdfff]
Aug 13 01:17:15.560169 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Aug 13 01:17:15.560174 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Aug 13 01:17:15.560178 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Aug 13 01:17:15.560183 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Aug 13 01:17:15.560188 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:17:15.560196 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Aug 13 01:17:15.560202 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Aug 13 01:17:15.560207 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Aug 13 01:17:15.560212 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Aug 13 01:17:15.560218 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Aug 13 01:17:15.560224 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Aug 13 01:17:15.560231 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Aug 13 01:17:15.560236 kernel: ACPI: PM-Timer IO Port: 0x1808
Aug 13 01:17:15.560241 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Aug 13 01:17:15.560246 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Aug 13 01:17:15.560251 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Aug 13 01:17:15.560257 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Aug 13 01:17:15.560263 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Aug 13 01:17:15.560268 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Aug 13 01:17:15.560273 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Aug 13 01:17:15.560278 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Aug 13 01:17:15.560283 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Aug 13 01:17:15.560288 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Aug 13 01:17:15.560293 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Aug 13 01:17:15.560298 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Aug 13 01:17:15.560304 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Aug 13 01:17:15.560309 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Aug 13 01:17:15.560314 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Aug 13 01:17:15.560320 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Aug 13 01:17:15.560325 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Aug 13 01:17:15.560330 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:17:15.560335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:17:15.560340 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:17:15.560345 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:17:15.560351 kernel: TSC deadline timer available
Aug 13 01:17:15.560356 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Aug 13 01:17:15.560361 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Aug 13 01:17:15.560367 kernel: Booting paravirtualized kernel on bare hardware
Aug 13 01:17:15.560372 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:17:15.560377 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Aug 13 01:17:15.560382 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Aug 13 01:17:15.560387 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Aug 13 01:17:15.560392 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Aug 13 01:17:15.560398 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Aug 13 01:17:15.560403 kernel: Policy zone: Normal
Aug 13 01:17:15.560409 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57
Aug 13 01:17:15.560415 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:17:15.560420 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Aug 13 01:17:15.560425 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Aug 13 01:17:15.560430 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:17:15.560436 kernel: Memory: 32722608K/33452984K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 730116K reserved, 0K cma-reserved)
Aug 13 01:17:15.560442 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Aug 13 01:17:15.560447 kernel: ftrace: allocating 34608 entries in 136 pages
Aug 13 01:17:15.560452 kernel: ftrace: allocated 136 pages with 2 groups
Aug 13 01:17:15.560457 kernel: rcu: Hierarchical RCU implementation.
Aug 13 01:17:15.560462 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:17:15.560468 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Aug 13 01:17:15.560473 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:17:15.560478 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:17:15.560484 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:17:15.560489 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Aug 13 01:17:15.560494 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Aug 13 01:17:15.560500 kernel: random: crng init done
Aug 13 01:17:15.560505 kernel: Console: colour dummy device 80x25
Aug 13 01:17:15.560510 kernel: printk: console [tty0] enabled
Aug 13 01:17:15.560515 kernel: printk: console [ttyS1] enabled
Aug 13 01:17:15.560520 kernel: ACPI: Core revision 20210730
Aug 13 01:17:15.560525 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Aug 13 01:17:15.560531 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:17:15.560536 kernel: DMAR: Host address width 39
Aug 13 01:17:15.560541 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Aug 13 01:17:15.560547 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Aug 13 01:17:15.560552 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Aug 13 01:17:15.560557 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Aug 13 01:17:15.560562 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Aug 13 01:17:15.560567 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Aug 13 01:17:15.560572 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Aug 13 01:17:15.560578 kernel: x2apic enabled
Aug 13 01:17:15.560583 kernel: Switched APIC routing to cluster x2apic.
Aug 13 01:17:15.560589 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:17:15.560594 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Aug 13 01:17:15.560599 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Aug 13 01:17:15.560604 kernel: CPU0: Thermal monitoring enabled (TM1)
Aug 13 01:17:15.560609 kernel: process: using mwait in idle threads
Aug 13 01:17:15.560614 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 01:17:15.560620 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 01:17:15.560625 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:17:15.560631 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Aug 13 01:17:15.560636 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Aug 13 01:17:15.560641 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Aug 13 01:17:15.560646 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Aug 13 01:17:15.560651 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Aug 13 01:17:15.560656 kernel: RETBleed: Mitigation: Enhanced IBRS
Aug 13 01:17:15.560662 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:17:15.560667 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Aug 13 01:17:15.560673 kernel: TAA: Mitigation: TSX disabled
Aug 13 01:17:15.560678 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Aug 13 01:17:15.560683 kernel: SRBDS: Mitigation: Microcode
Aug 13 01:17:15.560688 kernel: GDS: Mitigation: Microcode
Aug 13 01:17:15.560693 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 01:17:15.560699 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:17:15.560704 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:17:15.560709 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:17:15.560714 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Aug 13 01:17:15.560720 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Aug 13 01:17:15.560725 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:17:15.560730 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Aug 13 01:17:15.560735 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Aug 13 01:17:15.560740 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Aug 13 01:17:15.560745 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:17:15.560750 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:17:15.560755 kernel: LSM: Security Framework initializing
Aug 13 01:17:15.560760 kernel: SELinux: Initializing.
Aug 13 01:17:15.560767 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:17:15.560772 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:17:15.560777 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Aug 13 01:17:15.560782 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Aug 13 01:17:15.560787 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Aug 13 01:17:15.560792 kernel: ... version: 4
Aug 13 01:17:15.560797 kernel: ... bit width: 48
Aug 13 01:17:15.560803 kernel: ... generic registers: 4
Aug 13 01:17:15.560808 kernel: ... value mask: 0000ffffffffffff
Aug 13 01:17:15.560814 kernel: ... max period: 00007fffffffffff
Aug 13 01:17:15.560819 kernel: ... fixed-purpose events: 3
Aug 13 01:17:15.560824 kernel: ... event mask: 000000070000000f
Aug 13 01:17:15.560829 kernel: signal: max sigframe size: 2032
Aug 13 01:17:15.560834 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:17:15.560839 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Aug 13 01:17:15.560845 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:17:15.560850 kernel: x86: Booting SMP configuration:
Aug 13 01:17:15.560855 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Aug 13 01:17:15.560861 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 01:17:15.560866 kernel: #9 #10 #11 #12 #13 #14 #15
Aug 13 01:17:15.560871 kernel: smp: Brought up 1 node, 16 CPUs
Aug 13 01:17:15.560876 kernel: smpboot: Max logical packages: 1
Aug 13 01:17:15.560882 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Aug 13 01:17:15.560887 kernel: devtmpfs: initialized
Aug 13 01:17:15.560892 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:17:15.560897 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a71000-0x81a71fff] (4096 bytes)
Aug 13 01:17:15.560902 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Aug 13 01:17:15.560908 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:17:15.560913 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Aug 13 01:17:15.560918 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:17:15.560924 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:17:15.560929 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:17:15.560934 kernel: audit: type=2000 audit(1755047830.119:1): state=initialized audit_enabled=0 res=1
Aug 13 01:17:15.560939 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:17:15.560944 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:17:15.560950 kernel: cpuidle: using governor menu
Aug 13 01:17:15.560955 kernel: ACPI: bus type PCI registered
Aug 13 01:17:15.560960 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:17:15.560965 kernel: dca service started, version 1.12.1
Aug 13 01:17:15.560970 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Aug 13 01:17:15.560975 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Aug 13 01:17:15.560980 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:17:15.560985 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Aug 13 01:17:15.560990 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:17:15.560996 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:17:15.561002 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:17:15.561007 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:17:15.561012 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:17:15.561017 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:17:15.561022 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 01:17:15.561027 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 01:17:15.561032 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 01:17:15.561037 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Aug 13 01:17:15.561043 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 01:17:15.561048 kernel: ACPI: SSDT 0xFFFF92B78021B800 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Aug 13 01:17:15.561054 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Aug 13 01:17:15.561059 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 01:17:15.561064 kernel: ACPI: SSDT 0xFFFF92B781AE6000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Aug 13 01:17:15.561069 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 01:17:15.561074 kernel: ACPI: SSDT 0xFFFF92B781A5E800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Aug 13 01:17:15.561079 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 01:17:15.561084 kernel: ACPI: SSDT 0xFFFF92B781B48800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Aug 13 01:17:15.561089 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 01:17:15.561095 kernel: ACPI: SSDT 0xFFFF92B780148000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Aug 13 01:17:15.561100 kernel: ACPI: Dynamic OEM Table Load:
Aug 13 01:17:15.561105 kernel: ACPI: SSDT 0xFFFF92B781AE5C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Aug 13 01:17:15.561110 kernel: ACPI: Interpreter enabled
Aug 13 01:17:15.561115 kernel: ACPI: PM: (supports S0 S5)
Aug 13 01:17:15.561120 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:17:15.561126 kernel: HEST: Enabling Firmware First mode for corrected errors.
Aug 13 01:17:15.561131 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Aug 13 01:17:15.561136 kernel: HEST: Table parsing has been initialized.
Aug 13 01:17:15.561142 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Aug 13 01:17:15.561147 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:17:15.561152 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Aug 13 01:17:15.561157 kernel: ACPI: PM: Power Resource [USBC]
Aug 13 01:17:15.561162 kernel: ACPI: PM: Power Resource [V0PR]
Aug 13 01:17:15.561168 kernel: ACPI: PM: Power Resource [V1PR]
Aug 13 01:17:15.561173 kernel: ACPI: PM: Power Resource [V2PR]
Aug 13 01:17:15.561178 kernel: ACPI: PM: Power Resource [WRST]
Aug 13 01:17:15.561183 kernel: ACPI: PM: Power Resource [FN00]
Aug 13 01:17:15.561188 kernel: ACPI: PM: Power Resource [FN01]
Aug 13 01:17:15.561194 kernel: ACPI: PM: Power Resource [FN02]
Aug 13 01:17:15.561199 kernel: ACPI: PM: Power Resource [FN03]
Aug 13 01:17:15.561204 kernel: ACPI: PM: Power Resource [FN04]
Aug 13 01:17:15.561209 kernel: ACPI: PM: Power Resource [PIN]
Aug 13 01:17:15.561214 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Aug 13 01:17:15.561284 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:17:15.561333 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Aug 13 01:17:15.561380 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Aug 13 01:17:15.561388 kernel: PCI host bridge to bus 0000:00
Aug 13 01:17:15.561435 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:17:15.561476 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:17:15.561517 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:17:15.561557 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Aug 13 01:17:15.561596 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Aug 13 01:17:15.561638 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Aug 13 01:17:15.561691 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Aug 13 01:17:15.561746 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Aug 13 01:17:15.561793 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Aug 13 01:17:15.561845 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Aug 13 01:17:15.561891 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Aug 13 01:17:15.561943 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Aug 13 01:17:15.561990 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Aug 13 01:17:15.562039 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Aug 13 01:17:15.562085 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Aug 13 01:17:15.562136 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Aug 13 01:17:15.562183 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Aug 13 01:17:15.562232 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Aug 13 01:17:15.562283 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Aug 13 01:17:15.562329 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Aug 13 01:17:15.562376 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Aug 13 01:17:15.562428 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Aug 13 01:17:15.562474 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Aug 13 01:17:15.562524 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Aug 13 01:17:15.562571 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Aug 13 01:17:15.562620 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Aug 13 01:17:15.562665 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Aug 13 01:17:15.562709 kernel: pci 0000:00:16.0: PME# supported from D3hot
Aug 13 01:17:15.562759 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Aug 13 01:17:15.562804 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Aug 13 01:17:15.562851 kernel: pci 0000:00:16.1: PME# supported from D3hot
Aug 13 01:17:15.562900 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Aug 13 01:17:15.562945 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Aug 13 01:17:15.562990 kernel: pci 0000:00:16.4: PME# supported from D3hot
Aug 13 01:17:15.563039 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Aug 13 01:17:15.563107 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Aug 13 01:17:15.563154 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Aug 13 01:17:15.563201 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Aug 13 01:17:15.563269 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Aug 13 01:17:15.563332 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Aug 13 01:17:15.563376 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Aug 13 01:17:15.563420 kernel: pci 0000:00:17.0: PME# supported from D3hot
Aug 13 01:17:15.563470 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Aug 13 01:17:15.563515 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Aug 13 01:17:15.563570 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Aug 13 01:17:15.563616 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Aug 13 01:17:15.563665 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Aug 13 01:17:15.563709 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Aug 13 01:17:15.563758 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Aug 13 01:17:15.563805 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Aug 13 01:17:15.563856 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Aug 13 01:17:15.563902 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Aug 13 01:17:15.563951 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Aug 13 01:17:15.563998 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Aug 13 01:17:15.564046 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Aug 13 01:17:15.564097 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Aug 13 01:17:15.564141 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Aug 13 01:17:15.564185 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Aug 13 01:17:15.564236 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Aug 13 01:17:15.564318 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Aug 13 01:17:15.564366 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Aug 13 01:17:15.564416 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Aug 13 01:17:15.564464 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Aug 13 01:17:15.564512 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Aug 13 01:17:15.564559 kernel: pci 0000:02:00.0: PME# supported from D3cold
Aug 13 01:17:15.564604 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Aug 13 01:17:15.564651 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Aug 13 01:17:15.564703 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Aug 13 01:17:15.564751 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Aug 13 01:17:15.564797 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Aug 13 01:17:15.564844 kernel: pci 0000:02:00.1: PME# supported from D3cold
Aug 13 01:17:15.564924 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Aug 13 01:17:15.564991 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Aug 13 01:17:15.565038 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Aug 13 01:17:15.565084 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff]
Aug 13 01:17:15.565130 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Aug 13 01:17:15.565174 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Aug 13 01:17:15.565226 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Aug 13 01:17:15.565318 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Aug 13 01:17:15.565365 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Aug 13 01:17:15.565410 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Aug 13 01:17:15.565459 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Aug 13 01:17:15.565505 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Aug 13 01:17:15.565550 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Aug 13 01:17:15.565595 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Aug 13 01:17:15.565639 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Aug 13 01:17:15.565689 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect
Aug 13 01:17:15.565736 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Aug 13 01:17:15.565784 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Aug 13
01:17:15.565830 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Aug 13 01:17:15.565877 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Aug 13 01:17:15.565922 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Aug 13 01:17:15.565968 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Aug 13 01:17:15.566013 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Aug 13 01:17:15.566057 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Aug 13 01:17:15.566102 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Aug 13 01:17:15.566157 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Aug 13 01:17:15.566204 kernel: pci 0000:07:00.0: enabling Extended Tags Aug 13 01:17:15.566275 kernel: pci 0000:07:00.0: supports D1 D2 Aug 13 01:17:15.566340 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 01:17:15.566385 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Aug 13 01:17:15.566430 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Aug 13 01:17:15.566476 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Aug 13 01:17:15.566526 kernel: pci_bus 0000:08: extended config space not accessible Aug 13 01:17:15.566580 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Aug 13 01:17:15.566631 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Aug 13 01:17:15.566679 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Aug 13 01:17:15.566729 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Aug 13 01:17:15.566778 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 01:17:15.566827 kernel: pci 0000:08:00.0: supports D1 D2 Aug 13 01:17:15.566876 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 01:17:15.566925 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Aug 13 01:17:15.566972 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Aug 13 01:17:15.567018 kernel: pci 
0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Aug 13 01:17:15.567026 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Aug 13 01:17:15.567032 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Aug 13 01:17:15.567037 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Aug 13 01:17:15.567042 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Aug 13 01:17:15.567048 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Aug 13 01:17:15.567054 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Aug 13 01:17:15.567060 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Aug 13 01:17:15.567066 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Aug 13 01:17:15.567071 kernel: iommu: Default domain type: Translated Aug 13 01:17:15.567076 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 01:17:15.567125 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Aug 13 01:17:15.567173 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 01:17:15.567223 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Aug 13 01:17:15.567234 kernel: vgaarb: loaded Aug 13 01:17:15.567263 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 01:17:15.567269 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 01:17:15.567274 kernel: PTP clock support registered Aug 13 01:17:15.567299 kernel: PCI: Using ACPI for IRQ routing Aug 13 01:17:15.567305 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 01:17:15.567310 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Aug 13 01:17:15.567315 kernel: e820: reserve RAM buffer [mem 0x81a71000-0x83ffffff] Aug 13 01:17:15.567320 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Aug 13 01:17:15.567327 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Aug 13 01:17:15.567332 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Aug 13 01:17:15.567337 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Aug 13 01:17:15.567342 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Aug 13 01:17:15.567348 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Aug 13 01:17:15.567353 kernel: clocksource: Switched to clocksource tsc-early Aug 13 01:17:15.567358 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 01:17:15.567364 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 01:17:15.567369 kernel: pnp: PnP ACPI init Aug 13 01:17:15.567417 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Aug 13 01:17:15.567463 kernel: pnp 00:02: [dma 0 disabled] Aug 13 01:17:15.567509 kernel: pnp 00:03: [dma 0 disabled] Aug 13 01:17:15.567555 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Aug 13 01:17:15.567595 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Aug 13 01:17:15.567639 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Aug 13 01:17:15.567685 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Aug 13 01:17:15.567725 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Aug 13 01:17:15.567767 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Aug 13 01:17:15.567806 kernel: system 00:06: [mem 0xe0000000-0xefffffff] 
has been reserved Aug 13 01:17:15.567847 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Aug 13 01:17:15.567886 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Aug 13 01:17:15.567927 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Aug 13 01:17:15.567969 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Aug 13 01:17:15.568013 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Aug 13 01:17:15.568054 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Aug 13 01:17:15.568094 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Aug 13 01:17:15.568134 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Aug 13 01:17:15.568174 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Aug 13 01:17:15.568215 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Aug 13 01:17:15.568285 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Aug 13 01:17:15.568349 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Aug 13 01:17:15.568357 kernel: pnp: PnP ACPI: found 10 devices Aug 13 01:17:15.568362 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 01:17:15.568368 kernel: NET: Registered PF_INET protocol family Aug 13 01:17:15.568373 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 01:17:15.568379 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Aug 13 01:17:15.568384 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 01:17:15.568391 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 01:17:15.568397 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Aug 13 01:17:15.568402 kernel: TCP: Hash tables configured (established 262144 bind 65536) Aug 13 
01:17:15.568408 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 01:17:15.568413 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 01:17:15.568419 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 01:17:15.568424 kernel: NET: Registered PF_XDP protocol family Aug 13 01:17:15.568468 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Aug 13 01:17:15.568514 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Aug 13 01:17:15.568561 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Aug 13 01:17:15.568607 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 01:17:15.568655 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Aug 13 01:17:15.568701 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Aug 13 01:17:15.568750 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Aug 13 01:17:15.568796 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Aug 13 01:17:15.568843 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Aug 13 01:17:15.568890 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Aug 13 01:17:15.568937 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Aug 13 01:17:15.568982 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Aug 13 01:17:15.569027 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Aug 13 01:17:15.569073 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Aug 13 01:17:15.569120 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Aug 13 01:17:15.569166 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Aug 13 01:17:15.569212 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Aug 13 01:17:15.569302 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Aug 13 01:17:15.569348 kernel: pci 0000:00:1c.0: 
PCI bridge to [bus 06] Aug 13 01:17:15.569394 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Aug 13 01:17:15.569442 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Aug 13 01:17:15.569488 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Aug 13 01:17:15.569535 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Aug 13 01:17:15.569581 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Aug 13 01:17:15.569627 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Aug 13 01:17:15.569668 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Aug 13 01:17:15.569708 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 01:17:15.569748 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 01:17:15.569787 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 01:17:15.569827 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Aug 13 01:17:15.569865 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Aug 13 01:17:15.569915 kernel: pci_bus 0000:02: resource 1 [mem 0x95100000-0x952fffff] Aug 13 01:17:15.569958 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Aug 13 01:17:15.570003 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Aug 13 01:17:15.570045 kernel: pci_bus 0000:04: resource 1 [mem 0x95400000-0x954fffff] Aug 13 01:17:15.570091 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Aug 13 01:17:15.570135 kernel: pci_bus 0000:05: resource 1 [mem 0x95300000-0x953fffff] Aug 13 01:17:15.570182 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Aug 13 01:17:15.570224 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Aug 13 01:17:15.570315 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Aug 13 01:17:15.570359 kernel: pci_bus 0000:08: resource 1 [mem 0x94000000-0x950fffff] Aug 13 01:17:15.570366 kernel: PCI: CLS 64 bytes, default 64 Aug 13 
01:17:15.570372 kernel: DMAR: No ATSR found Aug 13 01:17:15.570377 kernel: DMAR: No SATC found Aug 13 01:17:15.570383 kernel: DMAR: dmar0: Using Queued invalidation Aug 13 01:17:15.570430 kernel: pci 0000:00:00.0: Adding to iommu group 0 Aug 13 01:17:15.570475 kernel: pci 0000:00:01.0: Adding to iommu group 1 Aug 13 01:17:15.570521 kernel: pci 0000:00:01.1: Adding to iommu group 1 Aug 13 01:17:15.570566 kernel: pci 0000:00:08.0: Adding to iommu group 2 Aug 13 01:17:15.570611 kernel: pci 0000:00:12.0: Adding to iommu group 3 Aug 13 01:17:15.570655 kernel: pci 0000:00:14.0: Adding to iommu group 4 Aug 13 01:17:15.570701 kernel: pci 0000:00:14.2: Adding to iommu group 4 Aug 13 01:17:15.570745 kernel: pci 0000:00:15.0: Adding to iommu group 5 Aug 13 01:17:15.570793 kernel: pci 0000:00:15.1: Adding to iommu group 5 Aug 13 01:17:15.570838 kernel: pci 0000:00:16.0: Adding to iommu group 6 Aug 13 01:17:15.570882 kernel: pci 0000:00:16.1: Adding to iommu group 6 Aug 13 01:17:15.570927 kernel: pci 0000:00:16.4: Adding to iommu group 6 Aug 13 01:17:15.570972 kernel: pci 0000:00:17.0: Adding to iommu group 7 Aug 13 01:17:15.571017 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Aug 13 01:17:15.571061 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Aug 13 01:17:15.571107 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Aug 13 01:17:15.571154 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Aug 13 01:17:15.571199 kernel: pci 0000:00:1c.1: Adding to iommu group 12 Aug 13 01:17:15.571247 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Aug 13 01:17:15.571334 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Aug 13 01:17:15.571380 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Aug 13 01:17:15.571424 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Aug 13 01:17:15.571471 kernel: pci 0000:02:00.0: Adding to iommu group 1 Aug 13 01:17:15.571517 kernel: pci 0000:02:00.1: Adding to iommu group 1 Aug 13 01:17:15.571566 kernel: pci 0000:04:00.0: Adding to iommu group 15 
Aug 13 01:17:15.571613 kernel: pci 0000:05:00.0: Adding to iommu group 16 Aug 13 01:17:15.571660 kernel: pci 0000:07:00.0: Adding to iommu group 17 Aug 13 01:17:15.571710 kernel: pci 0000:08:00.0: Adding to iommu group 17 Aug 13 01:17:15.571718 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Aug 13 01:17:15.571723 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 01:17:15.571729 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Aug 13 01:17:15.571734 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Aug 13 01:17:15.571739 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Aug 13 01:17:15.571746 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Aug 13 01:17:15.571752 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Aug 13 01:17:15.571800 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Aug 13 01:17:15.571809 kernel: Initialise system trusted keyrings Aug 13 01:17:15.571814 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Aug 13 01:17:15.571820 kernel: Key type asymmetric registered Aug 13 01:17:15.571825 kernel: Asymmetric key parser 'x509' registered Aug 13 01:17:15.571830 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 01:17:15.571837 kernel: io scheduler mq-deadline registered Aug 13 01:17:15.571842 kernel: io scheduler kyber registered Aug 13 01:17:15.571848 kernel: io scheduler bfq registered Aug 13 01:17:15.571893 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Aug 13 01:17:15.571939 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 122 Aug 13 01:17:15.571985 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 123 Aug 13 01:17:15.572031 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 124 Aug 13 01:17:15.572076 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 125 Aug 13 01:17:15.572123 kernel: pcieport 0000:00:1c.0: PME: 
Signaling with IRQ 126 Aug 13 01:17:15.572169 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 127 Aug 13 01:17:15.572220 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Aug 13 01:17:15.572230 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Aug 13 01:17:15.572236 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Aug 13 01:17:15.572241 kernel: pstore: Registered erst as persistent store backend Aug 13 01:17:15.572273 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 01:17:15.572278 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 01:17:15.572304 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 01:17:15.572310 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 13 01:17:15.572356 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Aug 13 01:17:15.572364 kernel: i8042: PNP: No PS/2 controller found. Aug 13 01:17:15.572405 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Aug 13 01:17:15.572447 kernel: rtc_cmos rtc_cmos: registered as rtc0 Aug 13 01:17:15.572488 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-08-13T01:17:14 UTC (1755047834) Aug 13 01:17:15.572531 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Aug 13 01:17:15.572539 kernel: intel_pstate: Intel P-state driver initializing Aug 13 01:17:15.572544 kernel: intel_pstate: Disabling energy efficiency optimization Aug 13 01:17:15.572549 kernel: intel_pstate: HWP enabled Aug 13 01:17:15.572555 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Aug 13 01:17:15.572560 kernel: vesafb: scrolling: redraw Aug 13 01:17:15.572566 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Aug 13 01:17:15.572571 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000d76c032c, using 768k, total 768k Aug 13 01:17:15.572576 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 01:17:15.572583 kernel: fb0: 
VESA VGA frame buffer device Aug 13 01:17:15.572588 kernel: NET: Registered PF_INET6 protocol family Aug 13 01:17:15.572594 kernel: Segment Routing with IPv6 Aug 13 01:17:15.572599 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 01:17:15.572604 kernel: NET: Registered PF_PACKET protocol family Aug 13 01:17:15.572610 kernel: Key type dns_resolver registered Aug 13 01:17:15.572615 kernel: microcode: sig=0x906ed, pf=0x2, revision=0x102 Aug 13 01:17:15.572621 kernel: microcode: Microcode Update Driver: v2.2. Aug 13 01:17:15.572626 kernel: IPI shorthand broadcast: enabled Aug 13 01:17:15.572631 kernel: sched_clock: Marking stable (1783394021, 1334484272)->(4530938271, -1413059978) Aug 13 01:17:15.572638 kernel: registered taskstats version 1 Aug 13 01:17:15.572643 kernel: Loading compiled-in X.509 certificates Aug 13 01:17:15.572648 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 01:17:15.572653 kernel: Key type .fscrypt registered Aug 13 01:17:15.572659 kernel: Key type fscrypt-provisioning registered Aug 13 01:17:15.572664 kernel: pstore: Using crash dump compression: deflate Aug 13 01:17:15.572669 kernel: ima: Allocated hash algorithm: sha1 Aug 13 01:17:15.572675 kernel: ima: No architecture policies found Aug 13 01:17:15.572681 kernel: clk: Disabling unused clocks Aug 13 01:17:15.572686 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 01:17:15.572692 kernel: Write protecting the kernel read-only data: 28672k Aug 13 01:17:15.572697 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 01:17:15.572703 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 01:17:15.572708 kernel: Run /init as init process Aug 13 01:17:15.572713 kernel: with arguments: Aug 13 01:17:15.572719 kernel: /init Aug 13 01:17:15.572724 kernel: with environment: Aug 13 01:17:15.572730 kernel: HOME=/ Aug 13 01:17:15.572735 kernel: TERM=linux Aug 13 
01:17:15.572740 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 01:17:15.572747 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 01:17:15.572754 systemd[1]: Detected architecture x86-64. Aug 13 01:17:15.572760 systemd[1]: Running in initrd. Aug 13 01:17:15.572766 systemd[1]: No hostname configured, using default hostname. Aug 13 01:17:15.572771 systemd[1]: Hostname set to . Aug 13 01:17:15.572777 systemd[1]: Initializing machine ID from random generator. Aug 13 01:17:15.572783 systemd[1]: Queued start job for default target initrd.target. Aug 13 01:17:15.572789 systemd[1]: Started systemd-ask-password-console.path. Aug 13 01:17:15.572794 systemd[1]: Reached target cryptsetup.target. Aug 13 01:17:15.572799 systemd[1]: Reached target paths.target. Aug 13 01:17:15.572805 systemd[1]: Reached target slices.target. Aug 13 01:17:15.572810 systemd[1]: Reached target swap.target. Aug 13 01:17:15.572816 systemd[1]: Reached target timers.target. Aug 13 01:17:15.572822 systemd[1]: Listening on iscsid.socket. Aug 13 01:17:15.572828 systemd[1]: Listening on iscsiuio.socket. Aug 13 01:17:15.572834 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 01:17:15.572839 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 01:17:15.572845 systemd[1]: Listening on systemd-journald.socket. Aug 13 01:17:15.572850 systemd[1]: Listening on systemd-networkd.socket. 
Aug 13 01:17:15.572856 kernel: tsc: Refined TSC clocksource calibration: 3408.017 MHz Aug 13 01:17:15.572862 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fe44c681, max_idle_ns: 440795269197 ns Aug 13 01:17:15.572868 kernel: clocksource: Switched to clocksource tsc Aug 13 01:17:15.572873 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 01:17:15.572879 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 01:17:15.572884 systemd[1]: Reached target sockets.target. Aug 13 01:17:15.572890 systemd[1]: Starting kmod-static-nodes.service... Aug 13 01:17:15.572896 systemd[1]: Finished network-cleanup.service. Aug 13 01:17:15.572902 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:17:15.572907 systemd[1]: Starting systemd-journald.service... Aug 13 01:17:15.572914 systemd[1]: Starting systemd-modules-load.service... Aug 13 01:17:15.572921 systemd-journald[269]: Journal started Aug 13 01:17:15.572998 systemd-journald[269]: Runtime Journal (/run/log/journal/65118271ce154ce9866566f87acfe610) is 8.0M, max 640.1M, 632.1M free. Aug 13 01:17:15.574471 systemd-modules-load[270]: Inserted module 'overlay' Aug 13 01:17:15.578000 audit: BPF prog-id=6 op=LOAD Aug 13 01:17:15.597284 kernel: audit: type=1334 audit(1755047835.578:2): prog-id=6 op=LOAD Aug 13 01:17:15.597300 systemd[1]: Starting systemd-resolved.service... Aug 13 01:17:15.647283 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 01:17:15.647301 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 01:17:15.680277 kernel: Bridge firewalling registered Aug 13 01:17:15.680294 systemd[1]: Started systemd-journald.service. 
Aug 13 01:17:15.694332 systemd-modules-load[270]: Inserted module 'br_netfilter' Aug 13 01:17:15.742140 kernel: audit: type=1130 audit(1755047835.701:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:15.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:15.697474 systemd-resolved[272]: Positive Trust Anchors: Aug 13 01:17:15.806361 kernel: SCSI subsystem initialized Aug 13 01:17:15.806375 kernel: audit: type=1130 audit(1755047835.753:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:15.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:15.697481 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:17:15.922319 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 01:17:15.922331 kernel: audit: type=1130 audit(1755047835.828:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:17:15.922339 kernel: device-mapper: uevent: version 1.0.3 Aug 13 01:17:15.922346 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 01:17:15.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:15.697502 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 01:17:16.005504 kernel: audit: type=1130 audit(1755047835.939:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:15.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:15.699156 systemd-resolved[272]: Defaulting to hostname 'linux'. Aug 13 01:17:16.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:15.702473 systemd[1]: Started systemd-resolved.service. Aug 13 01:17:16.113384 kernel: audit: type=1130 audit(1755047836.013:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:17:16.113396 kernel: audit: type=1130 audit(1755047836.066:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:16.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:15.754428 systemd[1]: Finished kmod-static-nodes.service. Aug 13 01:17:15.829356 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 01:17:15.925330 systemd-modules-load[270]: Inserted module 'dm_multipath' Aug 13 01:17:15.940541 systemd[1]: Finished systemd-modules-load.service. Aug 13 01:17:16.014610 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 01:17:16.067539 systemd[1]: Reached target nss-lookup.target. Aug 13 01:17:16.122855 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 01:17:16.136785 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:17:16.143880 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 01:17:16.144625 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:17:16.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:16.146792 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 01:17:16.192360 kernel: audit: type=1130 audit(1755047836.143:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:16.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:17:16.209608 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 01:17:16.277295 kernel: audit: type=1130 audit(1755047836.208:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:16.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:16.268935 systemd[1]: Starting dracut-cmdline.service... Aug 13 01:17:16.291369 dracut-cmdline[295]: dracut-dracut-053 Aug 13 01:17:16.291369 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Aug 13 01:17:16.291369 dracut-cmdline[295]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 01:17:16.361325 kernel: Loading iSCSI transport class v2.0-870. Aug 13 01:17:16.361340 kernel: iscsi: registered transport (tcp) Aug 13 01:17:16.421645 kernel: iscsi: registered transport (qla4xxx) Aug 13 01:17:16.421662 kernel: QLogic iSCSI HBA Driver Aug 13 01:17:16.437171 systemd[1]: Finished dracut-cmdline.service. Aug 13 01:17:16.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:16.437710 systemd[1]: Starting dracut-pre-udev.service... 
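The dracut-cmdline lines above echo the kernel command line that dracut splits into its rd.* options. A minimal sketch of that kind of parsing, assuming simple space-separated tokens (the real kernel parser also handles quoting); `parse_cmdline` is an illustrative helper, not dracut's code:

```python
# Illustrative sketch (not dracut's implementation): split a kernel command
# line like the one logged above into flags and key=value pairs. Repeated
# keys (e.g. console=) keep only the last value here; the kernel keeps all.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        # bare flags such as flatcar.autologin carry no value
        params[key] = value if sep else True
    return params

args = parse_cmdline("root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 "
                     "flatcar.autologin mount.usrflags=ro")
print(args["root"])   # LABEL=ROOT
```

Note that values can themselves contain `=`, as in `root=LABEL=ROOT`, which is why the sketch splits only on the first one.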
Aug 13 01:17:16.493308 kernel: raid6: avx2x4 gen() 48562 MB/s Aug 13 01:17:16.528306 kernel: raid6: avx2x4 xor() 21879 MB/s Aug 13 01:17:16.563267 kernel: raid6: avx2x2 gen() 53425 MB/s Aug 13 01:17:16.598268 kernel: raid6: avx2x2 xor() 31978 MB/s Aug 13 01:17:16.633305 kernel: raid6: avx2x1 gen() 45032 MB/s Aug 13 01:17:16.668308 kernel: raid6: avx2x1 xor() 27820 MB/s Aug 13 01:17:16.702234 kernel: raid6: sse2x4 gen() 21312 MB/s Aug 13 01:17:16.736270 kernel: raid6: sse2x4 xor() 11978 MB/s Aug 13 01:17:16.770305 kernel: raid6: sse2x2 gen() 21594 MB/s Aug 13 01:17:16.804267 kernel: raid6: sse2x2 xor() 13328 MB/s Aug 13 01:17:16.838267 kernel: raid6: sse2x1 gen() 18262 MB/s Aug 13 01:17:16.890256 kernel: raid6: sse2x1 xor() 8917 MB/s Aug 13 01:17:16.890273 kernel: raid6: using algorithm avx2x2 gen() 53425 MB/s Aug 13 01:17:16.890282 kernel: raid6: .... xor() 31978 MB/s, rmw enabled Aug 13 01:17:16.908524 kernel: raid6: using avx2x2 recovery algorithm Aug 13 01:17:16.955234 kernel: xor: automatically using best checksumming function avx Aug 13 01:17:17.036279 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 01:17:17.041100 systemd[1]: Finished dracut-pre-udev.service. Aug 13 01:17:17.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:17.048000 audit: BPF prog-id=7 op=LOAD Aug 13 01:17:17.048000 audit: BPF prog-id=8 op=LOAD Aug 13 01:17:17.050278 systemd[1]: Starting systemd-udevd.service... Aug 13 01:17:17.058440 systemd-udevd[475]: Using default interface naming scheme 'v252'. Aug 13 01:17:17.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:17.063364 systemd[1]: Started systemd-udevd.service. 
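The raid6 lines above are the kernel benchmarking each SIMD gen()/xor() variant and keeping the fastest generator (avx2x2 on this box). The selection step reduces to an argmax over the measured throughputs; the figures below are copied from the log:

```python
# gen() throughputs (MB/s) as measured in the raid6 benchmark logged above.
gen_mbps = {
    "avx2x4": 48562, "avx2x2": 53425, "avx2x1": 45032,
    "sse2x4": 21312, "sse2x2": 21594, "sse2x1": 18262,
}
# Keep the algorithm with the highest gen() throughput, as the kernel does.
best = max(gen_mbps, key=gen_mbps.get)
print(f"raid6: using algorithm {best} gen() {gen_mbps[best]} MB/s")
# → raid6: using algorithm avx2x2 gen() 53425 MB/s
```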
Aug 13 01:17:17.104352 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation Aug 13 01:17:17.080442 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 01:17:17.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:17.108260 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 01:17:17.123098 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 01:17:17.176803 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 01:17:17.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:17.205271 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 01:17:17.223238 kernel: ACPI: bus type USB registered Aug 13 01:17:17.223267 kernel: usbcore: registered new interface driver usbfs Aug 13 01:17:17.259366 kernel: usbcore: registered new interface driver hub Aug 13 01:17:17.259395 kernel: usbcore: registered new device driver usb Aug 13 01:17:17.278238 kernel: libata version 3.00 loaded. Aug 13 01:17:17.301240 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 01:17:17.301296 kernel: AES CTR mode by8 optimization enabled Aug 13 01:17:17.354459 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Aug 13 01:17:17.354504 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
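systemd-resolved's negative trust anchors listed earlier in the log are the RFC 6303 locally-served reverse zones. For octet-aligned private networks the zone name is just the covered octets reversed; a sketch under that assumption (it deliberately ignores non-octet-aligned prefixes like 172.16.0.0/12, which is why resolved lists sixteen separate /16 zones for it):

```python
import ipaddress

# Sketch: derive the in-addr.arpa zone for an octet-aligned IPv4 network,
# matching entries such as "10.in-addr.arpa" and "168.192.in-addr.arpa"
# in the negative trust anchor list above.
def reverse_zone(net: str) -> str:
    network = ipaddress.ip_network(net)
    # one DNS label per whole octet covered by the prefix, in reversed order
    octets = str(network.network_address).split(".")[: network.prefixlen // 8]
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_zone("10.0.0.0/8"))      # 10.in-addr.arpa
print(reverse_zone("192.168.0.0/16"))  # 168.192.in-addr.arpa
```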
Aug 13 01:17:17.356266 kernel: ahci 0000:00:17.0: version 3.0 Aug 13 01:17:18.004997 kernel: mlx5_core 0000:02:00.0: firmware version: 14.31.1014 Aug 13 01:17:18.327871 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Aug 13 01:17:18.328138 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Aug 13 01:17:18.328312 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Aug 13 01:17:18.328463 kernel: igb 0000:04:00.0: added PHC on eth0 Aug 13 01:17:18.328620 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Aug 13 01:17:18.328773 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:c4 Aug 13 01:17:18.328922 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Aug 13 01:17:18.329072 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Aug 13 01:17:18.329222 kernel: igb 0000:05:00.0: added PHC on eth1 Aug 13 01:17:18.329393 kernel: scsi host0: ahci Aug 13 01:17:18.329557 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Aug 13 01:17:18.329711 kernel: scsi host1: ahci Aug 13 01:17:18.329866 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:72:07:c5 Aug 13 01:17:18.330018 kernel: scsi host2: ahci Aug 13 01:17:18.330177 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Aug 13 01:17:18.330337 kernel: scsi host3: ahci Aug 13 01:17:18.330500 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Aug 13 01:17:18.330652 kernel: scsi host4: ahci Aug 13 01:17:18.330806 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Aug 13 01:17:18.330950 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Aug 13 01:17:18.331099 kernel: scsi host5: ahci Aug 13 01:17:18.331259 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Aug 13 01:17:18.331409 kernel: scsi host6: ahci Aug 13 01:17:18.331566 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Aug 13 01:17:18.331713 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Aug 13 01:17:18.331862 kernel: scsi host7: ahci Aug 13 01:17:18.332016 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Aug 13 01:17:18.332159 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 133 Aug 13 01:17:18.332182 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Aug 13 01:17:18.332334 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Aug 13 01:17:18.332481 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 133 Aug 13 01:17:18.332504 kernel: hub 1-0:1.0: USB hub found Aug 13 01:17:18.332679 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 133 Aug 13 01:17:18.332702 kernel: hub 1-0:1.0: 16 ports detected Aug 13 01:17:18.332860 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 133 Aug 13 01:17:18.332882 kernel: hub 2-0:1.0: USB hub found Aug 13 01:17:18.333046 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 133 Aug 13 01:17:18.333068 kernel: hub 2-0:1.0: 10 ports detected Aug 13 01:17:18.333238 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 133 Aug 13 01:17:18.333257 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 133 Aug 13 01:17:18.333273 kernel: ata8: SATA max UDMA/133 abar 
m2048@0x95516000 port 0x95516480 irq 133 Aug 13 01:17:18.333289 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Aug 13 01:17:18.333416 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Aug 13 01:17:18.333542 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Aug 13 01:17:18.333765 kernel: hub 1-14:1.0: USB hub found Aug 13 01:17:18.333924 kernel: hub 1-14:1.0: 4 ports detected Aug 13 01:17:18.334061 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Aug 13 01:17:18.334189 kernel: ata7: SATA link down (SStatus 0 SControl 300) Aug 13 01:17:18.334208 kernel: mlx5_core 0000:02:00.1: firmware version: 14.31.1014 Aug 13 01:17:19.038774 kernel: ata8: SATA link down (SStatus 0 SControl 300) Aug 13 01:17:19.038798 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Aug 13 01:17:19.039025 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 01:17:19.039052 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 01:17:19.039069 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 01:17:19.039082 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 01:17:19.039102 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Aug 13 01:17:19.039121 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Aug 13 01:17:19.039139 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Aug 13 01:17:19.039161 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Aug 13 01:17:19.039179 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Aug 13 01:17:19.039198 kernel: ata1.00: Features: NCQ-prio Aug 13 01:17:19.039217 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Aug 13 01:17:19.039244 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Aug 13 01:17:19.039569 kernel: ata2.00: Features: NCQ-prio Aug 13 
01:17:19.039589 kernel: ata1.00: configured for UDMA/133 Aug 13 01:17:19.039608 kernel: ata2.00: configured for UDMA/133 Aug 13 01:17:19.039627 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Aug 13 01:17:19.117014 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Aug 13 01:17:19.117135 kernel: ata1.00: Enabling discard_zeroes_data Aug 13 01:17:19.117147 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Aug 13 01:17:19.117217 kernel: ata2.00: Enabling discard_zeroes_data Aug 13 01:17:19.117226 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Aug 13 01:17:19.117332 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Aug 13 01:17:19.117437 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Aug 13 01:17:19.117530 kernel: sd 1:0:0:0: [sdb] Write Protect is off Aug 13 01:17:19.117620 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Aug 13 01:17:19.117707 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 01:17:19.117804 kernel: ata2.00: Enabling discard_zeroes_data Aug 13 01:17:19.117818 kernel: ata2.00: Enabling discard_zeroes_data Aug 13 01:17:19.117829 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Aug 13 01:17:19.117928 kernel: port_module: 9 callbacks suppressed Aug 13 01:17:19.117938 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Aug 13 01:17:19.117999 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 01:17:19.118058 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Aug 13 01:17:19.118116 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 01:17:19.118176 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Aug 13 01:17:19.118241 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 01:17:19.118304 kernel: ata1.00: Enabling 
discard_zeroes_data Aug 13 01:17:19.118312 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 01:17:19.118319 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 01:17:19.118326 kernel: GPT:9289727 != 937703087 Aug 13 01:17:19.118334 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Aug 13 01:17:19.118392 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 01:17:19.118399 kernel: GPT:9289727 != 937703087 Aug 13 01:17:19.118406 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 01:17:19.118412 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:17:19.118419 kernel: ata1.00: Enabling discard_zeroes_data Aug 13 01:17:19.118425 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 01:17:19.118483 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Aug 13 01:17:19.153005 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 01:17:19.232172 kernel: usbcore: registered new interface driver usbhid Aug 13 01:17:19.232187 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (656) Aug 13 01:17:19.232195 kernel: usbhid: USB HID core driver Aug 13 01:17:19.232202 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Aug 13 01:17:19.232209 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Aug 13 01:17:19.183916 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 01:17:19.242285 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
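The sd lines above report each Micron disk as "937703088 512-byte logical blocks: (480 GB/447 GiB)". Both figures come from the same byte count, quoted once in decimal and once in binary units:

```python
# Reproduce the capacity figures from the sd probe above.
sectors, sector_size = 937_703_088, 512
size_bytes = sectors * sector_size
gb = size_bytes / 1000**3   # decimal gigabytes, as drive vendors quote capacity
gib = size_bytes / 1024**3  # binary gibibytes, as most tools display it
print(f"({gb:.0f} GB/{gib:.0f} GiB)")   # (480 GB/447 GiB)
```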
Aug 13 01:17:19.370774 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Aug 13 01:17:19.370863 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Aug 13 01:17:19.370872 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Aug 13 01:17:19.269459 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 01:17:19.383406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 01:17:19.406552 systemd[1]: Starting disk-uuid.service... Aug 13 01:17:19.423597 disk-uuid[696]: Primary Header is updated. Aug 13 01:17:19.423597 disk-uuid[696]: Secondary Entries is updated. Aug 13 01:17:19.423597 disk-uuid[696]: Secondary Header is updated. Aug 13 01:17:19.494315 kernel: ata1.00: Enabling discard_zeroes_data Aug 13 01:17:19.494327 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:17:19.494334 kernel: ata1.00: Enabling discard_zeroes_data Aug 13 01:17:19.494340 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:17:19.519256 kernel: ata1.00: Enabling discard_zeroes_data Aug 13 01:17:19.539275 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:17:20.519091 kernel: ata1.00: Enabling discard_zeroes_data Aug 13 01:17:20.538835 disk-uuid[697]: The operation has completed successfully. Aug 13 01:17:20.548450 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:17:20.576069 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 01:17:20.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:20.576129 systemd[1]: Finished disk-uuid.service. 
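The earlier GPT warnings ("GPT:9289727 != 937703087") mean the backup GPT header was found at LBA 9289727, while on a 937703088-sector disk it belongs in the last sector; the disk-uuid "Secondary Header is updated" lines above are that backup header being rewritten to the correct place. The expected location is simply the total sector count minus one:

```python
# Sketch of the consistency check behind "GPT:9289727 != 937703087" above.
total_sectors = 937_703_088      # from the sd capacity line in this log
recorded_alt_lba = 9_289_727     # where the backup GPT header was actually found
expected_alt_lba = total_sectors - 1   # the backup header belongs in the last LBA
if recorded_alt_lba != expected_alt_lba:
    print(f"GPT:{recorded_alt_lba} != {expected_alt_lba}")
```

This mismatch is routine on machines imaged from a smaller disk image, which is why the kernel only warns and suggests repairing the table rather than refusing the disk.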
Aug 13 01:17:20.687360 kernel: audit: type=1130 audit(1755047840.582:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:20.687375 kernel: audit: type=1131 audit(1755047840.582:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:20.687383 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 01:17:20.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:20.612250 systemd[1]: Starting verity-setup.service... Aug 13 01:17:20.736912 systemd[1]: Found device dev-mapper-usr.device. Aug 13 01:17:20.747595 systemd[1]: Mounting sysusr-usr.mount... Aug 13 01:17:20.760695 systemd[1]: Finished verity-setup.service. Aug 13 01:17:20.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:20.824235 kernel: audit: type=1130 audit(1755047840.774:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:20.881732 systemd[1]: Mounted sysusr-usr.mount. Aug 13 01:17:20.895345 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 01:17:20.888609 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
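The "device-mapper: verity: sha256 using implementation \"sha256-avx2\"" line above is verity-setup activating the /usr Merkle-tree check against the `verity.usrhash` root hash from the kernel command line. At the leaf level, dm-verity (format version 1) hashes salt plus data for each block; a toy single-block illustration of that leaf step only, not the full tree walk:

```python
import hashlib

# Toy leaf-level dm-verity hash (format 1 hashes salt || data per block).
# The real device also verifies the intermediate hash blocks all the way
# up to the root hash passed as verity.usrhash on the kernel command line.
def leaf_hash(block: bytes, salt: bytes = b"") -> str:
    return hashlib.sha256(salt + block).hexdigest()

block = bytes(4096)               # one 4 KiB data block of zeros
digest = leaf_hash(block)
assert len(digest) == 64          # sha256 renders as 64 hex characters
```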
Aug 13 01:17:20.981071 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:17:20.981086 kernel: BTRFS info (device sda6): using free space tree Aug 13 01:17:20.981094 kernel: BTRFS info (device sda6): has skinny extents Aug 13 01:17:20.981101 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 01:17:20.889006 systemd[1]: Starting ignition-setup.service... Aug 13 01:17:20.908585 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 01:17:21.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:20.989702 systemd[1]: Finished ignition-setup.service. Aug 13 01:17:21.114047 kernel: audit: type=1130 audit(1755047841.005:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:21.114064 kernel: audit: type=1130 audit(1755047841.063:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:21.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:21.006598 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 01:17:21.145198 kernel: audit: type=1334 audit(1755047841.121:24): prog-id=9 op=LOAD Aug 13 01:17:21.121000 audit: BPF prog-id=9 op=LOAD Aug 13 01:17:21.064919 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 01:17:21.123149 systemd[1]: Starting systemd-networkd.service... 
Aug 13 01:17:21.161220 systemd-networkd[884]: lo: Link UP Aug 13 01:17:21.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:21.190320 ignition[873]: Ignition 2.14.0 Aug 13 01:17:21.244502 kernel: audit: type=1130 audit(1755047841.175:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:21.161222 systemd-networkd[884]: lo: Gained carrier Aug 13 01:17:21.190326 ignition[873]: Stage: fetch-offline Aug 13 01:17:21.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:21.161567 systemd-networkd[884]: Enumeration completed Aug 13 01:17:21.398016 kernel: audit: type=1130 audit(1755047841.264:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:21.398030 kernel: audit: type=1130 audit(1755047841.323:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:21.398038 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Aug 13 01:17:21.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:17:21.190360 ignition[873]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:17:21.434382 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Aug 13 01:17:21.161647 systemd[1]: Started systemd-networkd.service. Aug 13 01:17:21.190374 ignition[873]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Aug 13 01:17:21.162214 systemd-networkd[884]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:17:21.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:21.197661 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 01:17:21.176330 systemd[1]: Reached target network.target. Aug 13 01:17:21.487484 iscsid[907]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 01:17:21.487484 iscsid[907]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Aug 13 01:17:21.487484 iscsid[907]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 01:17:21.487484 iscsid[907]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 01:17:21.487484 iscsid[907]: If using hardware iscsi like qla4xxx this message can be ignored. 
Aug 13 01:17:21.487484 iscsid[907]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 01:17:21.487484 iscsid[907]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 01:17:21.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:21.197727 ignition[873]: parsed url from cmdline: "" Aug 13 01:17:21.202094 unknown[873]: fetched base config from "system" Aug 13 01:17:21.653398 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Aug 13 01:17:21.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:21.197730 ignition[873]: no config URL provided Aug 13 01:17:21.202098 unknown[873]: fetched user config from "system" Aug 13 01:17:21.197733 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:17:21.237860 systemd[1]: Starting iscsiuio.service... Aug 13 01:17:21.197756 ignition[873]: parsing config with SHA512: dabb7d436634c9d66e28e15cae49a24c03b05866e2c1139d89b5ce043a8fabb09cb0b78f6ba8018a555cf7ea75bcc05df830c6115514e430a8f0cfea83ce660a Aug 13 01:17:21.251524 systemd[1]: Started iscsiuio.service. Aug 13 01:17:21.202413 ignition[873]: fetch-offline: fetch-offline passed Aug 13 01:17:21.265544 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 01:17:21.202416 ignition[873]: POST message to Packet Timeline Aug 13 01:17:21.324468 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 01:17:21.202420 ignition[873]: POST Status error: resource requires networking Aug 13 01:17:21.324930 systemd[1]: Starting ignition-kargs.service... 
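The iscsid warnings above ask for an /etc/iscsi/initiatorname.iscsi containing a single `InitiatorName=iqn...` line. A minimal, hypothetical generator for that line; the date, reversed domain, and identifier below echo iscsid's own example and are illustrative, not this host's values:

```python
# Hypothetical helper: build the one-line initiatorname.iscsi content that
# iscsid asks for above. iqn format: iqn.<yyyy-mm>.<reversed domain>[:id].
def initiator_name(year_month: str, reversed_domain: str, identifier: str = "") -> str:
    iqn = f"iqn.{year_month}.{reversed_domain}"
    if identifier:
        iqn += f":{identifier}"
    return f"InitiatorName={iqn}\n"

print(initiator_name("2001-04", "com.redhat", "fc6"), end="")
# InitiatorName=iqn.2001-04.com.redhat:fc6
```

As the log itself notes, the file only matters for software iSCSI (iscsi_tcp, ib_iser) or partial offload; with hardware HBAs like qla4xxx the warning is harmless.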
Aug 13 01:17:21.202460 ignition[873]: Ignition finished successfully Aug 13 01:17:21.398936 systemd-networkd[884]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:17:21.402466 ignition[897]: Ignition 2.14.0 Aug 13 01:17:21.411929 systemd[1]: Starting iscsid.service... Aug 13 01:17:21.402470 ignition[897]: Stage: kargs Aug 13 01:17:21.442572 systemd[1]: Started iscsid.service. Aug 13 01:17:21.402526 ignition[897]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:17:21.456844 systemd[1]: Starting dracut-initqueue.service... Aug 13 01:17:21.402536 ignition[897]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Aug 13 01:17:21.476463 systemd[1]: Finished dracut-initqueue.service. Aug 13 01:17:21.403881 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 01:17:21.499352 systemd[1]: Reached target remote-fs-pre.target. Aug 13 01:17:21.406366 ignition[897]: kargs: kargs passed Aug 13 01:17:21.543385 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 01:17:21.406380 ignition[897]: POST message to Packet Timeline Aug 13 01:17:21.543435 systemd[1]: Reached target remote-fs.target. Aug 13 01:17:21.406408 ignition[897]: GET https://metadata.packet.net/metadata: attempt #1 Aug 13 01:17:21.571185 systemd[1]: Starting dracut-pre-mount.service... Aug 13 01:17:21.411521 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58653->[::1]:53: read: connection refused Aug 13 01:17:21.613488 systemd[1]: Finished dracut-pre-mount.service. Aug 13 01:17:21.612029 ignition[897]: GET https://metadata.packet.net/metadata: attempt #2 Aug 13 01:17:21.639746 systemd-networkd[884]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 01:17:21.612481 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52402->[::1]:53: read: connection refused Aug 13 01:17:21.668541 systemd-networkd[884]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:17:21.698864 systemd-networkd[884]: enp2s0f1np1: Link UP Aug 13 01:17:21.699310 systemd-networkd[884]: enp2s0f1np1: Gained carrier Aug 13 01:17:21.713739 systemd-networkd[884]: enp2s0f0np0: Link UP Aug 13 01:17:21.714146 systemd-networkd[884]: eno2: Link UP Aug 13 01:17:21.714546 systemd-networkd[884]: eno1: Link UP Aug 13 01:17:22.013159 ignition[897]: GET https://metadata.packet.net/metadata: attempt #3 Aug 13 01:17:22.014406 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47457->[::1]:53: read: connection refused Aug 13 01:17:22.462961 systemd-networkd[884]: enp2s0f0np0: Gained carrier Aug 13 01:17:22.471350 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Aug 13 01:17:22.506465 systemd-networkd[884]: enp2s0f0np0: DHCPv4 address 147.75.71.225/31, gateway 147.75.71.224 acquired from 145.40.83.140 Aug 13 01:17:22.814823 ignition[897]: GET https://metadata.packet.net/metadata: attempt #4 Aug 13 01:17:22.816269 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39904->[::1]:53: read: connection refused Aug 13 01:17:22.823443 systemd-networkd[884]: enp2s0f1np1: Gained IPv6LL Aug 13 01:17:24.167717 systemd-networkd[884]: enp2s0f0np0: Gained IPv6LL Aug 13 01:17:24.417487 ignition[897]: GET https://metadata.packet.net/metadata: attempt #5 Aug 13 01:17:24.418829 ignition[897]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46951->[::1]:53: read: connection refused Aug 13 01:17:27.622172 
ignition[897]: GET https://metadata.packet.net/metadata: attempt #6 Aug 13 01:17:28.687004 ignition[897]: GET result: OK Aug 13 01:17:29.538180 ignition[897]: Ignition finished successfully Aug 13 01:17:29.542691 systemd[1]: Finished ignition-kargs.service. Aug 13 01:17:29.632984 kernel: kauditd_printk_skb: 3 callbacks suppressed Aug 13 01:17:29.633002 kernel: audit: type=1130 audit(1755047849.554:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:29.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:29.563920 ignition[924]: Ignition 2.14.0 Aug 13 01:17:29.557691 systemd[1]: Starting ignition-disks.service... Aug 13 01:17:29.563923 ignition[924]: Stage: disks Aug 13 01:17:29.563979 ignition[924]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:17:29.563988 ignition[924]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Aug 13 01:17:29.565567 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 01:17:29.567121 ignition[924]: disks: disks passed Aug 13 01:17:29.567124 ignition[924]: POST message to Packet Timeline Aug 13 01:17:29.567138 ignition[924]: GET https://metadata.packet.net/metadata: attempt #1 Aug 13 01:17:30.922104 ignition[924]: GET result: OK Aug 13 01:17:31.367766 ignition[924]: Ignition finished successfully Aug 13 01:17:31.371045 systemd[1]: Finished ignition-disks.service. Aug 13 01:17:31.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:17:31.383987 systemd[1]: Reached target initrd-root-device.target. Aug 13 01:17:31.462457 kernel: audit: type=1130 audit(1755047851.382:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:31.447436 systemd[1]: Reached target local-fs-pre.target. Aug 13 01:17:31.447546 systemd[1]: Reached target local-fs.target. Aug 13 01:17:31.462578 systemd[1]: Reached target sysinit.target. Aug 13 01:17:31.485456 systemd[1]: Reached target basic.target. Aug 13 01:17:31.499285 systemd[1]: Starting systemd-fsck-root.service... Aug 13 01:17:31.517523 systemd-fsck[941]: ROOT: clean, 629/553520 files, 56027/553472 blocks Aug 13 01:17:31.531731 systemd[1]: Finished systemd-fsck-root.service. Aug 13 01:17:31.621667 kernel: audit: type=1130 audit(1755047851.539:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:31.621687 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 01:17:31.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:31.560953 systemd[1]: Mounting sysroot.mount... Aug 13 01:17:31.628950 systemd[1]: Mounted sysroot.mount. Aug 13 01:17:31.643576 systemd[1]: Reached target initrd-root-fs.target. Aug 13 01:17:31.660117 systemd[1]: Mounting sysroot-usr.mount... Aug 13 01:17:31.668272 systemd[1]: Starting flatcar-metadata-hostname.service... Aug 13 01:17:31.686923 systemd[1]: Starting flatcar-static-network.service... Aug 13 01:17:31.702375 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
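Ignition's metadata fetch above fails on attempts #1 through #5 (DNS still resolves against [::1] until the NICs gain carrier and DHCP completes) and succeeds on attempt #6. A generic retry-with-backoff sketch of such a loop; `fetch` is a stand-in for the real HTTP GET, and the exact delay schedule Ignition uses is an assumption, not taken from this log:

```python
import time

# Generic retry loop: call fetch() until it succeeds, sleeping with capped
# exponential backoff between failed attempts, as the widening gaps between
# the GET attempts in the log suggest. Raises on the final failure.
def retry(fetch, attempts=6, base_delay=1.0, cap=8.0, sleep=time.sleep):
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except OSError:
            if attempt == attempts:
                raise                  # give up after the final attempt
            sleep(min(base_delay * 2 ** (attempt - 1), cap))
```

The `sleep` parameter is injected only so the loop is testable without real delays.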
Aug 13 01:17:31.702437 systemd[1]: Reached target ignition-diskful.target. Aug 13 01:17:31.721797 systemd[1]: Mounted sysroot-usr.mount. Aug 13 01:17:31.746190 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 01:17:31.817349 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (954) Aug 13 01:17:31.817367 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:17:31.757199 systemd[1]: Starting initrd-setup-root.service... Aug 13 01:17:31.885339 kernel: BTRFS info (device sda6): using free space tree Aug 13 01:17:31.885427 kernel: BTRFS info (device sda6): has skinny extents Aug 13 01:17:31.885435 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 01:17:31.817637 systemd[1]: Finished initrd-setup-root.service. Aug 13 01:17:31.946434 kernel: audit: type=1130 audit(1755047851.892:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:31.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:31.946546 coreos-metadata[949]: Aug 13 01:17:31.822 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Aug 13 01:17:31.968495 coreos-metadata[948]: Aug 13 01:17:31.822 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Aug 13 01:17:31.987348 initrd-setup-root[959]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 01:17:31.894580 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 01:17:32.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:17:32.039428 initrd-setup-root[967]: cut: /sysroot/etc/group: No such file or directory Aug 13 01:17:32.069472 kernel: audit: type=1130 audit(1755047852.002:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:31.955854 systemd[1]: Starting ignition-mount.service... Aug 13 01:17:32.076528 initrd-setup-root[975]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 01:17:31.975828 systemd[1]: Starting sysroot-boot.service... Aug 13 01:17:32.093495 initrd-setup-root[983]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 01:17:31.987859 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Aug 13 01:17:32.114408 ignition[1022]: INFO : Ignition 2.14.0 Aug 13 01:17:32.114408 ignition[1022]: INFO : Stage: mount Aug 13 01:17:32.114408 ignition[1022]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:17:32.114408 ignition[1022]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Aug 13 01:17:32.114408 ignition[1022]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 01:17:32.114408 ignition[1022]: INFO : mount: mount passed Aug 13 01:17:32.114408 ignition[1022]: INFO : POST message to Packet Timeline Aug 13 01:17:32.114408 ignition[1022]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Aug 13 01:17:31.987909 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Aug 13 01:17:31.994409 systemd[1]: Finished sysroot-boot.service. Aug 13 01:17:32.898277 ignition[1022]: INFO : GET result: OK Aug 13 01:17:33.329107 ignition[1022]: INFO : Ignition finished successfully Aug 13 01:17:33.332011 systemd[1]: Finished ignition-mount.service. 
Aug 13 01:17:33.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:33.404427 kernel: audit: type=1130 audit(1755047853.345:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:33.657598 coreos-metadata[949]: Aug 13 01:17:33.657 INFO Fetch successful Aug 13 01:17:33.685955 coreos-metadata[948]: Aug 13 01:17:33.685 INFO Fetch successful Aug 13 01:17:33.735671 systemd[1]: flatcar-static-network.service: Deactivated successfully. Aug 13 01:17:33.735735 systemd[1]: Finished flatcar-static-network.service. Aug 13 01:17:33.868199 kernel: audit: type=1130 audit(1755047853.753:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:33.868293 kernel: audit: type=1131 audit(1755047853.753:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:33.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:33.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:17:33.868340 coreos-metadata[948]: Aug 13 01:17:33.743 INFO wrote hostname ci-3510.3.8-a-9864ec3500 to /sysroot/etc/hostname Aug 13 01:17:33.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:33.754581 systemd[1]: Finished flatcar-metadata-hostname.service. Aug 13 01:17:33.951353 kernel: audit: type=1130 audit(1755047853.876:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:33.877858 systemd[1]: Starting ignition-files.service... Aug 13 01:17:33.995318 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1037) Aug 13 01:17:33.995329 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:17:33.945106 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 01:17:34.045311 kernel: BTRFS info (device sda6): using free space tree Aug 13 01:17:34.045322 kernel: BTRFS info (device sda6): has skinny extents Aug 13 01:17:34.045329 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 01:17:34.080206 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Aug 13 01:17:34.097338 ignition[1056]: INFO : Ignition 2.14.0 Aug 13 01:17:34.097338 ignition[1056]: INFO : Stage: files Aug 13 01:17:34.097338 ignition[1056]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:17:34.097338 ignition[1056]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Aug 13 01:17:34.097338 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 01:17:34.097338 ignition[1056]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:17:34.097338 ignition[1056]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:17:34.097338 ignition[1056]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:17:34.099773 unknown[1056]: wrote ssh authorized keys file for user: core Aug 13 01:17:34.200400 ignition[1056]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:17:34.200400 ignition[1056]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:17:34.200400 ignition[1056]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:17:34.200400 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 01:17:34.200400 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 13 01:17:34.200400 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 01:17:34.279336 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 01:17:34.279336 ignition[1056]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 01:17:34.279336 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 01:17:34.593575 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 01:17:34.679341 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 01:17:34.679341 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 01:17:34.711462 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem589072337" Aug 13 01:17:34.711462 ignition[1056]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem589072337": device or resource busy Aug 13 01:17:34.964576 ignition[1056]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem589072337", trying btrfs: device or resource busy Aug 13 01:17:34.964576 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem589072337" Aug 13 01:17:34.964576 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem589072337" Aug 13 01:17:34.964576 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem589072337" Aug 13 01:17:34.964576 ignition[1056]: 
INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem589072337" Aug 13 01:17:34.964576 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Aug 13 01:17:34.964576 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:17:34.964576 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 01:17:35.101502 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Aug 13 01:17:35.392330 ignition[1056]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:17:35.392330 ignition[1056]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Aug 13 01:17:35.392330 ignition[1056]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Aug 13 01:17:35.392330 ignition[1056]: INFO : files: op(11): [started] processing unit "packet-phone-home.service" Aug 13 01:17:35.392330 ignition[1056]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service" Aug 13 01:17:35.392330 ignition[1056]: INFO : files: op(12): [started] processing unit "prepare-helm.service" Aug 13 01:17:35.474536 ignition[1056]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:17:35.474536 ignition[1056]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:17:35.474536 ignition[1056]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Aug 13 
01:17:35.474536 ignition[1056]: INFO : files: op(14): [started] setting preset to enabled for "packet-phone-home.service" Aug 13 01:17:35.474536 ignition[1056]: INFO : files: op(14): [finished] setting preset to enabled for "packet-phone-home.service" Aug 13 01:17:35.474536 ignition[1056]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Aug 13 01:17:35.474536 ignition[1056]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 01:17:35.474536 ignition[1056]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 01:17:35.474536 ignition[1056]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 01:17:35.474536 ignition[1056]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:17:35.474536 ignition[1056]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:17:35.474536 ignition[1056]: INFO : files: files passed Aug 13 01:17:35.474536 ignition[1056]: INFO : POST message to Packet Timeline Aug 13 01:17:35.474536 ignition[1056]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Aug 13 01:17:36.430357 ignition[1056]: INFO : GET result: OK Aug 13 01:17:36.926687 ignition[1056]: INFO : Ignition finished successfully Aug 13 01:17:36.929624 systemd[1]: Finished ignition-files.service. Aug 13 01:17:36.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:36.949282 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Aug 13 01:17:37.020474 kernel: audit: type=1130 audit(1755047856.942:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.010508 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 01:17:37.044485 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:17:37.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.010829 systemd[1]: Starting ignition-quench.service... Aug 13 01:17:37.234631 kernel: audit: type=1130 audit(1755047857.053:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.234649 kernel: audit: type=1130 audit(1755047857.120:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.234658 kernel: audit: type=1131 audit(1755047857.120:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Aug 13 01:17:37.027601 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 01:17:37.054653 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:17:37.054730 systemd[1]: Finished ignition-quench.service. Aug 13 01:17:37.398134 kernel: audit: type=1130 audit(1755047857.274:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.398148 kernel: audit: type=1131 audit(1755047857.274:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.121516 systemd[1]: Reached target ignition-complete.target. Aug 13 01:17:37.243866 systemd[1]: Starting initrd-parse-etc.service... Aug 13 01:17:37.265111 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:17:37.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.265164 systemd[1]: Finished initrd-parse-etc.service. Aug 13 01:17:37.516485 kernel: audit: type=1130 audit(1755047857.444:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:17:37.275908 systemd[1]: Reached target initrd-fs.target. Aug 13 01:17:37.406476 systemd[1]: Reached target initrd.target. Aug 13 01:17:37.406535 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 01:17:37.406907 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 01:17:37.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.427614 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 01:17:37.648462 kernel: audit: type=1131 audit(1755047857.573:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.446085 systemd[1]: Starting initrd-cleanup.service... Aug 13 01:17:37.512359 systemd[1]: Stopped target nss-lookup.target. Aug 13 01:17:37.525509 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 01:17:37.541513 systemd[1]: Stopped target timers.target. Aug 13 01:17:37.555537 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:17:37.555637 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 01:17:37.574659 systemd[1]: Stopped target initrd.target. Aug 13 01:17:37.641503 systemd[1]: Stopped target basic.target. Aug 13 01:17:37.661593 systemd[1]: Stopped target ignition-complete.target. Aug 13 01:17:37.677525 systemd[1]: Stopped target ignition-diskful.target. Aug 13 01:17:37.696574 systemd[1]: Stopped target initrd-root-device.target. Aug 13 01:17:37.711612 systemd[1]: Stopped target remote-fs.target. Aug 13 01:17:37.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:17:37.729782 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 01:17:37.915485 kernel: audit: type=1131 audit(1755047857.828:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.745871 systemd[1]: Stopped target sysinit.target. Aug 13 01:17:37.985307 kernel: audit: type=1131 audit(1755047857.923:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.761868 systemd[1]: Stopped target local-fs.target. Aug 13 01:17:37.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.777847 systemd[1]: Stopped target local-fs-pre.target. Aug 13 01:17:37.795989 systemd[1]: Stopped target swap.target. Aug 13 01:17:37.811735 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:17:37.812114 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 01:17:37.830076 systemd[1]: Stopped target cryptsetup.target. Aug 13 01:17:38.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.906545 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Aug 13 01:17:38.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.906627 systemd[1]: Stopped dracut-initqueue.service. Aug 13 01:17:38.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.924622 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:17:38.130299 ignition[1100]: INFO : Ignition 2.14.0 Aug 13 01:17:38.130299 ignition[1100]: INFO : Stage: umount Aug 13 01:17:38.130299 ignition[1100]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:17:38.130299 ignition[1100]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Aug 13 01:17:38.130299 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Aug 13 01:17:38.130299 ignition[1100]: INFO : umount: umount passed Aug 13 01:17:38.130299 ignition[1100]: INFO : POST message to Packet Timeline Aug 13 01:17:38.130299 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Aug 13 01:17:38.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:38.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:17:38.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:38.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:37.924698 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 01:17:38.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:38.271613 iscsid[907]: iscsid shutting down. Aug 13 01:17:37.993653 systemd[1]: Stopped target paths.target. Aug 13 01:17:38.007531 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:17:38.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:38.011479 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 01:17:38.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:38.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:38.023548 systemd[1]: Stopped target slices.target. Aug 13 01:17:38.037535 systemd[1]: Stopped target sockets.target. Aug 13 01:17:38.055688 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:17:38.055821 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Aug 13 01:17:38.072716 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:17:38.072911 systemd[1]: Stopped ignition-files.service. Aug 13 01:17:38.089962 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 01:17:38.090365 systemd[1]: Stopped flatcar-metadata-hostname.service. Aug 13 01:17:38.109028 systemd[1]: Stopping ignition-mount.service... Aug 13 01:17:38.120444 systemd[1]: Stopping iscsid.service... Aug 13 01:17:38.137418 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:17:38.137527 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 01:17:38.159528 systemd[1]: Stopping sysroot-boot.service... Aug 13 01:17:38.170487 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:17:38.170697 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 01:17:38.203970 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:17:38.204353 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 01:17:38.230981 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:17:38.232775 systemd[1]: iscsid.service: Deactivated successfully. Aug 13 01:17:38.233022 systemd[1]: Stopped iscsid.service. Aug 13 01:17:38.246732 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:17:38.247042 systemd[1]: Stopped sysroot-boot.service. Aug 13 01:17:38.263730 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:17:38.264002 systemd[1]: Closed iscsid.socket. Aug 13 01:17:38.278675 systemd[1]: Stopping iscsiuio.service... Aug 13 01:17:38.293924 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 01:17:38.294161 systemd[1]: Stopped iscsiuio.service. Aug 13 01:17:38.308275 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:17:38.308502 systemd[1]: Finished initrd-cleanup.service. Aug 13 01:17:38.328741 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:17:38.328833 systemd[1]: Closed iscsiuio.socket. 
Aug 13 01:17:39.044289 ignition[1100]: INFO : GET result: OK Aug 13 01:17:40.001446 ignition[1100]: INFO : Ignition finished successfully Aug 13 01:17:40.004548 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:17:40.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:40.004819 systemd[1]: Stopped ignition-mount.service. Aug 13 01:17:40.018768 systemd[1]: Stopped target network.target. Aug 13 01:17:40.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:40.034532 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:17:40.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:40.034708 systemd[1]: Stopped ignition-disks.service. Aug 13 01:17:40.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:40.049575 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:17:40.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:40.049705 systemd[1]: Stopped ignition-kargs.service. Aug 13 01:17:40.064566 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:17:40.064703 systemd[1]: Stopped ignition-setup.service. 
Aug 13 01:17:40.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.081786 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 01:17:40.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.159000 audit: BPF prog-id=6 op=UNLOAD
Aug 13 01:17:40.081939 systemd[1]: Stopped initrd-setup-root.service.
Aug 13 01:17:40.098118 systemd[1]: Stopping systemd-networkd.service...
Aug 13 01:17:40.109379 systemd-networkd[884]: enp2s0f1np1: DHCPv6 lease lost
Aug 13 01:17:40.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.113702 systemd[1]: Stopping systemd-resolved.service...
Aug 13 01:17:40.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.119401 systemd-networkd[884]: enp2s0f0np0: DHCPv6 lease lost
Aug 13 01:17:40.234000 audit: BPF prog-id=9 op=UNLOAD
Aug 13 01:17:40.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.128102 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 01:17:40.128381 systemd[1]: Stopped systemd-resolved.service.
Aug 13 01:17:40.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.145039 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 01:17:40.145337 systemd[1]: Stopped systemd-networkd.service.
Aug 13 01:17:40.160904 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 01:17:40.161039 systemd[1]: Closed systemd-networkd.socket.
Aug 13 01:17:40.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.181020 systemd[1]: Stopping network-cleanup.service...
Aug 13 01:17:40.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.194464 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 01:17:40.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.194600 systemd[1]: Stopped parse-ip-for-networkd.service.
Aug 13 01:17:40.210630 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 01:17:40.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.210768 systemd[1]: Stopped systemd-sysctl.service.
Aug 13 01:17:40.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.227968 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 01:17:40.228117 systemd[1]: Stopped systemd-modules-load.service.
Aug 13 01:17:40.246027 systemd[1]: Stopping systemd-udevd.service...
Aug 13 01:17:40.265324 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 01:17:40.266818 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 01:17:40.267172 systemd[1]: Stopped systemd-udevd.service.
Aug 13 01:17:40.281003 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 01:17:40.281136 systemd[1]: Closed systemd-udevd-control.socket.
Aug 13 01:17:40.294566 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 01:17:40.294672 systemd[1]: Closed systemd-udevd-kernel.socket.
Aug 13 01:17:40.311466 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 01:17:40.311502 systemd[1]: Stopped dracut-pre-udev.service.
Aug 13 01:17:40.335514 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 01:17:40.335561 systemd[1]: Stopped dracut-cmdline.service.
Aug 13 01:17:40.350369 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 01:17:40.350412 systemd[1]: Stopped dracut-cmdline-ask.service.
Aug 13 01:17:40.367643 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Aug 13 01:17:40.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:40.385420 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:17:40.385572 systemd[1]: Stopped systemd-vconsole-setup.service.
Aug 13 01:17:40.403558 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 01:17:40.403792 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Aug 13 01:17:40.560082 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 01:17:40.560381 systemd[1]: Stopped network-cleanup.service.
Aug 13 01:17:40.572847 systemd[1]: Reached target initrd-switch-root.target.
Aug 13 01:17:40.592456 systemd[1]: Starting initrd-switch-root.service...
Aug 13 01:17:40.634576 systemd[1]: Switching root.
Aug 13 01:17:40.688182 systemd-journald[269]: Journal stopped
Aug 13 01:17:44.574537 systemd-journald[269]: Received SIGTERM from PID 1 (n/a).
Aug 13 01:17:44.574551 kernel: SELinux: Class mctp_socket not defined in policy.
Aug 13 01:17:44.574560 kernel: SELinux: Class anon_inode not defined in policy.
Aug 13 01:17:44.574566 kernel: SELinux: the above unknown classes and permissions will be allowed
Aug 13 01:17:44.574571 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 01:17:44.574576 kernel: SELinux: policy capability open_perms=1
Aug 13 01:17:44.574582 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 01:17:44.574588 kernel: SELinux: policy capability always_check_network=0
Aug 13 01:17:44.574593 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 01:17:44.574599 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 01:17:44.574605 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 01:17:44.574610 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 01:17:44.574616 systemd[1]: Successfully loaded SELinux policy in 320.942ms.
Aug 13 01:17:44.574622 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.996ms.
Aug 13 01:17:44.574630 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 01:17:44.574637 systemd[1]: Detected architecture x86-64.
Aug 13 01:17:44.574643 systemd[1]: Detected first boot.
Aug 13 01:17:44.574649 systemd[1]: Hostname set to .
Aug 13 01:17:44.574655 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:17:44.574661 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Aug 13 01:17:44.574667 systemd[1]: Populated /etc with preset unit settings.
Aug 13 01:17:44.574674 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Aug 13 01:17:44.574680 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Aug 13 01:17:44.574687 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:17:44.574694 kernel: kauditd_printk_skb: 49 callbacks suppressed
Aug 13 01:17:44.574699 kernel: audit: type=1334 audit(1755047863.085:92): prog-id=12 op=LOAD
Aug 13 01:17:44.574705 kernel: audit: type=1334 audit(1755047863.085:93): prog-id=3 op=UNLOAD
Aug 13 01:17:44.574712 kernel: audit: type=1334 audit(1755047863.129:94): prog-id=13 op=LOAD
Aug 13 01:17:44.574718 kernel: audit: type=1334 audit(1755047863.174:95): prog-id=14 op=LOAD
Aug 13 01:17:44.574723 kernel: audit: type=1334 audit(1755047863.174:96): prog-id=4 op=UNLOAD
Aug 13 01:17:44.574729 kernel: audit: type=1334 audit(1755047863.174:97): prog-id=5 op=UNLOAD
Aug 13 01:17:44.574735 kernel: audit: type=1334 audit(1755047863.239:98): prog-id=15 op=LOAD
Aug 13 01:17:44.574740 kernel: audit: type=1334 audit(1755047863.239:99): prog-id=12 op=UNLOAD
Aug 13 01:17:44.574746 kernel: audit: type=1334 audit(1755047863.280:100): prog-id=16 op=LOAD
Aug 13 01:17:44.574751 kernel: audit: type=1334 audit(1755047863.300:101): prog-id=17 op=LOAD
Aug 13 01:17:44.574757 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 01:17:44.574765 systemd[1]: Stopped initrd-switch-root.service.
Aug 13 01:17:44.574771 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:17:44.574778 systemd[1]: Created slice system-addon\x2dconfig.slice.
Aug 13 01:17:44.574784 systemd[1]: Created slice system-addon\x2drun.slice.
Aug 13 01:17:44.574792 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Aug 13 01:17:44.574799 systemd[1]: Created slice system-getty.slice.
Aug 13 01:17:44.574805 systemd[1]: Created slice system-modprobe.slice.
Aug 13 01:17:44.574811 systemd[1]: Created slice system-serial\x2dgetty.slice.
Aug 13 01:17:44.574818 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Aug 13 01:17:44.574825 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Aug 13 01:17:44.574832 systemd[1]: Created slice user.slice.
Aug 13 01:17:44.574838 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 01:17:44.574844 systemd[1]: Started systemd-ask-password-wall.path.
Aug 13 01:17:44.574851 systemd[1]: Set up automount boot.automount.
Aug 13 01:17:44.574857 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Aug 13 01:17:44.574863 systemd[1]: Stopped target initrd-switch-root.target.
Aug 13 01:17:44.574869 systemd[1]: Stopped target initrd-fs.target.
Aug 13 01:17:44.574877 systemd[1]: Stopped target initrd-root-fs.target.
Aug 13 01:17:44.574883 systemd[1]: Reached target integritysetup.target.
Aug 13 01:17:44.574890 systemd[1]: Reached target remote-cryptsetup.target.
Aug 13 01:17:44.574896 systemd[1]: Reached target remote-fs.target.
Aug 13 01:17:44.574902 systemd[1]: Reached target slices.target.
Aug 13 01:17:44.574909 systemd[1]: Reached target swap.target.
Aug 13 01:17:44.574915 systemd[1]: Reached target torcx.target.
Aug 13 01:17:44.574923 systemd[1]: Reached target veritysetup.target.
Aug 13 01:17:44.574929 systemd[1]: Listening on systemd-coredump.socket.
Aug 13 01:17:44.574935 systemd[1]: Listening on systemd-initctl.socket.
Aug 13 01:17:44.574942 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 01:17:44.574949 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 01:17:44.574956 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 01:17:44.574963 systemd[1]: Listening on systemd-userdbd.socket.
Aug 13 01:17:44.574969 systemd[1]: Mounting dev-hugepages.mount...
Aug 13 01:17:44.574976 systemd[1]: Mounting dev-mqueue.mount...
Aug 13 01:17:44.574983 systemd[1]: Mounting media.mount...
Aug 13 01:17:44.574989 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:17:44.574996 systemd[1]: Mounting sys-kernel-debug.mount...
Aug 13 01:17:44.575002 systemd[1]: Mounting sys-kernel-tracing.mount...
Aug 13 01:17:44.575009 systemd[1]: Mounting tmp.mount...
Aug 13 01:17:44.575016 systemd[1]: Starting flatcar-tmpfiles.service...
Aug 13 01:17:44.575023 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 01:17:44.575030 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 01:17:44.575036 systemd[1]: Starting modprobe@configfs.service...
Aug 13 01:17:44.575043 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:17:44.575049 systemd[1]: Starting modprobe@drm.service...
Aug 13 01:17:44.575056 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:17:44.575062 systemd[1]: Starting modprobe@fuse.service...
Aug 13 01:17:44.575069 kernel: fuse: init (API version 7.34)
Aug 13 01:17:44.575076 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:17:44.575082 kernel: loop: module loaded
Aug 13 01:17:44.575088 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 01:17:44.575095 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 01:17:44.575102 systemd[1]: Stopped systemd-fsck-root.service.
Aug 13 01:17:44.575108 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 01:17:44.575115 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 01:17:44.575121 systemd[1]: Stopped systemd-journald.service.
Aug 13 01:17:44.575128 systemd[1]: Starting systemd-journald.service...
Aug 13 01:17:44.575135 systemd[1]: Starting systemd-modules-load.service...
Aug 13 01:17:44.575143 systemd-journald[1251]: Journal started
Aug 13 01:17:44.575168 systemd-journald[1251]: Runtime Journal (/run/log/journal/6dbe60a978644eb897a85f0e8deca9ab) is 8.0M, max 640.1M, 632.1M free.
Aug 13 01:17:41.152000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 01:17:41.451000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Aug 13 01:17:41.453000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 01:17:41.453000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Aug 13 01:17:41.453000 audit: BPF prog-id=10 op=LOAD
Aug 13 01:17:41.453000 audit: BPF prog-id=10 op=UNLOAD
Aug 13 01:17:41.453000 audit: BPF prog-id=11 op=LOAD
Aug 13 01:17:41.453000 audit: BPF prog-id=11 op=UNLOAD
Aug 13 01:17:41.521000 audit[1140]: AVC avc: denied { associate } for pid=1140 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Aug 13 01:17:41.521000 audit[1140]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a78e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1123 pid=1140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 01:17:41.521000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Aug 13 01:17:41.546000 audit[1140]: AVC avc: denied { associate } for pid=1140 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Aug 13 01:17:41.546000 audit[1140]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a79c9 a2=1ed a3=0 items=2 ppid=1123 pid=1140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 01:17:41.546000 audit: CWD cwd="/"
Aug 13 01:17:41.546000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:17:41.546000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 01:17:41.546000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Aug 13 01:17:43.085000 audit: BPF prog-id=12 op=LOAD
Aug 13 01:17:43.085000 audit: BPF prog-id=3 op=UNLOAD
Aug 13 01:17:43.129000 audit: BPF prog-id=13 op=LOAD
Aug 13 01:17:43.174000 audit: BPF prog-id=14 op=LOAD
Aug 13 01:17:43.174000 audit: BPF prog-id=4 op=UNLOAD
Aug 13 01:17:43.174000 audit: BPF prog-id=5 op=UNLOAD
Aug 13 01:17:43.239000 audit: BPF prog-id=15 op=LOAD
Aug 13 01:17:43.239000 audit: BPF prog-id=12 op=UNLOAD
Aug 13 01:17:43.280000 audit: BPF prog-id=16 op=LOAD
Aug 13 01:17:43.300000 audit: BPF prog-id=17 op=LOAD
Aug 13 01:17:43.300000 audit: BPF prog-id=13 op=UNLOAD
Aug 13 01:17:43.300000 audit: BPF prog-id=14 op=UNLOAD
Aug 13 01:17:43.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:43.360000 audit: BPF prog-id=15 op=UNLOAD
Aug 13 01:17:43.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:43.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.536000 audit: BPF prog-id=18 op=LOAD
Aug 13 01:17:44.537000 audit: BPF prog-id=19 op=LOAD
Aug 13 01:17:44.537000 audit: BPF prog-id=20 op=LOAD
Aug 13 01:17:44.537000 audit: BPF prog-id=16 op=UNLOAD
Aug 13 01:17:44.537000 audit: BPF prog-id=17 op=UNLOAD
Aug 13 01:17:44.570000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Aug 13 01:17:44.570000 audit[1251]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd1b3afed0 a2=4000 a3=7ffd1b3aff6c items=0 ppid=1 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 01:17:44.570000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Aug 13 01:17:43.084852 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 01:17:41.519750 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Aug 13 01:17:43.084859 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Aug 13 01:17:41.520202 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Aug 13 01:17:43.302097 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 01:17:41.520218 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Aug 13 01:17:41.520247 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Aug 13 01:17:41.520255 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=debug msg="skipped missing lower profile" missing profile=oem
Aug 13 01:17:41.520279 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Aug 13 01:17:41.520289 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Aug 13 01:17:41.520445 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Aug 13 01:17:41.520476 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Aug 13 01:17:41.520486 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Aug 13 01:17:41.521729 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Aug 13 01:17:41.521761 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Aug 13 01:17:41.521779 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Aug 13 01:17:41.521791 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Aug 13 01:17:41.521805 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Aug 13 01:17:41.521816 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:41Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Aug 13 01:17:42.731187 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:42Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 01:17:42.731400 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:42Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 01:17:42.731457 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:42Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 01:17:42.731554 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:42Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Aug 13 01:17:42.731584 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:42Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Aug 13 01:17:42.731619 /usr/lib/systemd/system-generators/torcx-generator[1140]: time="2025-08-13T01:17:42Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Aug 13 01:17:44.600459 systemd[1]: Starting systemd-network-generator.service...
Aug 13 01:17:44.622314 systemd[1]: Starting systemd-remount-fs.service...
Aug 13 01:17:44.644279 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 01:17:44.676772 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 01:17:44.676793 systemd[1]: Stopped verity-setup.service.
Aug 13 01:17:44.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.711289 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:17:44.726435 systemd[1]: Started systemd-journald.service.
Aug 13 01:17:44.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.733796 systemd[1]: Mounted dev-hugepages.mount.
Aug 13 01:17:44.741513 systemd[1]: Mounted dev-mqueue.mount.
Aug 13 01:17:44.748503 systemd[1]: Mounted media.mount.
Aug 13 01:17:44.755503 systemd[1]: Mounted sys-kernel-debug.mount.
Aug 13 01:17:44.764517 systemd[1]: Mounted sys-kernel-tracing.mount.
Aug 13 01:17:44.773489 systemd[1]: Mounted tmp.mount.
Aug 13 01:17:44.780546 systemd[1]: Finished flatcar-tmpfiles.service.
Aug 13 01:17:44.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.789658 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 01:17:44.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.798645 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 01:17:44.798757 systemd[1]: Finished modprobe@configfs.service.
Aug 13 01:17:44.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.807658 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:17:44.807799 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:17:44.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.816824 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:17:44.817018 systemd[1]: Finished modprobe@drm.service.
Aug 13 01:17:44.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.826096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:17:44.826429 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:17:44.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.835099 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 01:17:44.835445 systemd[1]: Finished modprobe@fuse.service.
Aug 13 01:17:44.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.844102 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:17:44.844443 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:17:44.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.853106 systemd[1]: Finished systemd-modules-load.service.
Aug 13 01:17:44.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.862082 systemd[1]: Finished systemd-network-generator.service.
Aug 13 01:17:44.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 01:17:44.871075 systemd[1]: Finished systemd-remount-fs.service.
Aug 13 01:17:44.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:44.880065 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 01:17:44.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:44.889757 systemd[1]: Reached target network-pre.target. Aug 13 01:17:44.901188 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 01:17:44.909990 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 01:17:44.917460 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:17:44.919201 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 01:17:44.926914 systemd[1]: Starting systemd-journal-flush.service... Aug 13 01:17:44.930340 systemd-journald[1251]: Time spent on flushing to /var/log/journal/6dbe60a978644eb897a85f0e8deca9ab is 15.650ms for 1598 entries. Aug 13 01:17:44.930340 systemd-journald[1251]: System Journal (/var/log/journal/6dbe60a978644eb897a85f0e8deca9ab) is 8.0M, max 195.6M, 187.6M free. Aug 13 01:17:44.965029 systemd-journald[1251]: Received client request to flush runtime journal. Aug 13 01:17:44.943372 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:17:44.943861 systemd[1]: Starting systemd-random-seed.service... Aug 13 01:17:44.954343 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:17:44.954844 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:17:44.961853 systemd[1]: Starting systemd-sysusers.service... 
Aug 13 01:17:44.968850 systemd[1]: Starting systemd-udev-settle.service... Aug 13 01:17:44.976649 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 01:17:44.984421 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 01:17:44.992467 systemd[1]: Finished systemd-journal-flush.service. Aug 13 01:17:44.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:45.000445 systemd[1]: Finished systemd-random-seed.service. Aug 13 01:17:45.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:45.008487 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:17:45.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:45.016444 systemd[1]: Finished systemd-sysusers.service. Aug 13 01:17:45.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:45.025428 systemd[1]: Reached target first-boot-complete.target. Aug 13 01:17:45.033582 udevadm[1267]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 01:17:45.251251 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 01:17:45.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Aug 13 01:17:45.259000 audit: BPF prog-id=21 op=LOAD Aug 13 01:17:45.259000 audit: BPF prog-id=22 op=LOAD Aug 13 01:17:45.259000 audit: BPF prog-id=7 op=UNLOAD Aug 13 01:17:45.259000 audit: BPF prog-id=8 op=UNLOAD Aug 13 01:17:45.261125 systemd[1]: Starting systemd-udevd.service... Aug 13 01:17:45.275703 systemd-udevd[1268]: Using default interface naming scheme 'v252'. Aug 13 01:17:45.291893 systemd[1]: Started systemd-udevd.service. Aug 13 01:17:45.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:45.302514 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Aug 13 01:17:45.301000 audit: BPF prog-id=23 op=LOAD Aug 13 01:17:45.303674 systemd[1]: Starting systemd-networkd.service... Aug 13 01:17:45.323240 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:17:45.323287 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Aug 13 01:17:45.338000 audit: BPF prog-id=24 op=LOAD Aug 13 01:17:45.340299 kernel: ACPI: button: Sleep Button [SLPB] Aug 13 01:17:45.352000 audit: BPF prog-id=25 op=LOAD Aug 13 01:17:45.370551 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 01:17:45.369000 audit: BPF prog-id=26 op=LOAD Aug 13 01:17:45.371236 kernel: IPMI message handler: version 39.2 Aug 13 01:17:45.371413 systemd[1]: Starting systemd-userdbd.service... 
Aug 13 01:17:45.385258 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:17:45.347000 audit[1337]: AVC avc: denied { confidentiality } for pid=1337 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 01:17:45.417243 kernel: ipmi device interface Aug 13 01:17:45.417304 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Aug 13 01:17:45.470583 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Aug 13 01:17:45.470683 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Aug 13 01:17:45.470769 kernel: ipmi_si: IPMI System Interface driver Aug 13 01:17:45.473296 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 01:17:45.500077 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Aug 13 01:17:45.500231 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Aug 13 01:17:45.500277 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Aug 13 01:17:45.500291 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Aug 13 01:17:45.628268 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Aug 13 01:17:45.628399 kernel: iTCO_vendor_support: vendor-support=0 Aug 13 01:17:45.628420 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Aug 13 01:17:45.628530 kernel: ipmi_si: Adding ACPI-specified kcs state machine Aug 13 01:17:45.347000 audit[1337]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d6186c1d30 a1=4d9cc a2=7f332939abc5 a3=5 items=42 ppid=1268 pid=1337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:17:45.347000 audit: CWD cwd="/" Aug 13 01:17:45.347000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=1 name=(null) inode=20950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=2 name=(null) inode=20950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=3 name=(null) inode=20951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=4 name=(null) inode=20950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=5 name=(null) inode=20952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=6 name=(null) inode=20950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=7 name=(null) inode=20953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=8 name=(null) inode=20953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=9 name=(null) inode=20954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=10 name=(null) inode=20953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=11 name=(null) inode=20955 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=12 name=(null) inode=20953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=13 name=(null) inode=20956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=14 name=(null) inode=20953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=15 name=(null) inode=20957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=16 name=(null) inode=20953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=17 name=(null) inode=20958 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=18 name=(null) inode=20950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=19 name=(null) inode=20959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=20 name=(null) inode=20959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=21 name=(null) inode=20960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=22 name=(null) inode=20959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=23 name=(null) inode=20961 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=24 name=(null) inode=20959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=25 name=(null) inode=20962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=26 name=(null) inode=20959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=27 name=(null) inode=20963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 
audit: PATH item=28 name=(null) inode=20959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=29 name=(null) inode=20964 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=30 name=(null) inode=20950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=31 name=(null) inode=20965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=32 name=(null) inode=20965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=33 name=(null) inode=20966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=34 name=(null) inode=20965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=35 name=(null) inode=20967 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=36 name=(null) inode=20965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=37 name=(null) inode=20968 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=38 name=(null) inode=20965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=39 name=(null) inode=20969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=40 name=(null) inode=20965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PATH item=41 name=(null) inode=20970 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:17:45.347000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 01:17:45.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:45.577567 systemd[1]: Started systemd-userdbd.service. Aug 13 01:17:45.645771 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Aug 13 01:17:45.645940 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Aug 13 01:17:45.645955 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Aug 13 01:17:45.728240 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Aug 13 01:17:45.728398 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
Aug 13 01:17:45.803133 kernel: intel_rapl_common: Found RAPL domain package Aug 13 01:17:45.803185 kernel: intel_rapl_common: Found RAPL domain core Aug 13 01:17:45.803199 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Aug 13 01:17:45.808617 systemd-networkd[1313]: bond0: netdev ready Aug 13 01:17:45.811823 systemd-networkd[1313]: lo: Link UP Aug 13 01:17:45.811826 systemd-networkd[1313]: lo: Gained carrier Aug 13 01:17:45.812440 systemd-networkd[1313]: Enumeration completed Aug 13 01:17:45.812503 systemd[1]: Started systemd-networkd.service. Aug 13 01:17:45.812813 systemd-networkd[1313]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Aug 13 01:17:45.826185 systemd-networkd[1313]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d7:6d:f3.network. Aug 13 01:17:45.826235 kernel: intel_rapl_common: Found RAPL domain dram Aug 13 01:17:45.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:45.903236 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Aug 13 01:17:45.921263 kernel: ipmi_ssif: IPMI SSIF Interface driver Aug 13 01:17:46.818270 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Aug 13 01:17:46.841276 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Aug 13 01:17:46.843882 systemd-networkd[1313]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d7:6d:f2.network. Aug 13 01:17:46.868239 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Aug 13 01:17:46.871578 systemd[1]: Finished systemd-udev-settle.service. 
Aug 13 01:17:46.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:46.882279 systemd[1]: Starting lvm2-activation-early.service... Aug 13 01:17:46.923205 lvm[1373]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:17:46.966673 systemd[1]: Finished lvm2-activation-early.service. Aug 13 01:17:46.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:46.975397 systemd[1]: Reached target cryptsetup.target. Aug 13 01:17:46.996265 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Aug 13 01:17:47.004871 systemd[1]: Starting lvm2-activation.service... Aug 13 01:17:47.006696 lvm[1374]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:17:47.043712 systemd[1]: Finished lvm2-activation.service. Aug 13 01:17:47.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.052373 systemd[1]: Reached target local-fs-pre.target. Aug 13 01:17:47.060288 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:17:47.060311 systemd[1]: Reached target local-fs.target. Aug 13 01:17:47.068322 systemd[1]: Reached target machines.target. Aug 13 01:17:47.076889 systemd[1]: Starting ldconfig.service... Aug 13 01:17:47.083623 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Aug 13 01:17:47.083643 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:17:47.084150 systemd[1]: Starting systemd-boot-update.service... Aug 13 01:17:47.091858 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 01:17:47.101791 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 01:17:47.102553 systemd[1]: Starting systemd-sysext.service... Aug 13 01:17:47.102768 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1376 (bootctl) Aug 13 01:17:47.103581 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 01:17:47.113985 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 01:17:47.122997 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 01:17:47.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.135616 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 01:17:47.135856 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 01:17:47.192249 kernel: loop0: detected capacity change from 0 to 229808 Aug 13 01:17:47.228233 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Aug 13 01:17:47.260502 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:17:47.260896 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 01:17:47.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:17:47.294235 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:17:47.306053 systemd-fsck[1385]: fsck.fat 4.2 (2021-01-31) Aug 13 01:17:47.306053 systemd-fsck[1385]: /dev/sda1: 789 files, 119324/258078 clusters Aug 13 01:17:47.306789 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 01:17:47.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.319205 systemd[1]: Mounting boot.mount... Aug 13 01:17:47.343272 kernel: loop1: detected capacity change from 0 to 229808 Aug 13 01:17:47.344144 systemd[1]: Mounted boot.mount. Aug 13 01:17:47.358738 (sd-sysext)[1390]: Using extensions 'kubernetes'. Aug 13 01:17:47.358940 (sd-sysext)[1390]: Merged extensions into '/usr'. Aug 13 01:17:47.363943 systemd[1]: Finished systemd-boot-update.service. Aug 13 01:17:47.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.379375 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:17:47.380180 systemd[1]: Mounting usr-share-oem.mount... Aug 13 01:17:47.388463 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 01:17:47.389225 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:17:47.397811 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:17:47.404792 systemd[1]: Starting modprobe@loop.service... Aug 13 01:17:47.411297 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Aug 13 01:17:47.411363 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:17:47.411428 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:17:47.413166 systemd[1]: Mounted usr-share-oem.mount. Aug 13 01:17:47.420453 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:17:47.420519 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:17:47.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.428495 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:17:47.428557 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:17:47.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.436458 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:17:47.436518 systemd[1]: Finished modprobe@loop.service. 
Aug 13 01:17:47.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.445501 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:17:47.445584 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:17:47.446123 systemd[1]: Finished systemd-sysext.service. Aug 13 01:17:47.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.454869 systemd[1]: Starting ensure-sysext.service... Aug 13 01:17:47.461830 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 01:17:47.471440 systemd[1]: Reloading. Aug 13 01:17:47.472808 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 01:17:47.475699 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:17:47.480960 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Aug 13 01:17:47.501147 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2025-08-13T01:17:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:17:47.501173 /usr/lib/systemd/system-generators/torcx-generator[1416]: time="2025-08-13T01:17:47Z" level=info msg="torcx already run" Aug 13 01:17:47.511559 ldconfig[1375]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:17:47.554848 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:17:47.554856 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:17:47.568580 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 01:17:47.611000 audit: BPF prog-id=27 op=LOAD Aug 13 01:17:47.612000 audit: BPF prog-id=28 op=LOAD Aug 13 01:17:47.612000 audit: BPF prog-id=21 op=UNLOAD Aug 13 01:17:47.612000 audit: BPF prog-id=22 op=UNLOAD Aug 13 01:17:47.612000 audit: BPF prog-id=29 op=LOAD Aug 13 01:17:47.612000 audit: BPF prog-id=24 op=UNLOAD Aug 13 01:17:47.612000 audit: BPF prog-id=30 op=LOAD Aug 13 01:17:47.612000 audit: BPF prog-id=31 op=LOAD Aug 13 01:17:47.612000 audit: BPF prog-id=25 op=UNLOAD Aug 13 01:17:47.612000 audit: BPF prog-id=26 op=UNLOAD Aug 13 01:17:47.613000 audit: BPF prog-id=32 op=LOAD Aug 13 01:17:47.613000 audit: BPF prog-id=18 op=UNLOAD Aug 13 01:17:47.613000 audit: BPF prog-id=33 op=LOAD Aug 13 01:17:47.613000 audit: BPF prog-id=34 op=LOAD Aug 13 01:17:47.613000 audit: BPF prog-id=19 op=UNLOAD Aug 13 01:17:47.613000 audit: BPF prog-id=20 op=UNLOAD Aug 13 01:17:47.614000 audit: BPF prog-id=35 op=LOAD Aug 13 01:17:47.614000 audit: BPF prog-id=23 op=UNLOAD Aug 13 01:17:47.617284 systemd[1]: Finished ldconfig.service. Aug 13 01:17:47.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.624870 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 01:17:47.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:17:47.635736 systemd[1]: Starting audit-rules.service... Aug 13 01:17:47.642890 systemd[1]: Starting clean-ca-certificates.service... Aug 13 01:17:47.652010 systemd[1]: Starting systemd-journal-catalog-update.service... 
Aug 13 01:17:47.652000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Aug 13 01:17:47.652000 audit[1495]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffdbc7d6f0 a2=420 a3=0 items=0 ppid=1478 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Aug 13 01:17:47.652000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Aug 13 01:17:47.653912 augenrules[1495]: No rules
Aug 13 01:17:47.661344 systemd[1]: Starting systemd-resolved.service...
Aug 13 01:17:47.669303 systemd[1]: Starting systemd-timesyncd.service...
Aug 13 01:17:47.676897 systemd[1]: Starting systemd-update-utmp.service...
Aug 13 01:17:47.683846 systemd[1]: Finished audit-rules.service.
Aug 13 01:17:47.690530 systemd[1]: Finished clean-ca-certificates.service.
Aug 13 01:17:47.698531 systemd[1]: Finished systemd-journal-catalog-update.service.
Aug 13 01:17:47.710940 systemd[1]: Finished systemd-update-utmp.service.
Aug 13 01:17:47.719963 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Aug 13 01:17:47.720646 systemd[1]: Starting modprobe@dm_mod.service...
Aug 13 01:17:47.727830 systemd[1]: Starting modprobe@efi_pstore.service...
Aug 13 01:17:47.734872 systemd[1]: Starting modprobe@loop.service...
Aug 13 01:17:47.740259 systemd-resolved[1500]: Positive Trust Anchors:
Aug 13 01:17:47.740265 systemd-resolved[1500]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:17:47.740285 systemd-resolved[1500]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 01:17:47.741348 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Aug 13 01:17:47.741416 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:17:47.742123 systemd[1]: Starting systemd-update-done.service...
Aug 13 01:17:47.744265 systemd-resolved[1500]: Using system hostname 'ci-3510.3.8-a-9864ec3500'.
Aug 13 01:17:47.749335 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:17:47.749857 systemd[1]: Started systemd-timesyncd.service.
Aug 13 01:17:47.761430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:17:47.761502 systemd[1]: Finished modprobe@dm_mod.service.
Aug 13 01:17:47.773278 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Aug 13 01:17:47.787185 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:17:47.787280 systemd[1]: Finished modprobe@efi_pstore.service.
Aug 13 01:17:47.799244 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link
Aug 13 01:17:47.799297 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
Aug 13 01:17:47.813540 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:17:47.813604 systemd[1]: Finished modprobe@loop.service.
Aug 13 01:17:47.818779 systemd-networkd[1313]: bond0: Link UP
Aug 13 01:17:47.818989 systemd-networkd[1313]: enp2s0f1np1: Link UP
Aug 13 01:17:47.819135 systemd-networkd[1313]: enp2s0f1np1: Gained carrier
Aug 13 01:17:47.820138 systemd-networkd[1313]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d7:6d:f2.network.
Aug 13 01:17:47.833489 systemd[1]: Started systemd-resolved.service.
Aug 13 01:17:47.842234 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex
Aug 13 01:17:47.842263 kernel: bond0: active interface up!
Aug 13 01:17:47.874555 systemd[1]: Finished systemd-update-done.service.
Aug 13 01:17:47.878266 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex
Aug 13 01:17:47.885708 systemd[1]: Reached target network.target.
Aug 13 01:17:47.894362 systemd[1]: Reached target nss-lookup.target.
Aug 13 01:17:47.902313 systemd[1]: Reached target time-set.target.
Aug 13 01:17:47.910438 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:17:47.910511 systemd[1]: Reached target sysinit.target.
Aug 13 01:17:47.918419 systemd[1]: Started motdgen.path.
Aug 13 01:17:47.925333 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Aug 13 01:17:47.935390 systemd[1]: Started logrotate.timer.
Aug 13 01:17:47.943388 systemd[1]: Started mdadm.timer.
Aug 13 01:17:47.950350 systemd[1]: Started systemd-tmpfiles-clean.timer.
Aug 13 01:17:47.958346 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 01:17:47.958407 systemd[1]: Reached target paths.target.
Aug 13 01:17:47.965312 systemd[1]: Reached target timers.target.
Aug 13 01:17:47.973462 systemd[1]: Listening on dbus.socket.
Aug 13 01:17:47.981910 systemd[1]: Starting docker.socket...
Aug 13 01:17:47.983435 systemd-networkd[1313]: enp2s0f0np0: Link UP
Aug 13 01:17:47.983622 systemd-networkd[1313]: bond0: Gained carrier
Aug 13 01:17:47.983718 systemd-networkd[1313]: enp2s0f0np0: Gained carrier
Aug 13 01:17:47.983766 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Aug 13 01:17:47.998840 systemd[1]: Listening on sshd.socket.
Aug 13 01:17:48.006312 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms
Aug 13 01:17:48.006336 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave
Aug 13 01:17:48.025427 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Aug 13 01:17:48.025572 systemd-networkd[1313]: enp2s0f1np1: Link DOWN
Aug 13 01:17:48.025575 systemd-networkd[1313]: enp2s0f1np1: Lost carrier
Aug 13 01:17:48.031412 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:17:48.031489 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Aug 13 01:17:48.032613 systemd[1]: Listening on docker.socket.
Aug 13 01:17:48.034491 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Aug 13 01:17:48.034664 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Aug 13 01:17:48.040343 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 01:17:48.040409 systemd[1]: Reached target sockets.target.
Aug 13 01:17:48.048314 systemd[1]: Reached target basic.target.
Aug 13 01:17:48.055362 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 01:17:48.055423 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Aug 13 01:17:48.056103 systemd[1]: Starting containerd.service...
Aug 13 01:17:48.063849 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Aug 13 01:17:48.072892 systemd[1]: Starting coreos-metadata.service...
Aug 13 01:17:48.079917 systemd[1]: Starting dbus.service...
Aug 13 01:17:48.085919 systemd[1]: Starting enable-oem-cloudinit.service...
Aug 13 01:17:48.090622 jq[1520]: false
Aug 13 01:17:48.092939 systemd[1]: Starting extend-filesystems.service...
Aug 13 01:17:48.095536 coreos-metadata[1513]: Aug 13 01:17:48.095 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Aug 13 01:17:48.098481 dbus-daemon[1519]: [system] SELinux support is enabled
Aug 13 01:17:48.099287 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Aug 13 01:17:48.100030 systemd[1]: Starting modprobe@drm.service...
Aug 13 01:17:48.100353 extend-filesystems[1521]: Found loop1
Aug 13 01:17:48.120372 extend-filesystems[1521]: Found sda
Aug 13 01:17:48.120372 extend-filesystems[1521]: Found sda1
Aug 13 01:17:48.120372 extend-filesystems[1521]: Found sda2
Aug 13 01:17:48.120372 extend-filesystems[1521]: Found sda3
Aug 13 01:17:48.120372 extend-filesystems[1521]: Found usr
Aug 13 01:17:48.120372 extend-filesystems[1521]: Found sda4
Aug 13 01:17:48.120372 extend-filesystems[1521]: Found sda6
Aug 13 01:17:48.120372 extend-filesystems[1521]: Found sda7
Aug 13 01:17:48.120372 extend-filesystems[1521]: Found sda9
Aug 13 01:17:48.120372 extend-filesystems[1521]: Checking size of /dev/sda9
Aug 13 01:17:48.120372 extend-filesystems[1521]: Resized partition /dev/sda9
Aug 13 01:17:48.326337 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks
Aug 13 01:17:48.326359 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Aug 13 01:17:48.326470 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms
Aug 13 01:17:48.326484 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1
Aug 13 01:17:48.326500 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms
Aug 13 01:17:48.326512 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex
Aug 13 01:17:48.326533 coreos-metadata[1516]: Aug 13 01:17:48.102 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Aug 13 01:17:48.107073 systemd[1]: Starting motdgen.service...
Aug 13 01:17:48.326701 extend-filesystems[1531]: resize2fs 1.46.5 (30-Dec-2021)
Aug 13 01:17:48.136142 systemd[1]: Starting prepare-helm.service...
Aug 13 01:17:48.154007 systemd[1]: Starting ssh-key-proc-cmdline.service...
Aug 13 01:17:48.172968 systemd[1]: Starting sshd-keygen.service...
Aug 13 01:17:48.187072 systemd[1]: Starting systemd-networkd-wait-online.service...
Aug 13 01:17:48.205321 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Aug 13 01:17:48.206307 systemd[1]: Starting tcsd.service...
Aug 13 01:17:48.342745 update_engine[1551]: I0813 01:17:48.286618 1551 main.cc:92] Flatcar Update Engine starting
Aug 13 01:17:48.342745 update_engine[1551]: I0813 01:17:48.290146 1551 update_check_scheduler.cc:74] Next update check in 7m35s
Aug 13 01:17:48.240508 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 01:17:48.342902 jq[1552]: true
Aug 13 01:17:48.240895 systemd[1]: Starting update-engine.service...
Aug 13 01:17:48.265668 systemd-networkd[1313]: enp2s0f1np1: Link UP
Aug 13 01:17:48.265836 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Aug 13 01:17:48.265896 systemd-networkd[1313]: enp2s0f1np1: Gained carrier
Aug 13 01:17:48.265901 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Aug 13 01:17:48.311066 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Aug 13 01:17:48.313452 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Aug 13 01:17:48.313545 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Aug 13 01:17:48.336481 systemd[1]: Started dbus.service.
Aug 13 01:17:48.351206 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 01:17:48.351304 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Aug 13 01:17:48.351555 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:17:48.351622 systemd[1]: Finished modprobe@drm.service.
Aug 13 01:17:48.359499 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 01:17:48.359581 systemd[1]: Finished motdgen.service.
Aug 13 01:17:48.366821 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 01:17:48.366905 systemd[1]: Finished ssh-key-proc-cmdline.service.
Aug 13 01:17:48.378144 jq[1556]: true
Aug 13 01:17:48.378505 systemd[1]: Finished ensure-sysext.service.
Aug 13 01:17:48.386547 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Aug 13 01:17:48.386649 systemd[1]: Condition check resulted in tcsd.service being skipped.
Aug 13 01:17:48.386811 env[1557]: time="2025-08-13T01:17:48.386789124Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Aug 13 01:17:48.387481 tar[1554]: linux-amd64/LICENSE
Aug 13 01:17:48.387781 tar[1554]: linux-amd64/helm
Aug 13 01:17:48.398015 env[1557]: time="2025-08-13T01:17:48.397989413Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 01:17:48.400084 systemd[1]: Started update-engine.service.
Aug 13 01:17:48.405518 env[1557]: time="2025-08-13T01:17:48.405473715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:17:48.406155 env[1557]: time="2025-08-13T01:17:48.406113427Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:17:48.406155 env[1557]: time="2025-08-13T01:17:48.406127707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:17:48.407939 env[1557]: time="2025-08-13T01:17:48.407889215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:17:48.407939 env[1557]: time="2025-08-13T01:17:48.407901796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 01:17:48.407939 env[1557]: time="2025-08-13T01:17:48.407909855Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 13 01:17:48.407939 env[1557]: time="2025-08-13T01:17:48.407915558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 01:17:48.408061 env[1557]: time="2025-08-13T01:17:48.407956748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:17:48.408090 env[1557]: time="2025-08-13T01:17:48.408079063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:17:48.408158 env[1557]: time="2025-08-13T01:17:48.408149434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:17:48.408181 env[1557]: time="2025-08-13T01:17:48.408158973Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 01:17:48.409289 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:17:48.410187 env[1557]: time="2025-08-13T01:17:48.410171509Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 13 01:17:48.410240 env[1557]: time="2025-08-13T01:17:48.410185834Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 01:17:48.410313 systemd[1]: Started locksmithd.service.
Aug 13 01:17:48.417326 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 01:17:48.417345 systemd[1]: Reached target system-config.target.
Aug 13 01:17:48.423839 bash[1586]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:17:48.426980 systemd[1]: Starting systemd-logind.service...
Aug 13 01:17:48.430519 env[1557]: time="2025-08-13T01:17:48.430473584Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 01:17:48.430519 env[1557]: time="2025-08-13T01:17:48.430495624Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 01:17:48.430519 env[1557]: time="2025-08-13T01:17:48.430509859Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 01:17:48.430590 env[1557]: time="2025-08-13T01:17:48.430535172Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 01:17:48.430590 env[1557]: time="2025-08-13T01:17:48.430550071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 01:17:48.430590 env[1557]: time="2025-08-13T01:17:48.430563040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 01:17:48.430590 env[1557]: time="2025-08-13T01:17:48.430575136Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 01:17:48.430590 env[1557]: time="2025-08-13T01:17:48.430584119Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 01:17:48.430681 env[1557]: time="2025-08-13T01:17:48.430591423Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Aug 13 01:17:48.430681 env[1557]: time="2025-08-13T01:17:48.430600081Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 01:17:48.430681 env[1557]: time="2025-08-13T01:17:48.430607688Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 01:17:48.430681 env[1557]: time="2025-08-13T01:17:48.430614484Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 01:17:48.430681 env[1557]: time="2025-08-13T01:17:48.430664427Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 01:17:48.430764 env[1557]: time="2025-08-13T01:17:48.430715080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 01:17:48.430864 env[1557]: time="2025-08-13T01:17:48.430855352Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 01:17:48.430890 env[1557]: time="2025-08-13T01:17:48.430870275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.430890 env[1557]: time="2025-08-13T01:17:48.430878269Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 01:17:48.430930 env[1557]: time="2025-08-13T01:17:48.430908268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.430930 env[1557]: time="2025-08-13T01:17:48.430915732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.430930 env[1557]: time="2025-08-13T01:17:48.430922679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.430930 env[1557]: time="2025-08-13T01:17:48.430928788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.431005 env[1557]: time="2025-08-13T01:17:48.430935757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.431005 env[1557]: time="2025-08-13T01:17:48.430943123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.431005 env[1557]: time="2025-08-13T01:17:48.430949395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.431005 env[1557]: time="2025-08-13T01:17:48.430955667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.431005 env[1557]: time="2025-08-13T01:17:48.430963336Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 01:17:48.431089 env[1557]: time="2025-08-13T01:17:48.431029886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.431089 env[1557]: time="2025-08-13T01:17:48.431061607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.431089 env[1557]: time="2025-08-13T01:17:48.431080891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.431140 env[1557]: time="2025-08-13T01:17:48.431096196Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 01:17:48.431140 env[1557]: time="2025-08-13T01:17:48.431106341Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Aug 13 01:17:48.431140 env[1557]: time="2025-08-13T01:17:48.431112556Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 01:17:48.431140 env[1557]: time="2025-08-13T01:17:48.431122233Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Aug 13 01:17:48.431201 env[1557]: time="2025-08-13T01:17:48.431143827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 01:17:48.431286 env[1557]: time="2025-08-13T01:17:48.431258920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431291632Z" level=info msg="Connect containerd service"
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431308895Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431589028Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431697866Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431702131Z" level=info msg="Start subscribing containerd event"
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431719236Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431736298Z" level=info msg="Start recovering state"
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431748691Z" level=info msg="containerd successfully booted in 0.045331s"
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431768444Z" level=info msg="Start event monitor"
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431777576Z" level=info msg="Start snapshots syncer"
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431782724Z" level=info msg="Start cni network conf syncer for default"
Aug 13 01:17:48.433044 env[1557]: time="2025-08-13T01:17:48.431786680Z" level=info msg="Start streaming server"
Aug 13 01:17:48.433303 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 01:17:48.433328 systemd[1]: Reached target user-config.target.
Aug 13 01:17:48.441277 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:17:48.441450 systemd[1]: Started containerd.service.
Aug 13 01:17:48.448412 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Aug 13 01:17:48.453500 systemd-logind[1590]: Watching system buttons on /dev/input/event3 (Power Button)
Aug 13 01:17:48.453511 systemd-logind[1590]: Watching system buttons on /dev/input/event2 (Sleep Button)
Aug 13 01:17:48.453523 systemd-logind[1590]: Watching system buttons on /dev/input/event0 (HID 0557:2419)
Aug 13 01:17:48.453633 systemd-logind[1590]: New seat seat0.
Aug 13 01:17:48.459692 systemd[1]: Started systemd-logind.service.
Aug 13 01:17:48.479549 locksmithd[1589]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 01:17:48.651263 kernel: EXT4-fs (sda9): resized filesystem to 116605649
Aug 13 01:17:48.677769 extend-filesystems[1531]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 01:17:48.677769 extend-filesystems[1531]: old_desc_blocks = 1, new_desc_blocks = 56
Aug 13 01:17:48.677769 extend-filesystems[1531]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long.
Aug 13 01:17:48.726346 extend-filesystems[1521]: Resized filesystem in /dev/sda9
Aug 13 01:17:48.726346 extend-filesystems[1521]: Found sdb
Aug 13 01:17:48.678302 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 01:17:48.741365 tar[1554]: linux-amd64/README.md
Aug 13 01:17:48.678394 systemd[1]: Finished extend-filesystems.service.
Aug 13 01:17:48.698942 systemd[1]: Finished prepare-helm.service.
Aug 13 01:17:48.820196 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 01:17:48.832196 systemd[1]: Finished sshd-keygen.service.
Aug 13 01:17:48.840091 systemd[1]: Starting issuegen.service...
Aug 13 01:17:48.847484 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 01:17:48.847559 systemd[1]: Finished issuegen.service.
Aug 13 01:17:48.856038 systemd[1]: Starting systemd-user-sessions.service...
Aug 13 01:17:48.865504 systemd[1]: Finished systemd-user-sessions.service.
Aug 13 01:17:48.873927 systemd[1]: Started getty@tty1.service.
Aug 13 01:17:48.880907 systemd[1]: Started serial-getty@ttyS1.service.
Aug 13 01:17:48.889380 systemd[1]: Reached target getty.target.
Aug 13 01:17:48.999389 systemd-networkd[1313]: bond0: Gained IPv6LL
Aug 13 01:17:48.999680 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Aug 13 01:17:49.640449 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Aug 13 01:17:49.641030 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection.
Aug 13 01:17:49.644416 systemd[1]: Finished systemd-networkd-wait-online.service.
Aug 13 01:17:49.656346 systemd[1]: Reached target network-online.target.
Aug 13 01:17:49.668109 systemd[1]: Starting kubelet.service...
Aug 13 01:17:50.480880 systemd[1]: Started kubelet.service.
Aug 13 01:17:50.952704 kubelet[1627]: E0813 01:17:50.952680 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:17:50.953830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:17:50.953903 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:17:51.652458 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0
Aug 13 01:17:53.901454 login[1622]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Aug 13 01:17:53.909573 systemd-logind[1590]: New session 1 of user core.
Aug 13 01:17:53.909935 login[1621]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Aug 13 01:17:53.910073 systemd[1]: Created slice user-500.slice.
Aug 13 01:17:53.910743 systemd[1]: Starting user-runtime-dir@500.service...
Aug 13 01:17:53.912884 systemd-logind[1590]: New session 2 of user core.
Aug 13 01:17:53.915991 systemd[1]: Finished user-runtime-dir@500.service.
Aug 13 01:17:53.916705 systemd[1]: Starting user@500.service...
Aug 13 01:17:53.918526 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:17:53.989762 systemd[1645]: Queued start job for default target default.target.
Aug 13 01:17:53.989995 systemd[1645]: Reached target paths.target.
Aug 13 01:17:53.990006 systemd[1645]: Reached target sockets.target.
Aug 13 01:17:53.990014 systemd[1645]: Reached target timers.target.
Aug 13 01:17:53.990020 systemd[1645]: Reached target basic.target.
Aug 13 01:17:53.990056 systemd[1645]: Reached target default.target.
Aug 13 01:17:53.990071 systemd[1645]: Startup finished in 68ms.
Aug 13 01:17:53.990096 systemd[1]: Started user@500.service.
Aug 13 01:17:53.990704 systemd[1]: Started session-1.scope.
Aug 13 01:17:53.991074 systemd[1]: Started session-2.scope.
Aug 13 01:17:54.248556 coreos-metadata[1513]: Aug 13 01:17:54.248 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Aug 13 01:17:54.249374 coreos-metadata[1516]: Aug 13 01:17:54.248 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Aug 13 01:17:55.174241 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2
Aug 13 01:17:55.181258 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2
Aug 13 01:17:55.248640 coreos-metadata[1513]: Aug 13 01:17:55.248 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Aug 13 01:17:55.249035 coreos-metadata[1516]: Aug 13 01:17:55.248 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Aug 13 01:17:56.019374 systemd[1]: Created slice system-sshd.slice.
Aug 13 01:17:56.019964 systemd[1]: Started sshd@0-147.75.71.225:22-139.178.89.65:57802.service.
Aug 13 01:17:56.078774 sshd[1666]: Accepted publickey for core from 139.178.89.65 port 57802 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE
Aug 13 01:17:56.079534 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:17:56.082016 systemd-logind[1590]: New session 3 of user core.
Aug 13 01:17:56.082662 systemd[1]: Started session-3.scope.
Aug 13 01:17:56.135160 systemd[1]: Started sshd@1-147.75.71.225:22-139.178.89.65:57814.service.
Aug 13 01:17:56.163941 sshd[1671]: Accepted publickey for core from 139.178.89.65 port 57814 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE
Aug 13 01:17:56.164700 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:17:56.166906 systemd-logind[1590]: New session 4 of user core.
Aug 13 01:17:56.167470 systemd[1]: Started session-4.scope.
Aug 13 01:17:56.215633 sshd[1671]: pam_unix(sshd:session): session closed for user core
Aug 13 01:17:56.217210 systemd[1]: sshd@1-147.75.71.225:22-139.178.89.65:57814.service: Deactivated successfully.
Aug 13 01:17:56.217569 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 01:17:56.217887 systemd-logind[1590]: Session 4 logged out. Waiting for processes to exit.
Aug 13 01:17:56.218411 systemd[1]: Started sshd@2-147.75.71.225:22-139.178.89.65:57826.service.
Aug 13 01:17:56.218876 systemd-logind[1590]: Removed session 4.
Aug 13 01:17:56.245673 sshd[1677]: Accepted publickey for core from 139.178.89.65 port 57826 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE
Aug 13 01:17:56.246524 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:17:56.249591 systemd-logind[1590]: New session 5 of user core.
Aug 13 01:17:56.250271 systemd[1]: Started session-5.scope.
Aug 13 01:17:56.316689 sshd[1677]: pam_unix(sshd:session): session closed for user core
Aug 13 01:17:56.322732 systemd[1]: sshd@2-147.75.71.225:22-139.178.89.65:57826.service: Deactivated successfully.
Aug 13 01:17:56.324598 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 01:17:56.326332 systemd-logind[1590]: Session 5 logged out. Waiting for processes to exit.
Aug 13 01:17:56.328543 systemd-logind[1590]: Removed session 5.
Aug 13 01:17:56.354361 coreos-metadata[1516]: Aug 13 01:17:56.354 INFO Fetch successful Aug 13 01:17:56.421290 coreos-metadata[1513]: Aug 13 01:17:56.421 INFO Fetch successful Aug 13 01:17:56.434649 systemd[1]: Finished coreos-metadata.service. Aug 13 01:17:56.435498 systemd[1]: Started packet-phone-home.service. Aug 13 01:17:56.440637 curl[1685]: % Total % Received % Xferd Average Speed Time Time Time Current Aug 13 01:17:56.440794 curl[1685]: Dload Upload Total Spent Left Speed Aug 13 01:17:56.458890 unknown[1513]: wrote ssh authorized keys file for user: core Aug 13 01:17:56.471201 update-ssh-keys[1686]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:17:56.471473 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Aug 13 01:17:56.471672 systemd[1]: Reached target multi-user.target. Aug 13 01:17:56.472393 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 01:17:56.476623 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 01:17:56.476695 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 01:17:56.476843 systemd[1]: Startup finished in 1.964s (kernel) + 25.969s (initrd) + 15.668s (userspace) = 43.602s. Aug 13 01:17:56.918676 curl[1685]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Aug 13 01:17:56.921140 systemd[1]: packet-phone-home.service: Deactivated successfully. Aug 13 01:18:01.092046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 01:18:01.092614 systemd[1]: Stopped kubelet.service. Aug 13 01:18:01.096133 systemd[1]: Starting kubelet.service... Aug 13 01:18:01.318123 systemd[1]: Started kubelet.service. 
Aug 13 01:18:01.351981 kubelet[1692]: E0813 01:18:01.351914 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:18:01.353935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:18:01.354011 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:18:06.325807 systemd[1]: Started sshd@3-147.75.71.225:22-139.178.89.65:52292.service. Aug 13 01:18:06.353370 sshd[1709]: Accepted publickey for core from 139.178.89.65 port 52292 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:18:06.354327 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:18:06.357471 systemd-logind[1590]: New session 6 of user core. Aug 13 01:18:06.358473 systemd[1]: Started session-6.scope. Aug 13 01:18:06.415414 sshd[1709]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:06.417195 systemd[1]: sshd@3-147.75.71.225:22-139.178.89.65:52292.service: Deactivated successfully. Aug 13 01:18:06.417532 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:18:06.417874 systemd-logind[1590]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:18:06.418505 systemd[1]: Started sshd@4-147.75.71.225:22-139.178.89.65:52300.service. Aug 13 01:18:06.418930 systemd-logind[1590]: Removed session 6. Aug 13 01:18:06.446375 sshd[1715]: Accepted publickey for core from 139.178.89.65 port 52300 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:18:06.447390 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:18:06.450901 systemd-logind[1590]: New session 7 of user core. Aug 13 01:18:06.451869 systemd[1]: Started session-7.scope. 
Aug 13 01:18:06.506986 sshd[1715]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:06.508682 systemd[1]: sshd@4-147.75.71.225:22-139.178.89.65:52300.service: Deactivated successfully. Aug 13 01:18:06.509010 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:18:06.509318 systemd-logind[1590]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:18:06.509993 systemd[1]: Started sshd@5-147.75.71.225:22-139.178.89.65:52308.service. Aug 13 01:18:06.510490 systemd-logind[1590]: Removed session 7. Aug 13 01:18:06.538406 sshd[1721]: Accepted publickey for core from 139.178.89.65 port 52308 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:18:06.539529 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:18:06.543254 systemd-logind[1590]: New session 8 of user core. Aug 13 01:18:06.544480 systemd[1]: Started session-8.scope. Aug 13 01:18:06.601580 sshd[1721]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:06.603351 systemd[1]: sshd@5-147.75.71.225:22-139.178.89.65:52308.service: Deactivated successfully. Aug 13 01:18:06.603696 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:18:06.604028 systemd-logind[1590]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:18:06.604696 systemd[1]: Started sshd@6-147.75.71.225:22-139.178.89.65:52314.service. Aug 13 01:18:06.605103 systemd-logind[1590]: Removed session 8. Aug 13 01:18:06.632968 sshd[1727]: Accepted publickey for core from 139.178.89.65 port 52314 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:18:06.634130 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:18:06.638020 systemd-logind[1590]: New session 9 of user core. Aug 13 01:18:06.639206 systemd[1]: Started session-9.scope. 
Aug 13 01:18:06.705116 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:18:06.705275 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 01:18:06.720684 systemd[1]: Starting docker.service... Aug 13 01:18:06.745219 env[1743]: time="2025-08-13T01:18:06.745179561Z" level=info msg="Starting up" Aug 13 01:18:06.746125 env[1743]: time="2025-08-13T01:18:06.746105456Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 01:18:06.746125 env[1743]: time="2025-08-13T01:18:06.746121546Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 01:18:06.746211 env[1743]: time="2025-08-13T01:18:06.746139047Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 01:18:06.746211 env[1743]: time="2025-08-13T01:18:06.746148264Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 01:18:06.747656 env[1743]: time="2025-08-13T01:18:06.747632146Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 01:18:06.747656 env[1743]: time="2025-08-13T01:18:06.747650752Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 01:18:06.747753 env[1743]: time="2025-08-13T01:18:06.747667838Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 01:18:06.747753 env[1743]: time="2025-08-13T01:18:06.747681059Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 01:18:06.779638 env[1743]: time="2025-08-13T01:18:06.779626253Z" level=info msg="Loading containers: start." 
Aug 13 01:18:06.924282 kernel: Initializing XFRM netlink socket Aug 13 01:18:06.992368 env[1743]: time="2025-08-13T01:18:06.992351369Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 01:18:06.992947 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. Aug 13 01:18:07.048357 systemd-networkd[1313]: docker0: Link UP Aug 13 01:18:07.074459 env[1743]: time="2025-08-13T01:18:07.074351253Z" level=info msg="Loading containers: done." Aug 13 01:18:07.094288 env[1743]: time="2025-08-13T01:18:07.094157471Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:18:07.094635 env[1743]: time="2025-08-13T01:18:07.094573163Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 01:18:07.094890 env[1743]: time="2025-08-13T01:18:07.094802436Z" level=info msg="Daemon has completed initialization" Aug 13 01:18:07.102221 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3952525261-merged.mount: Deactivated successfully. Aug 13 01:18:07.120622 systemd[1]: Started docker.service. Aug 13 01:18:07.125933 systemd-timesyncd[1501]: Contacted time server [2605:6400:20:6a0:fd72:d2ee:3d50:31c9]:123 (2.flatcar.pool.ntp.org). Aug 13 01:18:07.126136 systemd-timesyncd[1501]: Initial clock synchronization to Wed 2025-08-13 01:18:07.262056 UTC. Aug 13 01:18:07.137970 env[1743]: time="2025-08-13T01:18:07.137820774Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:18:07.989852 env[1557]: time="2025-08-13T01:18:07.989694530Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 01:18:08.688284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount33163994.mount: Deactivated successfully. 
Aug 13 01:18:09.811168 env[1557]: time="2025-08-13T01:18:09.811136396Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:09.811778 env[1557]: time="2025-08-13T01:18:09.811765199Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:09.813217 env[1557]: time="2025-08-13T01:18:09.813182548Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:09.814124 env[1557]: time="2025-08-13T01:18:09.814092815Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:09.814551 env[1557]: time="2025-08-13T01:18:09.814526095Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 01:18:09.815033 env[1557]: time="2025-08-13T01:18:09.815003367Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 01:18:11.311992 env[1557]: time="2025-08-13T01:18:11.311960206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:11.312719 env[1557]: time="2025-08-13T01:18:11.312705831Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 01:18:11.313704 env[1557]: time="2025-08-13T01:18:11.313691464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:11.314897 env[1557]: time="2025-08-13T01:18:11.314858522Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:11.315325 env[1557]: time="2025-08-13T01:18:11.315299039Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Aug 13 01:18:11.315716 env[1557]: time="2025-08-13T01:18:11.315687239Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 01:18:11.592626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 01:18:11.593141 systemd[1]: Stopped kubelet.service. Aug 13 01:18:11.596458 systemd[1]: Starting kubelet.service... Aug 13 01:18:11.896916 systemd[1]: Started kubelet.service. Aug 13 01:18:11.927586 kubelet[1902]: E0813 01:18:11.927526 1902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:18:11.928783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:18:11.928874 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 01:18:12.473533 env[1557]: time="2025-08-13T01:18:12.473481013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:12.474334 env[1557]: time="2025-08-13T01:18:12.474296150Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:12.475805 env[1557]: time="2025-08-13T01:18:12.475769806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:12.476862 env[1557]: time="2025-08-13T01:18:12.476814680Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:12.477275 env[1557]: time="2025-08-13T01:18:12.477216672Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Aug 13 01:18:12.477574 env[1557]: time="2025-08-13T01:18:12.477516320Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 01:18:13.442937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2691297199.mount: Deactivated successfully. 
Aug 13 01:18:13.867154 env[1557]: time="2025-08-13T01:18:13.867102405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:13.867695 env[1557]: time="2025-08-13T01:18:13.867651411Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:13.868270 env[1557]: time="2025-08-13T01:18:13.868229462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:13.868905 env[1557]: time="2025-08-13T01:18:13.868861126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:13.869165 env[1557]: time="2025-08-13T01:18:13.869119312Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 01:18:13.869483 env[1557]: time="2025-08-13T01:18:13.869443207Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:18:14.427196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350065084.mount: Deactivated successfully. 
Aug 13 01:18:15.263068 env[1557]: time="2025-08-13T01:18:15.263014040Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:15.263640 env[1557]: time="2025-08-13T01:18:15.263607561Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:15.264778 env[1557]: time="2025-08-13T01:18:15.264744125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:15.265828 env[1557]: time="2025-08-13T01:18:15.265781075Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:15.266360 env[1557]: time="2025-08-13T01:18:15.266317551Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 01:18:15.266796 env[1557]: time="2025-08-13T01:18:15.266741224Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:18:15.852664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928333471.mount: Deactivated successfully. 
Aug 13 01:18:15.854058 env[1557]: time="2025-08-13T01:18:15.854014915Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:15.854605 env[1557]: time="2025-08-13T01:18:15.854592893Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:15.855210 env[1557]: time="2025-08-13T01:18:15.855175173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:15.855916 env[1557]: time="2025-08-13T01:18:15.855865836Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:15.856288 env[1557]: time="2025-08-13T01:18:15.856231614Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:18:15.856628 env[1557]: time="2025-08-13T01:18:15.856588571Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 01:18:16.363560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169923285.mount: Deactivated successfully. 
Aug 13 01:18:17.980259 env[1557]: time="2025-08-13T01:18:17.980207410Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:17.980917 env[1557]: time="2025-08-13T01:18:17.980879898Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:17.981970 env[1557]: time="2025-08-13T01:18:17.981938163Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:17.983011 env[1557]: time="2025-08-13T01:18:17.982977086Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:17.983530 env[1557]: time="2025-08-13T01:18:17.983486024Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 01:18:20.587093 systemd[1]: Stopped kubelet.service. Aug 13 01:18:20.588411 systemd[1]: Starting kubelet.service... Aug 13 01:18:20.604372 systemd[1]: Reloading. 
Aug 13 01:18:20.640100 /usr/lib/systemd/system-generators/torcx-generator[1988]: time="2025-08-13T01:18:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:18:20.640116 /usr/lib/systemd/system-generators/torcx-generator[1988]: time="2025-08-13T01:18:20Z" level=info msg="torcx already run" Aug 13 01:18:20.697883 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:18:20.697893 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:18:20.710369 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:18:20.773161 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:18:20.773200 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:18:20.773301 systemd[1]: Stopped kubelet.service. Aug 13 01:18:20.774139 systemd[1]: Starting kubelet.service... Aug 13 01:18:21.017160 systemd[1]: Started kubelet.service. Aug 13 01:18:21.044135 kubelet[2053]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:18:21.044135 kubelet[2053]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Aug 13 01:18:21.044135 kubelet[2053]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:18:21.044469 kubelet[2053]: I0813 01:18:21.044177 2053 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:18:21.478626 kubelet[2053]: I0813 01:18:21.478571 2053 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 01:18:21.478626 kubelet[2053]: I0813 01:18:21.478586 2053 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:18:21.478739 kubelet[2053]: I0813 01:18:21.478734 2053 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 01:18:21.520961 kubelet[2053]: I0813 01:18:21.520914 2053 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:18:21.521214 kubelet[2053]: E0813 01:18:21.521198 2053 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://147.75.71.225:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.71.225:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 01:18:21.525002 kubelet[2053]: E0813 01:18:21.524984 2053 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:18:21.525055 kubelet[2053]: I0813 01:18:21.525003 2053 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Aug 13 01:18:21.547040 kubelet[2053]: I0813 01:18:21.547023 2053 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:18:21.547263 kubelet[2053]: I0813 01:18:21.547238 2053 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:18:21.547436 kubelet[2053]: I0813 01:18:21.547266 2053 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-9864ec3500","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} 
Aug 13 01:18:21.547572 kubelet[2053]: I0813 01:18:21.547448 2053 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:18:21.547572 kubelet[2053]: I0813 01:18:21.547463 2053 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 01:18:21.548689 kubelet[2053]: I0813 01:18:21.548673 2053 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:18:21.553791 kubelet[2053]: I0813 01:18:21.553773 2053 kubelet.go:480] "Attempting to sync node with API server" Aug 13 01:18:21.553791 kubelet[2053]: I0813 01:18:21.553791 2053 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:18:21.553913 kubelet[2053]: I0813 01:18:21.553809 2053 kubelet.go:386] "Adding apiserver pod source" Aug 13 01:18:21.553913 kubelet[2053]: I0813 01:18:21.553818 2053 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:18:21.567833 kubelet[2053]: E0813 01:18:21.567780 2053 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://147.75.71.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-9864ec3500&limit=500&resourceVersion=0\": dial tcp 147.75.71.225:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 01:18:21.572255 kubelet[2053]: E0813 01:18:21.572196 2053 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://147.75.71.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.71.225:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 01:18:21.577192 kubelet[2053]: I0813 01:18:21.577136 2053 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 01:18:21.577812 kubelet[2053]: I0813 01:18:21.577759 2053 kubelet.go:935] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 01:18:21.582897 kubelet[2053]: W0813 01:18:21.582841 2053 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:18:21.585635 kubelet[2053]: I0813 01:18:21.585581 2053 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:18:21.585748 kubelet[2053]: I0813 01:18:21.585652 2053 server.go:1289] "Started kubelet" Aug 13 01:18:21.585832 kubelet[2053]: I0813 01:18:21.585736 2053 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:18:21.585925 kubelet[2053]: I0813 01:18:21.585813 2053 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:18:21.586429 kubelet[2053]: I0813 01:18:21.586400 2053 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:18:21.587794 kubelet[2053]: I0813 01:18:21.587765 2053 server.go:317] "Adding debug handlers to kubelet server" Aug 13 01:18:21.588323 kubelet[2053]: E0813 01:18:21.588300 2053 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:18:21.594748 kubelet[2053]: E0813 01:18:21.593804 2053 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.71.225:6443/api/v1/namespaces/default/events\": dial tcp 147.75.71.225:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-a-9864ec3500.185b2ec491ce04fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-a-9864ec3500,UID:ci-3510.3.8-a-9864ec3500,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-a-9864ec3500,},FirstTimestamp:2025-08-13 01:18:21.585605885 +0000 UTC m=+0.565266527,LastTimestamp:2025-08-13 01:18:21.585605885 +0000 UTC m=+0.565266527,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-a-9864ec3500,}" Aug 13 01:18:21.596669 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Aug 13 01:18:21.596743 kubelet[2053]: I0813 01:18:21.596703 2053 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:18:21.596782 kubelet[2053]: I0813 01:18:21.596744 2053 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:18:21.596816 kubelet[2053]: E0813 01:18:21.596797 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:21.596890 kubelet[2053]: I0813 01:18:21.596881 2053 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:18:21.597016 kubelet[2053]: I0813 01:18:21.596996 2053 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:18:21.597074 kubelet[2053]: I0813 01:18:21.597067 2053 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:18:21.597127 kubelet[2053]: I0813 01:18:21.597119 2053 factory.go:223] Registration of the systemd container factory successfully Aug 13 01:18:21.597213 kubelet[2053]: I0813 01:18:21.597202 2053 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:18:21.597384 kubelet[2053]: E0813 01:18:21.597367 2053 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://147.75.71.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.71.225:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 01:18:21.597817 kubelet[2053]: E0813 01:18:21.597791 2053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.71.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-9864ec3500?timeout=10s\": dial tcp 147.75.71.225:6443: 
connect: connection refused" interval="200ms" Aug 13 01:18:21.598109 kubelet[2053]: I0813 01:18:21.598094 2053 factory.go:223] Registration of the containerd container factory successfully Aug 13 01:18:21.607142 kubelet[2053]: I0813 01:18:21.607123 2053 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 01:18:21.607647 kubelet[2053]: I0813 01:18:21.607636 2053 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 01:18:21.607685 kubelet[2053]: I0813 01:18:21.607650 2053 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 01:18:21.607685 kubelet[2053]: I0813 01:18:21.607664 2053 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 01:18:21.607685 kubelet[2053]: I0813 01:18:21.607670 2053 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 01:18:21.607762 kubelet[2053]: E0813 01:18:21.607699 2053 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:18:21.608006 kubelet[2053]: E0813 01:18:21.607964 2053 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://147.75.71.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.71.225:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 01:18:21.698088 kubelet[2053]: E0813 01:18:21.698013 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:21.708643 kubelet[2053]: E0813 01:18:21.708567 2053 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 01:18:21.740864 kubelet[2053]: I0813 01:18:21.740703 2053 cpu_manager.go:221] "Starting 
CPU manager" policy="none" Aug 13 01:18:21.740864 kubelet[2053]: I0813 01:18:21.740744 2053 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:18:21.740864 kubelet[2053]: I0813 01:18:21.740799 2053 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:18:21.742748 kubelet[2053]: I0813 01:18:21.742692 2053 policy_none.go:49] "None policy: Start" Aug 13 01:18:21.742748 kubelet[2053]: I0813 01:18:21.742728 2053 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:18:21.742748 kubelet[2053]: I0813 01:18:21.742756 2053 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:18:21.752553 systemd[1]: Created slice kubepods.slice. Aug 13 01:18:21.763644 systemd[1]: Created slice kubepods-burstable.slice. Aug 13 01:18:21.771357 systemd[1]: Created slice kubepods-besteffort.slice. Aug 13 01:18:21.787199 kubelet[2053]: E0813 01:18:21.787101 2053 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 01:18:21.787573 kubelet[2053]: I0813 01:18:21.787503 2053 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:18:21.787783 kubelet[2053]: I0813 01:18:21.787536 2053 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:18:21.787999 kubelet[2053]: I0813 01:18:21.787936 2053 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:18:21.789026 kubelet[2053]: E0813 01:18:21.788977 2053 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 01:18:21.789197 kubelet[2053]: E0813 01:18:21.789073 2053 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:21.799739 kubelet[2053]: E0813 01:18:21.799635 2053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.71.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-9864ec3500?timeout=10s\": dial tcp 147.75.71.225:6443: connect: connection refused" interval="400ms" Aug 13 01:18:21.891889 kubelet[2053]: I0813 01:18:21.891791 2053 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:21.892674 kubelet[2053]: E0813 01:18:21.892551 2053 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.71.225:6443/api/v1/nodes\": dial tcp 147.75.71.225:6443: connect: connection refused" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:21.930843 systemd[1]: Created slice kubepods-burstable-pod4b4bfa68026266d10dd087653918fb93.slice. Aug 13 01:18:21.955319 kubelet[2053]: E0813 01:18:21.955219 2053 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-9864ec3500\" not found" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:21.961898 systemd[1]: Created slice kubepods-burstable-pod681c146f5657410c5512a51071dbd18a.slice. Aug 13 01:18:21.966305 kubelet[2053]: E0813 01:18:21.966223 2053 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-9864ec3500\" not found" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:21.970595 systemd[1]: Created slice kubepods-burstable-pod53dcf69a3152c58bba0277578321827d.slice. 
Aug 13 01:18:21.974079 kubelet[2053]: E0813 01:18:21.973996 2053 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-9864ec3500\" not found" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:21.999777 kubelet[2053]: I0813 01:18:21.999589 2053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b4bfa68026266d10dd087653918fb93-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-9864ec3500\" (UID: \"4b4bfa68026266d10dd087653918fb93\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:21.999777 kubelet[2053]: I0813 01:18:21.999684 2053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/681c146f5657410c5512a51071dbd18a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" (UID: \"681c146f5657410c5512a51071dbd18a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:21.999777 kubelet[2053]: I0813 01:18:21.999743 2053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/681c146f5657410c5512a51071dbd18a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" (UID: \"681c146f5657410c5512a51071dbd18a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:22.000309 kubelet[2053]: I0813 01:18:21.999794 2053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/681c146f5657410c5512a51071dbd18a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" (UID: \"681c146f5657410c5512a51071dbd18a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 
01:18:22.000309 kubelet[2053]: I0813 01:18:21.999842 2053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b4bfa68026266d10dd087653918fb93-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-9864ec3500\" (UID: \"4b4bfa68026266d10dd087653918fb93\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:22.000309 kubelet[2053]: I0813 01:18:21.999886 2053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b4bfa68026266d10dd087653918fb93-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-9864ec3500\" (UID: \"4b4bfa68026266d10dd087653918fb93\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:22.000309 kubelet[2053]: I0813 01:18:21.999928 2053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/681c146f5657410c5512a51071dbd18a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" (UID: \"681c146f5657410c5512a51071dbd18a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:22.000309 kubelet[2053]: I0813 01:18:21.999970 2053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/681c146f5657410c5512a51071dbd18a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" (UID: \"681c146f5657410c5512a51071dbd18a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:22.000787 kubelet[2053]: I0813 01:18:22.000010 2053 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53dcf69a3152c58bba0277578321827d-kubeconfig\") pod 
\"kube-scheduler-ci-3510.3.8-a-9864ec3500\" (UID: \"53dcf69a3152c58bba0277578321827d\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:22.096089 kubelet[2053]: I0813 01:18:22.096035 2053 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:22.096934 kubelet[2053]: E0813 01:18:22.096670 2053 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.75.71.225:6443/api/v1/nodes\": dial tcp 147.75.71.225:6443: connect: connection refused" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:22.200951 kubelet[2053]: E0813 01:18:22.200869 2053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.71.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-9864ec3500?timeout=10s\": dial tcp 147.75.71.225:6443: connect: connection refused" interval="800ms" Aug 13 01:18:22.258187 env[1557]: time="2025-08-13T01:18:22.257945336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-9864ec3500,Uid:4b4bfa68026266d10dd087653918fb93,Namespace:kube-system,Attempt:0,}" Aug 13 01:18:22.268253 env[1557]: time="2025-08-13T01:18:22.268139053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-9864ec3500,Uid:681c146f5657410c5512a51071dbd18a,Namespace:kube-system,Attempt:0,}" Aug 13 01:18:22.279230 env[1557]: time="2025-08-13T01:18:22.279153489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-9864ec3500,Uid:53dcf69a3152c58bba0277578321827d,Namespace:kube-system,Attempt:0,}" Aug 13 01:18:22.501196 kubelet[2053]: I0813 01:18:22.501138 2053 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:22.502011 kubelet[2053]: E0813 01:18:22.501892 2053 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://147.75.71.225:6443/api/v1/nodes\": dial tcp 147.75.71.225:6443: connect: connection refused" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:22.830957 kubelet[2053]: E0813 01:18:22.830877 2053 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://147.75.71.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.71.225:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 01:18:22.835988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614767164.mount: Deactivated successfully. Aug 13 01:18:22.836989 env[1557]: time="2025-08-13T01:18:22.836969546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.837731 env[1557]: time="2025-08-13T01:18:22.837717920Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.838506 env[1557]: time="2025-08-13T01:18:22.838493502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.839153 env[1557]: time="2025-08-13T01:18:22.839133853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.839951 env[1557]: time="2025-08-13T01:18:22.839937755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.840742 env[1557]: time="2025-08-13T01:18:22.840730638Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.842602 env[1557]: time="2025-08-13T01:18:22.842563715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.843032 env[1557]: time="2025-08-13T01:18:22.843017946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.844075 env[1557]: time="2025-08-13T01:18:22.844061893Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.844596 env[1557]: time="2025-08-13T01:18:22.844558232Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.845143 env[1557]: time="2025-08-13T01:18:22.845129441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.845943 env[1557]: time="2025-08-13T01:18:22.845930121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:22.849937 env[1557]: time="2025-08-13T01:18:22.849903875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:18:22.849937 env[1557]: time="2025-08-13T01:18:22.849926080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:18:22.849937 env[1557]: time="2025-08-13T01:18:22.849933433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:18:22.850056 env[1557]: time="2025-08-13T01:18:22.849997168Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79111bea9b82e12540d07f52e5779ed5dc0caa48f7c11caa8f0c135cbd1aa7be pid=2106 runtime=io.containerd.runc.v2 Aug 13 01:18:22.851672 env[1557]: time="2025-08-13T01:18:22.851632669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:18:22.851672 env[1557]: time="2025-08-13T01:18:22.851656244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:18:22.851672 env[1557]: time="2025-08-13T01:18:22.851666962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:18:22.851816 env[1557]: time="2025-08-13T01:18:22.851746836Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47580f2c2655a2a7d4d57c543ba8e6f5dd5fc07ae30eb233f4f5f1bcd302368a pid=2120 runtime=io.containerd.runc.v2 Aug 13 01:18:22.852671 kubelet[2053]: E0813 01:18:22.852652 2053 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://147.75.71.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-9864ec3500&limit=500&resourceVersion=0\": dial tcp 147.75.71.225:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 01:18:22.853186 env[1557]: time="2025-08-13T01:18:22.853158911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:18:22.853186 env[1557]: time="2025-08-13T01:18:22.853176781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:18:22.853186 env[1557]: time="2025-08-13T01:18:22.853183640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:18:22.853296 env[1557]: time="2025-08-13T01:18:22.853273653Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c55fe2d34fba52d714036599e979c7537c1e5d31a0128ed3abfb20401f2c90b8 pid=2143 runtime=io.containerd.runc.v2 Aug 13 01:18:22.856088 systemd[1]: Started cri-containerd-79111bea9b82e12540d07f52e5779ed5dc0caa48f7c11caa8f0c135cbd1aa7be.scope. Aug 13 01:18:22.857871 systemd[1]: Started cri-containerd-47580f2c2655a2a7d4d57c543ba8e6f5dd5fc07ae30eb233f4f5f1bcd302368a.scope. 
Aug 13 01:18:22.859744 systemd[1]: Started cri-containerd-c55fe2d34fba52d714036599e979c7537c1e5d31a0128ed3abfb20401f2c90b8.scope. Aug 13 01:18:22.880045 env[1557]: time="2025-08-13T01:18:22.880017194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-9864ec3500,Uid:681c146f5657410c5512a51071dbd18a,Namespace:kube-system,Attempt:0,} returns sandbox id \"79111bea9b82e12540d07f52e5779ed5dc0caa48f7c11caa8f0c135cbd1aa7be\"" Aug 13 01:18:22.881226 env[1557]: time="2025-08-13T01:18:22.881210721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-9864ec3500,Uid:53dcf69a3152c58bba0277578321827d,Namespace:kube-system,Attempt:0,} returns sandbox id \"47580f2c2655a2a7d4d57c543ba8e6f5dd5fc07ae30eb233f4f5f1bcd302368a\"" Aug 13 01:18:22.882205 env[1557]: time="2025-08-13T01:18:22.882183598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-9864ec3500,Uid:4b4bfa68026266d10dd087653918fb93,Namespace:kube-system,Attempt:0,} returns sandbox id \"c55fe2d34fba52d714036599e979c7537c1e5d31a0128ed3abfb20401f2c90b8\"" Aug 13 01:18:22.882412 env[1557]: time="2025-08-13T01:18:22.882397659Z" level=info msg="CreateContainer within sandbox \"79111bea9b82e12540d07f52e5779ed5dc0caa48f7c11caa8f0c135cbd1aa7be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:18:22.882553 kubelet[2053]: E0813 01:18:22.882538 2053 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://147.75.71.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.71.225:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 01:18:22.882817 env[1557]: time="2025-08-13T01:18:22.882802779Z" level=info msg="CreateContainer within sandbox \"47580f2c2655a2a7d4d57c543ba8e6f5dd5fc07ae30eb233f4f5f1bcd302368a\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:18:22.883574 env[1557]: time="2025-08-13T01:18:22.883560041Z" level=info msg="CreateContainer within sandbox \"c55fe2d34fba52d714036599e979c7537c1e5d31a0128ed3abfb20401f2c90b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:18:22.888374 env[1557]: time="2025-08-13T01:18:22.888332122Z" level=info msg="CreateContainer within sandbox \"47580f2c2655a2a7d4d57c543ba8e6f5dd5fc07ae30eb233f4f5f1bcd302368a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4d5901f34e509494bdf6eda234166310b60d82a74ef993a2403fb1483731419\"" Aug 13 01:18:22.888607 env[1557]: time="2025-08-13T01:18:22.888594111Z" level=info msg="StartContainer for \"c4d5901f34e509494bdf6eda234166310b60d82a74ef993a2403fb1483731419\"" Aug 13 01:18:22.889468 env[1557]: time="2025-08-13T01:18:22.889440846Z" level=info msg="CreateContainer within sandbox \"79111bea9b82e12540d07f52e5779ed5dc0caa48f7c11caa8f0c135cbd1aa7be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"da11e1c9013403ec96a2960c891240a196e79bc70ed32b5e16507841afb85a24\"" Aug 13 01:18:22.889717 env[1557]: time="2025-08-13T01:18:22.889699831Z" level=info msg="StartContainer for \"da11e1c9013403ec96a2960c891240a196e79bc70ed32b5e16507841afb85a24\"" Aug 13 01:18:22.891379 env[1557]: time="2025-08-13T01:18:22.891360842Z" level=info msg="CreateContainer within sandbox \"c55fe2d34fba52d714036599e979c7537c1e5d31a0128ed3abfb20401f2c90b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b1cab77164ef27f773cbde32ebf18bd79ef0c9f733353427d1aef98613a16e02\"" Aug 13 01:18:22.891570 env[1557]: time="2025-08-13T01:18:22.891560091Z" level=info msg="StartContainer for \"b1cab77164ef27f773cbde32ebf18bd79ef0c9f733353427d1aef98613a16e02\"" Aug 13 01:18:22.897884 systemd[1]: Started cri-containerd-c4d5901f34e509494bdf6eda234166310b60d82a74ef993a2403fb1483731419.scope. 
Aug 13 01:18:22.898491 systemd[1]: Started cri-containerd-da11e1c9013403ec96a2960c891240a196e79bc70ed32b5e16507841afb85a24.scope. Aug 13 01:18:22.899775 systemd[1]: Started cri-containerd-b1cab77164ef27f773cbde32ebf18bd79ef0c9f733353427d1aef98613a16e02.scope. Aug 13 01:18:22.923837 env[1557]: time="2025-08-13T01:18:22.923808700Z" level=info msg="StartContainer for \"c4d5901f34e509494bdf6eda234166310b60d82a74ef993a2403fb1483731419\" returns successfully" Aug 13 01:18:22.923951 env[1557]: time="2025-08-13T01:18:22.923934361Z" level=info msg="StartContainer for \"b1cab77164ef27f773cbde32ebf18bd79ef0c9f733353427d1aef98613a16e02\" returns successfully" Aug 13 01:18:22.923990 env[1557]: time="2025-08-13T01:18:22.923947636Z" level=info msg="StartContainer for \"da11e1c9013403ec96a2960c891240a196e79bc70ed32b5e16507841afb85a24\" returns successfully" Aug 13 01:18:23.304089 kubelet[2053]: I0813 01:18:23.304041 2053 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:23.612301 kubelet[2053]: E0813 01:18:23.612285 2053 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-9864ec3500\" not found" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:23.612519 kubelet[2053]: E0813 01:18:23.612511 2053 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-9864ec3500\" not found" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:23.612973 kubelet[2053]: E0813 01:18:23.612963 2053 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-9864ec3500\" not found" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:23.932339 kubelet[2053]: E0813 01:18:23.932272 2053 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-a-9864ec3500\" not found" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:24.034760 
kubelet[2053]: I0813 01:18:24.034743 2053 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:24.034760 kubelet[2053]: E0813 01:18:24.034762 2053 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-a-9864ec3500\": node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:24.040132 kubelet[2053]: E0813 01:18:24.040087 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:24.141049 kubelet[2053]: E0813 01:18:24.140980 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:24.241477 kubelet[2053]: E0813 01:18:24.241262 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:24.341882 kubelet[2053]: E0813 01:18:24.341773 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:24.442545 kubelet[2053]: E0813 01:18:24.442447 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:24.542815 kubelet[2053]: E0813 01:18:24.542596 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:24.618208 kubelet[2053]: E0813 01:18:24.618156 2053 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-9864ec3500\" not found" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:24.618511 kubelet[2053]: E0813 01:18:24.618461 2053 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-a-9864ec3500\" not found" node="ci-3510.3.8-a-9864ec3500" Aug 13 
01:18:24.643628 kubelet[2053]: E0813 01:18:24.643504 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:24.743816 kubelet[2053]: E0813 01:18:24.743705 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:24.844490 kubelet[2053]: E0813 01:18:24.844396 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:24.945293 kubelet[2053]: E0813 01:18:24.945187 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:25.045686 kubelet[2053]: E0813 01:18:25.045587 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:25.146410 kubelet[2053]: E0813 01:18:25.146187 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:25.247333 kubelet[2053]: E0813 01:18:25.247255 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:25.348445 kubelet[2053]: E0813 01:18:25.348330 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:25.449441 kubelet[2053]: E0813 01:18:25.449216 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:25.550411 kubelet[2053]: E0813 01:18:25.550303 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:25.620639 kubelet[2053]: E0813 01:18:25.620558 2053 kubelet.go:3305] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"ci-3510.3.8-a-9864ec3500\" not found" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:25.651033 kubelet[2053]: E0813 01:18:25.650950 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:25.752337 kubelet[2053]: E0813 01:18:25.752076 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:25.852431 kubelet[2053]: E0813 01:18:25.852358 2053 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:25.897971 kubelet[2053]: I0813 01:18:25.897890 2053 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:25.914454 kubelet[2053]: I0813 01:18:25.914403 2053 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 01:18:25.914793 kubelet[2053]: I0813 01:18:25.914726 2053 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:25.922478 kubelet[2053]: I0813 01:18:25.922197 2053 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 01:18:25.924644 kubelet[2053]: I0813 01:18:25.923168 2053 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:25.931617 kubelet[2053]: I0813 01:18:25.931526 2053 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 01:18:26.496813 systemd[1]: Reloading. 
Aug 13 01:18:26.533600 /usr/lib/systemd/system-generators/torcx-generator[2398]: time="2025-08-13T01:18:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:18:26.533617 /usr/lib/systemd/system-generators/torcx-generator[2398]: time="2025-08-13T01:18:26Z" level=info msg="torcx already run" Aug 13 01:18:26.564459 kubelet[2053]: I0813 01:18:26.564407 2053 apiserver.go:52] "Watching apiserver" Aug 13 01:18:26.597287 kubelet[2053]: I0813 01:18:26.597260 2053 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:18:26.601801 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:18:26.601813 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:18:26.617887 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:18:26.690023 systemd[1]: Stopping kubelet.service... Aug 13 01:18:26.713711 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:18:26.713812 systemd[1]: Stopped kubelet.service. Aug 13 01:18:26.713836 systemd[1]: kubelet.service: Consumed 1.022s CPU time. Aug 13 01:18:26.714707 systemd[1]: Starting kubelet.service... Aug 13 01:18:26.973068 systemd[1]: Started kubelet.service. Aug 13 01:18:26.997714 kubelet[2463]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:18:26.997714 kubelet[2463]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 01:18:26.997714 kubelet[2463]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:18:26.998016 kubelet[2463]: I0813 01:18:26.997747 2463 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:18:27.002189 kubelet[2463]: I0813 01:18:27.002171 2463 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 01:18:27.002189 kubelet[2463]: I0813 01:18:27.002186 2463 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:18:27.002366 kubelet[2463]: I0813 01:18:27.002358 2463 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 01:18:27.003306 kubelet[2463]: I0813 01:18:27.003244 2463 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 01:18:27.006617 kubelet[2463]: I0813 01:18:27.006575 2463 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:18:27.008217 kubelet[2463]: E0813 01:18:27.008203 2463 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:18:27.008271 kubelet[2463]: I0813 01:18:27.008218 2463 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Aug 13 01:18:27.026112 kubelet[2463]: I0813 01:18:27.026075 2463 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:18:27.026207 kubelet[2463]: I0813 01:18:27.026192 2463 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:18:27.026344 kubelet[2463]: I0813 01:18:27.026209 2463 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-9864ec3500","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none
","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:18:27.026344 kubelet[2463]: I0813 01:18:27.026326 2463 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:18:27.026344 kubelet[2463]: I0813 01:18:27.026333 2463 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 01:18:27.026456 kubelet[2463]: I0813 01:18:27.026365 2463 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:18:27.026540 kubelet[2463]: I0813 01:18:27.026485 2463 kubelet.go:480] "Attempting to sync node with API server" Aug 13 01:18:27.026540 kubelet[2463]: I0813 01:18:27.026494 2463 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:18:27.026540 kubelet[2463]: I0813 01:18:27.026507 2463 kubelet.go:386] "Adding apiserver pod source" Aug 13 01:18:27.026540 kubelet[2463]: I0813 01:18:27.026516 2463 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:18:27.027118 kubelet[2463]: I0813 01:18:27.027106 2463 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 01:18:27.027743 kubelet[2463]: I0813 01:18:27.027725 2463 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 01:18:27.029647 kubelet[2463]: I0813 01:18:27.029635 2463 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:18:27.029708 kubelet[2463]: I0813 01:18:27.029669 2463 server.go:1289] "Started kubelet" Aug 13 01:18:27.029753 kubelet[2463]: I0813 01:18:27.029714 2463 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:18:27.029810 kubelet[2463]: I0813 01:18:27.029778 2463 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:18:27.029990 kubelet[2463]: I0813 01:18:27.029977 2463 server.go:255] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:18:27.030505 kubelet[2463]: I0813 01:18:27.030491 2463 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:18:27.030571 kubelet[2463]: I0813 01:18:27.030532 2463 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:18:27.030640 kubelet[2463]: I0813 01:18:27.030625 2463 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:18:27.030743 kubelet[2463]: I0813 01:18:27.030728 2463 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:18:27.030845 kubelet[2463]: I0813 01:18:27.030831 2463 server.go:317] "Adding debug handlers to kubelet server" Aug 13 01:18:27.030985 kubelet[2463]: I0813 01:18:27.030970 2463 factory.go:223] Registration of the systemd container factory successfully Aug 13 01:18:27.031053 kubelet[2463]: I0813 01:18:27.030834 2463 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:18:27.031106 kubelet[2463]: E0813 01:18:27.030611 2463 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-9864ec3500\" not found" Aug 13 01:18:27.031106 kubelet[2463]: E0813 01:18:27.031095 2463 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:18:27.031181 kubelet[2463]: I0813 01:18:27.031106 2463 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:18:27.031930 kubelet[2463]: I0813 01:18:27.031912 2463 factory.go:223] Registration of the containerd container factory successfully Aug 13 01:18:27.037138 kubelet[2463]: I0813 01:18:27.037119 2463 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Aug 13 01:18:27.037624 kubelet[2463]: I0813 01:18:27.037617 2463 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 01:18:27.037668 kubelet[2463]: I0813 01:18:27.037629 2463 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 01:18:27.037668 kubelet[2463]: I0813 01:18:27.037642 2463 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 01:18:27.037668 kubelet[2463]: I0813 01:18:27.037649 2463 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 01:18:27.037765 kubelet[2463]: E0813 01:18:27.037681 2463 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:18:27.045227 kubelet[2463]: I0813 01:18:27.045212 2463 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:18:27.045227 kubelet[2463]: I0813 01:18:27.045222 2463 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:18:27.045329 kubelet[2463]: I0813 01:18:27.045254 2463 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:18:27.045350 kubelet[2463]: I0813 01:18:27.045333 2463 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:18:27.045350 kubelet[2463]: I0813 01:18:27.045340 2463 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:18:27.045387 kubelet[2463]: I0813 01:18:27.045351 2463 policy_none.go:49] "None policy: Start" Aug 13 01:18:27.045387 kubelet[2463]: I0813 01:18:27.045357 2463 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:18:27.045387 kubelet[2463]: I0813 01:18:27.045362 2463 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:18:27.045475 kubelet[2463]: I0813 01:18:27.045411 2463 state_mem.go:75] "Updated machine memory state" Aug 13 01:18:27.047094 kubelet[2463]: E0813 01:18:27.047086 2463 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 01:18:27.047171 kubelet[2463]: I0813 01:18:27.047165 2463 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:18:27.047196 kubelet[2463]: I0813 01:18:27.047172 2463 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:18:27.047537 kubelet[2463]: I0813 01:18:27.047527 2463 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:18:27.047976 kubelet[2463]: E0813 01:18:27.047956 2463 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 01:18:27.139890 kubelet[2463]: I0813 01:18:27.139784 2463 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.139890 kubelet[2463]: I0813 01:18:27.139844 2463 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.140370 kubelet[2463]: I0813 01:18:27.140145 2463 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.149525 kubelet[2463]: I0813 01:18:27.149433 2463 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 01:18:27.149757 kubelet[2463]: E0813 01:18:27.149567 2463 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-a-9864ec3500\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.150426 kubelet[2463]: I0813 01:18:27.150361 2463 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain 
dots]" Aug 13 01:18:27.150716 kubelet[2463]: E0813 01:18:27.150459 2463 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-a-9864ec3500\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.150716 kubelet[2463]: I0813 01:18:27.150598 2463 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 01:18:27.151060 kubelet[2463]: E0813 01:18:27.150748 2463 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.153419 kubelet[2463]: I0813 01:18:27.153364 2463 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.163904 kubelet[2463]: I0813 01:18:27.163826 2463 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.164107 kubelet[2463]: I0813 01:18:27.163958 2463 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.232980 kubelet[2463]: I0813 01:18:27.232722 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/681c146f5657410c5512a51071dbd18a-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" (UID: \"681c146f5657410c5512a51071dbd18a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.232980 kubelet[2463]: I0813 01:18:27.232821 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/681c146f5657410c5512a51071dbd18a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" (UID: 
\"681c146f5657410c5512a51071dbd18a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.232980 kubelet[2463]: I0813 01:18:27.232888 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/681c146f5657410c5512a51071dbd18a-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" (UID: \"681c146f5657410c5512a51071dbd18a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.233501 kubelet[2463]: I0813 01:18:27.233041 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/681c146f5657410c5512a51071dbd18a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" (UID: \"681c146f5657410c5512a51071dbd18a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.233501 kubelet[2463]: I0813 01:18:27.233130 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53dcf69a3152c58bba0277578321827d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-9864ec3500\" (UID: \"53dcf69a3152c58bba0277578321827d\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.233501 kubelet[2463]: I0813 01:18:27.233187 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/681c146f5657410c5512a51071dbd18a-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" (UID: \"681c146f5657410c5512a51071dbd18a\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.233501 kubelet[2463]: I0813 01:18:27.233287 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b4bfa68026266d10dd087653918fb93-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-9864ec3500\" (UID: \"4b4bfa68026266d10dd087653918fb93\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.233501 kubelet[2463]: I0813 01:18:27.233387 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b4bfa68026266d10dd087653918fb93-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-9864ec3500\" (UID: \"4b4bfa68026266d10dd087653918fb93\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.234050 kubelet[2463]: I0813 01:18:27.233445 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b4bfa68026266d10dd087653918fb93-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-9864ec3500\" (UID: \"4b4bfa68026266d10dd087653918fb93\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:27.498065 sudo[2511]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 01:18:27.498198 sudo[2511]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 01:18:27.824077 sudo[2511]: pam_unix(sudo:session): session closed for user root Aug 13 01:18:28.027369 kubelet[2463]: I0813 01:18:28.027318 2463 apiserver.go:52] "Watching apiserver" Aug 13 01:18:28.031834 kubelet[2463]: I0813 01:18:28.031825 2463 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:18:28.041159 kubelet[2463]: I0813 01:18:28.041147 2463 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:28.041224 kubelet[2463]: I0813 01:18:28.041217 2463 kubelet.go:3309] "Creating a mirror pod for 
static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:28.045634 kubelet[2463]: I0813 01:18:28.045606 2463 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 01:18:28.045634 kubelet[2463]: E0813 01:18:28.045629 2463 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-a-9864ec3500\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:28.046210 kubelet[2463]: I0813 01:18:28.046182 2463 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 01:18:28.046281 kubelet[2463]: E0813 01:18:28.046216 2463 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-a-9864ec3500\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-a-9864ec3500" Aug 13 01:18:28.057876 kubelet[2463]: I0813 01:18:28.057829 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-9864ec3500" podStartSLOduration=3.057819664 podStartE2EDuration="3.057819664s" podCreationTimestamp="2025-08-13 01:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:18:28.057778515 +0000 UTC m=+1.082286740" watchObservedRunningTime="2025-08-13 01:18:28.057819664 +0000 UTC m=+1.082327886" Aug 13 01:18:28.063162 kubelet[2463]: I0813 01:18:28.063144 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-a-9864ec3500" podStartSLOduration=3.063137508 podStartE2EDuration="3.063137508s" podCreationTimestamp="2025-08-13 01:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:18:28.063089887 +0000 UTC m=+1.087598111" watchObservedRunningTime="2025-08-13 01:18:28.063137508 +0000 UTC m=+1.087645730" Aug 13 01:18:28.068128 kubelet[2463]: I0813 01:18:28.068103 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-a-9864ec3500" podStartSLOduration=3.06809532 podStartE2EDuration="3.06809532s" podCreationTimestamp="2025-08-13 01:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:18:28.067899134 +0000 UTC m=+1.092407356" watchObservedRunningTime="2025-08-13 01:18:28.06809532 +0000 UTC m=+1.092603541" Aug 13 01:18:29.393628 sudo[1730]: pam_unix(sudo:session): session closed for user root Aug 13 01:18:29.394997 sshd[1727]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:29.397132 systemd[1]: sshd@6-147.75.71.225:22-139.178.89.65:52314.service: Deactivated successfully. Aug 13 01:18:29.397857 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:18:29.397989 systemd[1]: session-9.scope: Consumed 4.120s CPU time. Aug 13 01:18:29.398514 systemd-logind[1590]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:18:29.399414 systemd-logind[1590]: Removed session 9. Aug 13 01:18:33.013594 kubelet[2463]: I0813 01:18:33.013500 2463 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:18:33.014499 env[1557]: time="2025-08-13T01:18:33.014154866Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 01:18:33.015296 kubelet[2463]: I0813 01:18:33.014566 2463 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:18:33.019829 systemd[1]: Created slice kubepods-besteffort-pod0d6b1aa0_5466_4481_97bd_528461309b1e.slice. Aug 13 01:18:33.037634 systemd[1]: Created slice kubepods-burstable-podfc5d714b_fc13_404d_ac63_be597cf9ff4d.slice. Aug 13 01:18:33.071943 kubelet[2463]: I0813 01:18:33.071885 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0d6b1aa0-5466-4481-97bd-528461309b1e-kube-proxy\") pod \"kube-proxy-m2lsl\" (UID: \"0d6b1aa0-5466-4481-97bd-528461309b1e\") " pod="kube-system/kube-proxy-m2lsl" Aug 13 01:18:33.071943 kubelet[2463]: I0813 01:18:33.071922 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d6b1aa0-5466-4481-97bd-528461309b1e-xtables-lock\") pod \"kube-proxy-m2lsl\" (UID: \"0d6b1aa0-5466-4481-97bd-528461309b1e\") " pod="kube-system/kube-proxy-m2lsl" Aug 13 01:18:33.071943 kubelet[2463]: I0813 01:18:33.071944 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-cgroup\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072147 kubelet[2463]: I0813 01:18:33.071960 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-etc-cni-netd\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072147 kubelet[2463]: I0813 01:18:33.071976 2463 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-lib-modules\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072147 kubelet[2463]: I0813 01:18:33.072008 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d6b1aa0-5466-4481-97bd-528461309b1e-lib-modules\") pod \"kube-proxy-m2lsl\" (UID: \"0d6b1aa0-5466-4481-97bd-528461309b1e\") " pod="kube-system/kube-proxy-m2lsl" Aug 13 01:18:33.072147 kubelet[2463]: I0813 01:18:33.072023 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cni-path\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072147 kubelet[2463]: I0813 01:18:33.072038 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-xtables-lock\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072147 kubelet[2463]: I0813 01:18:33.072057 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprr9\" (UniqueName: \"kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-kube-api-access-fprr9\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072362 kubelet[2463]: I0813 01:18:33.072074 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-run\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072362 kubelet[2463]: I0813 01:18:33.072090 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5dt\" (UniqueName: \"kubernetes.io/projected/0d6b1aa0-5466-4481-97bd-528461309b1e-kube-api-access-5g5dt\") pod \"kube-proxy-m2lsl\" (UID: \"0d6b1aa0-5466-4481-97bd-528461309b1e\") " pod="kube-system/kube-proxy-m2lsl" Aug 13 01:18:33.072362 kubelet[2463]: I0813 01:18:33.072107 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc5d714b-fc13-404d-ac63-be597cf9ff4d-clustermesh-secrets\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072362 kubelet[2463]: I0813 01:18:33.072122 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-host-proc-sys-kernel\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072362 kubelet[2463]: I0813 01:18:33.072137 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-bpf-maps\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072362 kubelet[2463]: I0813 01:18:33.072151 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-hostproc\") pod \"cilium-sc5nx\" (UID: 
\"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072700 kubelet[2463]: I0813 01:18:33.072167 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-config-path\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072700 kubelet[2463]: I0813 01:18:33.072181 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-host-proc-sys-net\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.072700 kubelet[2463]: I0813 01:18:33.072197 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-hubble-tls\") pod \"cilium-sc5nx\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " pod="kube-system/cilium-sc5nx" Aug 13 01:18:33.173544 kubelet[2463]: I0813 01:18:33.173463 2463 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 01:18:33.186724 kubelet[2463]: E0813 01:18:33.186666 2463 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 01:18:33.186724 kubelet[2463]: E0813 01:18:33.186725 2463 projected.go:194] Error preparing data for projected volume kube-api-access-5g5dt for pod kube-system/kube-proxy-m2lsl: configmap "kube-root-ca.crt" not found Aug 13 01:18:33.187109 kubelet[2463]: E0813 01:18:33.186912 2463 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0d6b1aa0-5466-4481-97bd-528461309b1e-kube-api-access-5g5dt podName:0d6b1aa0-5466-4481-97bd-528461309b1e nodeName:}" failed. No retries permitted until 2025-08-13 01:18:33.686857755 +0000 UTC m=+6.711366038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5g5dt" (UniqueName: "kubernetes.io/projected/0d6b1aa0-5466-4481-97bd-528461309b1e-kube-api-access-5g5dt") pod "kube-proxy-m2lsl" (UID: "0d6b1aa0-5466-4481-97bd-528461309b1e") : configmap "kube-root-ca.crt" not found Aug 13 01:18:33.189262 kubelet[2463]: E0813 01:18:33.189177 2463 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 01:18:33.189262 kubelet[2463]: E0813 01:18:33.189256 2463 projected.go:194] Error preparing data for projected volume kube-api-access-fprr9 for pod kube-system/cilium-sc5nx: configmap "kube-root-ca.crt" not found Aug 13 01:18:33.189550 kubelet[2463]: E0813 01:18:33.189367 2463 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-kube-api-access-fprr9 podName:fc5d714b-fc13-404d-ac63-be597cf9ff4d nodeName:}" failed. No retries permitted until 2025-08-13 01:18:33.689329085 +0000 UTC m=+6.713837363 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fprr9" (UniqueName: "kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-kube-api-access-fprr9") pod "cilium-sc5nx" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d") : configmap "kube-root-ca.crt" not found Aug 13 01:18:33.778422 kubelet[2463]: E0813 01:18:33.778352 2463 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 01:18:33.778422 kubelet[2463]: E0813 01:18:33.778379 2463 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 01:18:33.778422 kubelet[2463]: E0813 01:18:33.778416 2463 projected.go:194] Error preparing data for projected volume kube-api-access-5g5dt for pod kube-system/kube-proxy-m2lsl: configmap "kube-root-ca.crt" not found Aug 13 01:18:33.778923 kubelet[2463]: E0813 01:18:33.778442 2463 projected.go:194] Error preparing data for projected volume kube-api-access-fprr9 for pod kube-system/cilium-sc5nx: configmap "kube-root-ca.crt" not found Aug 13 01:18:33.778923 kubelet[2463]: E0813 01:18:33.778538 2463 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0d6b1aa0-5466-4481-97bd-528461309b1e-kube-api-access-5g5dt podName:0d6b1aa0-5466-4481-97bd-528461309b1e nodeName:}" failed. No retries permitted until 2025-08-13 01:18:34.778498802 +0000 UTC m=+7.803007080 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5g5dt" (UniqueName: "kubernetes.io/projected/0d6b1aa0-5466-4481-97bd-528461309b1e-kube-api-access-5g5dt") pod "kube-proxy-m2lsl" (UID: "0d6b1aa0-5466-4481-97bd-528461309b1e") : configmap "kube-root-ca.crt" not found Aug 13 01:18:33.778923 kubelet[2463]: E0813 01:18:33.778580 2463 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-kube-api-access-fprr9 podName:fc5d714b-fc13-404d-ac63-be597cf9ff4d nodeName:}" failed. 
No retries permitted until 2025-08-13 01:18:34.778558647 +0000 UTC m=+7.803066924 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fprr9" (UniqueName: "kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-kube-api-access-fprr9") pod "cilium-sc5nx" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d") : configmap "kube-root-ca.crt" not found Aug 13 01:18:33.879510 update_engine[1551]: I0813 01:18:33.879398 1551 update_attempter.cc:509] Updating boot flags... Aug 13 01:18:34.289084 systemd[1]: Created slice kubepods-besteffort-podfa01eaaa_c94a_49ba_96d5_8b03bc62ac1d.slice. Aug 13 01:18:34.381632 kubelet[2463]: I0813 01:18:34.381534 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jvtmv\" (UID: \"fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d\") " pod="kube-system/cilium-operator-6c4d7847fc-jvtmv" Aug 13 01:18:34.382582 kubelet[2463]: I0813 01:18:34.381677 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wc2m\" (UniqueName: \"kubernetes.io/projected/fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d-kube-api-access-6wc2m\") pod \"cilium-operator-6c4d7847fc-jvtmv\" (UID: \"fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d\") " pod="kube-system/cilium-operator-6c4d7847fc-jvtmv" Aug 13 01:18:34.596431 env[1557]: time="2025-08-13T01:18:34.596307794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jvtmv,Uid:fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d,Namespace:kube-system,Attempt:0,}" Aug 13 01:18:34.836620 env[1557]: time="2025-08-13T01:18:34.836542260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m2lsl,Uid:0d6b1aa0-5466-4481-97bd-528461309b1e,Namespace:kube-system,Attempt:0,}" Aug 13 01:18:34.840292 env[1557]: 
time="2025-08-13T01:18:34.840252658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sc5nx,Uid:fc5d714b-fc13-404d-ac63-be597cf9ff4d,Namespace:kube-system,Attempt:0,}" Aug 13 01:18:35.012568 env[1557]: time="2025-08-13T01:18:35.012471046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:18:35.012568 env[1557]: time="2025-08-13T01:18:35.012507893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:18:35.012568 env[1557]: time="2025-08-13T01:18:35.012532083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:18:35.012677 env[1557]: time="2025-08-13T01:18:35.012590711Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb pid=2648 runtime=io.containerd.runc.v2 Aug 13 01:18:35.019091 systemd[1]: Started cri-containerd-4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb.scope. Aug 13 01:18:35.042037 env[1557]: time="2025-08-13T01:18:35.042013008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jvtmv,Uid:fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\"" Aug 13 01:18:35.042886 env[1557]: time="2025-08-13T01:18:35.042841136Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 01:18:35.177026 env[1557]: time="2025-08-13T01:18:35.176826250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:18:35.177026 env[1557]: time="2025-08-13T01:18:35.176926384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:18:35.177026 env[1557]: time="2025-08-13T01:18:35.176965971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:18:35.177579 env[1557]: time="2025-08-13T01:18:35.177352958Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04 pid=2691 runtime=io.containerd.runc.v2 Aug 13 01:18:35.180110 env[1557]: time="2025-08-13T01:18:35.179938023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:18:35.180110 env[1557]: time="2025-08-13T01:18:35.180031281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:18:35.180110 env[1557]: time="2025-08-13T01:18:35.180070316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:18:35.180572 env[1557]: time="2025-08-13T01:18:35.180442851Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5fd4e7f56e910a181f4055f283f09823bbf21e042bff9dcbdeb4c2d6cb48565 pid=2699 runtime=io.containerd.runc.v2 Aug 13 01:18:35.207056 systemd[1]: Started cri-containerd-25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04.scope. Aug 13 01:18:35.210537 systemd[1]: Started cri-containerd-e5fd4e7f56e910a181f4055f283f09823bbf21e042bff9dcbdeb4c2d6cb48565.scope. 
Aug 13 01:18:35.235973 env[1557]: time="2025-08-13T01:18:35.235919277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sc5nx,Uid:fc5d714b-fc13-404d-ac63-be597cf9ff4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\"" Aug 13 01:18:35.239841 env[1557]: time="2025-08-13T01:18:35.239768794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m2lsl,Uid:0d6b1aa0-5466-4481-97bd-528461309b1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5fd4e7f56e910a181f4055f283f09823bbf21e042bff9dcbdeb4c2d6cb48565\"" Aug 13 01:18:35.243925 env[1557]: time="2025-08-13T01:18:35.243882342Z" level=info msg="CreateContainer within sandbox \"e5fd4e7f56e910a181f4055f283f09823bbf21e042bff9dcbdeb4c2d6cb48565\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:18:35.253934 env[1557]: time="2025-08-13T01:18:35.253848663Z" level=info msg="CreateContainer within sandbox \"e5fd4e7f56e910a181f4055f283f09823bbf21e042bff9dcbdeb4c2d6cb48565\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"39e576a5e8d4709acd2bf117c445502d7aa8fc838857c6c30e31e34a4b531b37\"" Aug 13 01:18:35.254418 env[1557]: time="2025-08-13T01:18:35.254378475Z" level=info msg="StartContainer for \"39e576a5e8d4709acd2bf117c445502d7aa8fc838857c6c30e31e34a4b531b37\"" Aug 13 01:18:35.275701 systemd[1]: Started cri-containerd-39e576a5e8d4709acd2bf117c445502d7aa8fc838857c6c30e31e34a4b531b37.scope. 
Aug 13 01:18:35.310762 env[1557]: time="2025-08-13T01:18:35.310708303Z" level=info msg="StartContainer for \"39e576a5e8d4709acd2bf117c445502d7aa8fc838857c6c30e31e34a4b531b37\" returns successfully" Aug 13 01:18:36.081776 kubelet[2463]: I0813 01:18:36.081592 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m2lsl" podStartSLOduration=4.081541756 podStartE2EDuration="4.081541756s" podCreationTimestamp="2025-08-13 01:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:18:36.081284704 +0000 UTC m=+9.105792997" watchObservedRunningTime="2025-08-13 01:18:36.081541756 +0000 UTC m=+9.106050034" Aug 13 01:18:36.387680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3416446848.mount: Deactivated successfully. Aug 13 01:18:37.321732 env[1557]: time="2025-08-13T01:18:37.321677429Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:37.322317 env[1557]: time="2025-08-13T01:18:37.322227143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:37.322880 env[1557]: time="2025-08-13T01:18:37.322866324Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:37.323478 env[1557]: time="2025-08-13T01:18:37.323462986Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns 
image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 01:18:37.324097 env[1557]: time="2025-08-13T01:18:37.324082639Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 01:18:37.325426 env[1557]: time="2025-08-13T01:18:37.325393349Z" level=info msg="CreateContainer within sandbox \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 01:18:37.330307 env[1557]: time="2025-08-13T01:18:37.330224888Z" level=info msg="CreateContainer within sandbox \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783\"" Aug 13 01:18:37.330718 env[1557]: time="2025-08-13T01:18:37.330665178Z" level=info msg="StartContainer for \"963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783\"" Aug 13 01:18:37.361491 systemd[1]: Started cri-containerd-963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783.scope. 
Aug 13 01:18:37.373926 env[1557]: time="2025-08-13T01:18:37.373898934Z" level=info msg="StartContainer for \"963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783\" returns successfully" Aug 13 01:18:38.086511 kubelet[2463]: I0813 01:18:38.086364 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jvtmv" podStartSLOduration=1.805043425 podStartE2EDuration="4.086327682s" podCreationTimestamp="2025-08-13 01:18:34 +0000 UTC" firstStartedPulling="2025-08-13 01:18:35.042660791 +0000 UTC m=+8.067169013" lastFinishedPulling="2025-08-13 01:18:37.323945045 +0000 UTC m=+10.348453270" observedRunningTime="2025-08-13 01:18:38.085761413 +0000 UTC m=+11.110269716" watchObservedRunningTime="2025-08-13 01:18:38.086327682 +0000 UTC m=+11.110835956" Aug 13 01:18:42.454251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24039032.mount: Deactivated successfully. Aug 13 01:18:44.181997 env[1557]: time="2025-08-13T01:18:44.181944136Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:44.182435 env[1557]: time="2025-08-13T01:18:44.182394082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:44.183222 env[1557]: time="2025-08-13T01:18:44.183174952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:18:44.183571 env[1557]: time="2025-08-13T01:18:44.183528200Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 01:18:44.185176 env[1557]: time="2025-08-13T01:18:44.185161991Z" level=info msg="CreateContainer within sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:18:44.189350 env[1557]: time="2025-08-13T01:18:44.189272604Z" level=info msg="CreateContainer within sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4\"" Aug 13 01:18:44.189518 env[1557]: time="2025-08-13T01:18:44.189502920Z" level=info msg="StartContainer for \"be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4\"" Aug 13 01:18:44.190723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084354430.mount: Deactivated successfully. Aug 13 01:18:44.199389 systemd[1]: Started cri-containerd-be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4.scope. Aug 13 01:18:44.209401 env[1557]: time="2025-08-13T01:18:44.209352233Z" level=info msg="StartContainer for \"be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4\" returns successfully" Aug 13 01:18:44.214168 systemd[1]: cri-containerd-be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4.scope: Deactivated successfully. Aug 13 01:18:45.194007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4-rootfs.mount: Deactivated successfully. 
Aug 13 01:18:45.470063 env[1557]: time="2025-08-13T01:18:45.469871477Z" level=info msg="shim disconnected" id=be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4 Aug 13 01:18:45.470063 env[1557]: time="2025-08-13T01:18:45.469963359Z" level=warning msg="cleaning up after shim disconnected" id=be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4 namespace=k8s.io Aug 13 01:18:45.470063 env[1557]: time="2025-08-13T01:18:45.469996633Z" level=info msg="cleaning up dead shim" Aug 13 01:18:45.485340 env[1557]: time="2025-08-13T01:18:45.485243924Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3038 runtime=io.containerd.runc.v2\n" Aug 13 01:18:46.102085 env[1557]: time="2025-08-13T01:18:46.101991907Z" level=info msg="CreateContainer within sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:18:46.117124 env[1557]: time="2025-08-13T01:18:46.117029739Z" level=info msg="CreateContainer within sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036\"" Aug 13 01:18:46.118095 env[1557]: time="2025-08-13T01:18:46.117998585Z" level=info msg="StartContainer for \"790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036\"" Aug 13 01:18:46.144101 systemd[1]: Started cri-containerd-790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036.scope. Aug 13 01:18:46.161035 env[1557]: time="2025-08-13T01:18:46.161000975Z" level=info msg="StartContainer for \"790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036\" returns successfully" Aug 13 01:18:46.170882 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:18:46.171193 systemd[1]: Stopped systemd-sysctl.service. 
Aug 13 01:18:46.171380 systemd[1]: Stopping systemd-sysctl.service... Aug 13 01:18:46.172634 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:18:46.173023 systemd[1]: cri-containerd-790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036.scope: Deactivated successfully. Aug 13 01:18:46.178328 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:18:46.189569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036-rootfs.mount: Deactivated successfully. Aug 13 01:18:46.214622 env[1557]: time="2025-08-13T01:18:46.214583810Z" level=info msg="shim disconnected" id=790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036 Aug 13 01:18:46.214622 env[1557]: time="2025-08-13T01:18:46.214621165Z" level=warning msg="cleaning up after shim disconnected" id=790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036 namespace=k8s.io Aug 13 01:18:46.214780 env[1557]: time="2025-08-13T01:18:46.214630918Z" level=info msg="cleaning up dead shim" Aug 13 01:18:46.220667 env[1557]: time="2025-08-13T01:18:46.220603433Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3102 runtime=io.containerd.runc.v2\n" Aug 13 01:18:47.116488 env[1557]: time="2025-08-13T01:18:47.116370695Z" level=info msg="CreateContainer within sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:18:47.135549 env[1557]: time="2025-08-13T01:18:47.135518782Z" level=info msg="CreateContainer within sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0\"" Aug 13 01:18:47.135818 env[1557]: time="2025-08-13T01:18:47.135795109Z" level=info msg="StartContainer for 
\"7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0\"" Aug 13 01:18:47.146308 systemd[1]: Started cri-containerd-7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0.scope. Aug 13 01:18:47.158211 env[1557]: time="2025-08-13T01:18:47.158185704Z" level=info msg="StartContainer for \"7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0\" returns successfully" Aug 13 01:18:47.159567 systemd[1]: cri-containerd-7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0.scope: Deactivated successfully. Aug 13 01:18:47.169586 env[1557]: time="2025-08-13T01:18:47.169557219Z" level=info msg="shim disconnected" id=7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0 Aug 13 01:18:47.169586 env[1557]: time="2025-08-13T01:18:47.169586107Z" level=warning msg="cleaning up after shim disconnected" id=7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0 namespace=k8s.io Aug 13 01:18:47.169700 env[1557]: time="2025-08-13T01:18:47.169591975Z" level=info msg="cleaning up dead shim" Aug 13 01:18:47.173059 env[1557]: time="2025-08-13T01:18:47.173035528Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3161 runtime=io.containerd.runc.v2\n" Aug 13 01:18:47.196383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0-rootfs.mount: Deactivated successfully. 
Aug 13 01:18:48.116360 env[1557]: time="2025-08-13T01:18:48.116270957Z" level=info msg="CreateContainer within sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:18:48.133932 env[1557]: time="2025-08-13T01:18:48.133830037Z" level=info msg="CreateContainer within sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4\"" Aug 13 01:18:48.134953 env[1557]: time="2025-08-13T01:18:48.134883068Z" level=info msg="StartContainer for \"ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4\"" Aug 13 01:18:48.176944 systemd[1]: Started cri-containerd-ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4.scope. Aug 13 01:18:48.218328 env[1557]: time="2025-08-13T01:18:48.218253621Z" level=info msg="StartContainer for \"ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4\" returns successfully" Aug 13 01:18:48.220124 systemd[1]: cri-containerd-ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4.scope: Deactivated successfully. Aug 13 01:18:48.248752 env[1557]: time="2025-08-13T01:18:48.248675829Z" level=info msg="shim disconnected" id=ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4 Aug 13 01:18:48.248752 env[1557]: time="2025-08-13T01:18:48.248748475Z" level=warning msg="cleaning up after shim disconnected" id=ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4 namespace=k8s.io Aug 13 01:18:48.249129 env[1557]: time="2025-08-13T01:18:48.248768683Z" level=info msg="cleaning up dead shim" Aug 13 01:18:48.249046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4-rootfs.mount: Deactivated successfully. 
Aug 13 01:18:48.261449 env[1557]: time="2025-08-13T01:18:48.261361643Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:18:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3215 runtime=io.containerd.runc.v2\n" Aug 13 01:18:49.116523 env[1557]: time="2025-08-13T01:18:49.116459580Z" level=info msg="CreateContainer within sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 01:18:49.147948 env[1557]: time="2025-08-13T01:18:49.147818172Z" level=info msg="CreateContainer within sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788\"" Aug 13 01:18:49.148202 env[1557]: time="2025-08-13T01:18:49.148189585Z" level=info msg="StartContainer for \"96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788\"" Aug 13 01:18:49.156927 systemd[1]: Started cri-containerd-96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788.scope. Aug 13 01:18:49.169995 env[1557]: time="2025-08-13T01:18:49.169969384Z" level=info msg="StartContainer for \"96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788\" returns successfully" Aug 13 01:18:49.222241 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Aug 13 01:18:49.230452 kubelet[2463]: I0813 01:18:49.230437 2463 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 01:18:49.247453 systemd[1]: Created slice kubepods-burstable-pod3ad750e8_fc8e_4ba9_9021_9ed5b5657bac.slice. Aug 13 01:18:49.273778 systemd[1]: Created slice kubepods-burstable-pod0c9f98da_cd88_4f2d_a1a6_82f2ba6a0998.slice. 
Aug 13 01:18:49.290837 kubelet[2463]: I0813 01:18:49.290819 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxt9\" (UniqueName: \"kubernetes.io/projected/3ad750e8-fc8e-4ba9-9021-9ed5b5657bac-kube-api-access-wwxt9\") pod \"coredns-674b8bbfcf-66qtf\" (UID: \"3ad750e8-fc8e-4ba9-9021-9ed5b5657bac\") " pod="kube-system/coredns-674b8bbfcf-66qtf" Aug 13 01:18:49.290915 kubelet[2463]: I0813 01:18:49.290844 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c9f98da-cd88-4f2d-a1a6-82f2ba6a0998-config-volume\") pod \"coredns-674b8bbfcf-b94mb\" (UID: \"0c9f98da-cd88-4f2d-a1a6-82f2ba6a0998\") " pod="kube-system/coredns-674b8bbfcf-b94mb" Aug 13 01:18:49.290915 kubelet[2463]: I0813 01:18:49.290863 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62rgz\" (UniqueName: \"kubernetes.io/projected/0c9f98da-cd88-4f2d-a1a6-82f2ba6a0998-kube-api-access-62rgz\") pod \"coredns-674b8bbfcf-b94mb\" (UID: \"0c9f98da-cd88-4f2d-a1a6-82f2ba6a0998\") " pod="kube-system/coredns-674b8bbfcf-b94mb" Aug 13 01:18:49.290915 kubelet[2463]: I0813 01:18:49.290872 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ad750e8-fc8e-4ba9-9021-9ed5b5657bac-config-volume\") pod \"coredns-674b8bbfcf-66qtf\" (UID: \"3ad750e8-fc8e-4ba9-9021-9ed5b5657bac\") " pod="kube-system/coredns-674b8bbfcf-66qtf" Aug 13 01:18:49.391311 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Aug 13 01:18:49.550418 env[1557]: time="2025-08-13T01:18:49.550218579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-66qtf,Uid:3ad750e8-fc8e-4ba9-9021-9ed5b5657bac,Namespace:kube-system,Attempt:0,}" Aug 13 01:18:49.576753 env[1557]: time="2025-08-13T01:18:49.576663089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b94mb,Uid:0c9f98da-cd88-4f2d-a1a6-82f2ba6a0998,Namespace:kube-system,Attempt:0,}" Aug 13 01:18:50.145719 kubelet[2463]: I0813 01:18:50.145570 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sc5nx" podStartSLOduration=9.198934538 podStartE2EDuration="18.145531133s" podCreationTimestamp="2025-08-13 01:18:32 +0000 UTC" firstStartedPulling="2025-08-13 01:18:35.237439575 +0000 UTC m=+8.261947827" lastFinishedPulling="2025-08-13 01:18:44.184036201 +0000 UTC m=+17.208544422" observedRunningTime="2025-08-13 01:18:50.144169603 +0000 UTC m=+23.168677949" watchObservedRunningTime="2025-08-13 01:18:50.145531133 +0000 UTC m=+23.170039395" Aug 13 01:18:50.991072 systemd-networkd[1313]: cilium_host: Link UP Aug 13 01:18:50.991163 systemd-networkd[1313]: cilium_net: Link UP Aug 13 01:18:50.998235 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Aug 13 01:18:50.998269 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 01:18:50.998296 systemd-networkd[1313]: cilium_net: Gained carrier Aug 13 01:18:51.005480 systemd-networkd[1313]: cilium_host: Gained carrier Aug 13 01:18:51.052788 systemd-networkd[1313]: cilium_vxlan: Link UP Aug 13 01:18:51.052792 systemd-networkd[1313]: cilium_vxlan: Gained carrier Aug 13 01:18:51.190311 kernel: NET: Registered PF_ALG protocol family Aug 13 01:18:51.319355 systemd-networkd[1313]: cilium_host: Gained IPv6LL Aug 13 01:18:51.480398 systemd-networkd[1313]: cilium_net: Gained IPv6LL Aug 13 01:18:51.766655 systemd-networkd[1313]: lxc_health: Link UP Aug 13 01:18:51.789246 kernel: 
IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 01:18:51.789264 systemd-networkd[1313]: lxc_health: Gained carrier Aug 13 01:18:52.094454 systemd-networkd[1313]: lxc2ebddb3ecbf9: Link UP Aug 13 01:18:52.113308 kernel: eth0: renamed from tmp02d78 Aug 13 01:18:52.140783 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:18:52.140849 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2ebddb3ecbf9: link becomes ready Aug 13 01:18:52.141019 systemd-networkd[1313]: lxc2ebddb3ecbf9: Gained carrier Aug 13 01:18:52.141648 systemd-networkd[1313]: lxc89cf92b21118: Link UP Aug 13 01:18:52.157241 kernel: eth0: renamed from tmpf4078 Aug 13 01:18:52.176248 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc89cf92b21118: link becomes ready Aug 13 01:18:52.176274 systemd-networkd[1313]: lxc89cf92b21118: Gained carrier Aug 13 01:18:52.370516 systemd-networkd[1313]: cilium_vxlan: Gained IPv6LL Aug 13 01:18:53.703414 systemd-networkd[1313]: lxc89cf92b21118: Gained IPv6LL Aug 13 01:18:53.768356 systemd-networkd[1313]: lxc_health: Gained IPv6LL Aug 13 01:18:53.768484 systemd-networkd[1313]: lxc2ebddb3ecbf9: Gained IPv6LL Aug 13 01:18:54.419055 env[1557]: time="2025-08-13T01:18:54.419018561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:18:54.419055 env[1557]: time="2025-08-13T01:18:54.419042010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:18:54.419055 env[1557]: time="2025-08-13T01:18:54.419050186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:18:54.419378 env[1557]: time="2025-08-13T01:18:54.419120649Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f40785609d42f87a12652f178e3cc4c376e399e48108e8ffe24b476a9a70d9c3 pid=3896 runtime=io.containerd.runc.v2 Aug 13 01:18:54.419378 env[1557]: time="2025-08-13T01:18:54.419293233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:18:54.419378 env[1557]: time="2025-08-13T01:18:54.419311898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:18:54.419378 env[1557]: time="2025-08-13T01:18:54.419321715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:18:54.419525 env[1557]: time="2025-08-13T01:18:54.419471867Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02d78e46f6fc99dda48dca1fd0e2a52749011264e110e85058f1aa6840ccdb8a pid=3902 runtime=io.containerd.runc.v2 Aug 13 01:18:54.428441 systemd[1]: Started cri-containerd-02d78e46f6fc99dda48dca1fd0e2a52749011264e110e85058f1aa6840ccdb8a.scope. Aug 13 01:18:54.429217 systemd[1]: Started cri-containerd-f40785609d42f87a12652f178e3cc4c376e399e48108e8ffe24b476a9a70d9c3.scope. 
Aug 13 01:18:54.453598 env[1557]: time="2025-08-13T01:18:54.453564981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-66qtf,Uid:3ad750e8-fc8e-4ba9-9021-9ed5b5657bac,Namespace:kube-system,Attempt:0,} returns sandbox id \"02d78e46f6fc99dda48dca1fd0e2a52749011264e110e85058f1aa6840ccdb8a\"" Aug 13 01:18:54.453730 env[1557]: time="2025-08-13T01:18:54.453714280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b94mb,Uid:0c9f98da-cd88-4f2d-a1a6-82f2ba6a0998,Namespace:kube-system,Attempt:0,} returns sandbox id \"f40785609d42f87a12652f178e3cc4c376e399e48108e8ffe24b476a9a70d9c3\"" Aug 13 01:18:54.455787 env[1557]: time="2025-08-13T01:18:54.455767745Z" level=info msg="CreateContainer within sandbox \"f40785609d42f87a12652f178e3cc4c376e399e48108e8ffe24b476a9a70d9c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:18:54.456145 env[1557]: time="2025-08-13T01:18:54.456129068Z" level=info msg="CreateContainer within sandbox \"02d78e46f6fc99dda48dca1fd0e2a52749011264e110e85058f1aa6840ccdb8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:18:54.477815 env[1557]: time="2025-08-13T01:18:54.477792489Z" level=info msg="CreateContainer within sandbox \"f40785609d42f87a12652f178e3cc4c376e399e48108e8ffe24b476a9a70d9c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5e0ad05bf7fa5e3a511c11c6938e1dac434d5f18a8f8be8ae44ec0abbcb1e8d\"" Aug 13 01:18:54.478084 env[1557]: time="2025-08-13T01:18:54.478067987Z" level=info msg="StartContainer for \"c5e0ad05bf7fa5e3a511c11c6938e1dac434d5f18a8f8be8ae44ec0abbcb1e8d\"" Aug 13 01:18:54.479004 env[1557]: time="2025-08-13T01:18:54.478986861Z" level=info msg="CreateContainer within sandbox \"02d78e46f6fc99dda48dca1fd0e2a52749011264e110e85058f1aa6840ccdb8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a2fdbcaddecdb95a9c88223d7d543e6b057d3259e2a642c9ed9238a12f7dc4a\"" Aug 13 01:18:54.479192 env[1557]: 
time="2025-08-13T01:18:54.479177449Z" level=info msg="StartContainer for \"3a2fdbcaddecdb95a9c88223d7d543e6b057d3259e2a642c9ed9238a12f7dc4a\"" Aug 13 01:18:54.486826 systemd[1]: Started cri-containerd-3a2fdbcaddecdb95a9c88223d7d543e6b057d3259e2a642c9ed9238a12f7dc4a.scope. Aug 13 01:18:54.487571 systemd[1]: Started cri-containerd-c5e0ad05bf7fa5e3a511c11c6938e1dac434d5f18a8f8be8ae44ec0abbcb1e8d.scope. Aug 13 01:18:54.500431 env[1557]: time="2025-08-13T01:18:54.500400023Z" level=info msg="StartContainer for \"3a2fdbcaddecdb95a9c88223d7d543e6b057d3259e2a642c9ed9238a12f7dc4a\" returns successfully" Aug 13 01:18:54.500842 env[1557]: time="2025-08-13T01:18:54.500826226Z" level=info msg="StartContainer for \"c5e0ad05bf7fa5e3a511c11c6938e1dac434d5f18a8f8be8ae44ec0abbcb1e8d\" returns successfully" Aug 13 01:18:55.141792 kubelet[2463]: I0813 01:18:55.141674 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-66qtf" podStartSLOduration=21.141643744 podStartE2EDuration="21.141643744s" podCreationTimestamp="2025-08-13 01:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:18:55.140782033 +0000 UTC m=+28.165290313" watchObservedRunningTime="2025-08-13 01:18:55.141643744 +0000 UTC m=+28.166152007" Aug 13 01:18:55.163226 kubelet[2463]: I0813 01:18:55.163191 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b94mb" podStartSLOduration=21.163174331 podStartE2EDuration="21.163174331s" podCreationTimestamp="2025-08-13 01:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:18:55.162935535 +0000 UTC m=+28.187443760" watchObservedRunningTime="2025-08-13 01:18:55.163174331 +0000 UTC m=+28.187682554" Aug 13 01:24:18.779773 systemd[1]: Started 
sshd@7-147.75.71.225:22-45.78.196.236:57426.service. Aug 13 01:24:19.478217 sshd[4117]: Invalid user from 45.78.196.236 port 57426 Aug 13 01:24:26.769499 sshd[4117]: Connection closed by invalid user 45.78.196.236 port 57426 [preauth] Aug 13 01:24:26.772587 systemd[1]: sshd@7-147.75.71.225:22-45.78.196.236:57426.service: Deactivated successfully. Aug 13 01:24:41.989761 systemd[1]: Started sshd@8-147.75.71.225:22-139.178.89.65:57928.service. Aug 13 01:24:42.016848 sshd[4126]: Accepted publickey for core from 139.178.89.65 port 57928 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:24:42.017803 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:42.021086 systemd-logind[1590]: New session 10 of user core. Aug 13 01:24:42.021871 systemd[1]: Started session-10.scope. Aug 13 01:24:42.145689 sshd[4126]: pam_unix(sshd:session): session closed for user core Aug 13 01:24:42.151944 systemd[1]: sshd@8-147.75.71.225:22-139.178.89.65:57928.service: Deactivated successfully. Aug 13 01:24:42.153794 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:24:42.155509 systemd-logind[1590]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:24:42.158039 systemd-logind[1590]: Removed session 10. Aug 13 01:24:47.156810 systemd[1]: Started sshd@9-147.75.71.225:22-139.178.89.65:57930.service. Aug 13 01:24:47.188906 sshd[4158]: Accepted publickey for core from 139.178.89.65 port 57930 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:24:47.192366 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:47.203266 systemd-logind[1590]: New session 11 of user core. Aug 13 01:24:47.206545 systemd[1]: Started session-11.scope. Aug 13 01:24:47.313324 sshd[4158]: pam_unix(sshd:session): session closed for user core Aug 13 01:24:47.315187 systemd[1]: sshd@9-147.75.71.225:22-139.178.89.65:57930.service: Deactivated successfully. 
Aug 13 01:24:47.315730 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:24:47.316179 systemd-logind[1590]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:24:47.316890 systemd-logind[1590]: Removed session 11. Aug 13 01:24:52.322566 systemd[1]: Started sshd@10-147.75.71.225:22-139.178.89.65:45332.service. Aug 13 01:24:52.349460 sshd[4184]: Accepted publickey for core from 139.178.89.65 port 45332 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:24:52.350383 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:52.353534 systemd-logind[1590]: New session 12 of user core. Aug 13 01:24:52.354389 systemd[1]: Started session-12.scope. Aug 13 01:24:52.441730 sshd[4184]: pam_unix(sshd:session): session closed for user core Aug 13 01:24:52.443421 systemd[1]: sshd@10-147.75.71.225:22-139.178.89.65:45332.service: Deactivated successfully. Aug 13 01:24:52.443898 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:24:52.444225 systemd-logind[1590]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:24:52.444860 systemd-logind[1590]: Removed session 12. Aug 13 01:24:57.451378 systemd[1]: Started sshd@11-147.75.71.225:22-139.178.89.65:45346.service. Aug 13 01:24:57.479006 sshd[4211]: Accepted publickey for core from 139.178.89.65 port 45346 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:24:57.479965 sshd[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:57.483168 systemd-logind[1590]: New session 13 of user core. Aug 13 01:24:57.484019 systemd[1]: Started session-13.scope. Aug 13 01:24:57.572072 sshd[4211]: pam_unix(sshd:session): session closed for user core Aug 13 01:24:57.574102 systemd[1]: sshd@11-147.75.71.225:22-139.178.89.65:45346.service: Deactivated successfully. Aug 13 01:24:57.574500 systemd[1]: session-13.scope: Deactivated successfully. 
Aug 13 01:24:57.574951 systemd-logind[1590]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:24:57.575611 systemd[1]: Started sshd@12-147.75.71.225:22-139.178.89.65:45358.service. Aug 13 01:24:57.576090 systemd-logind[1590]: Removed session 13. Aug 13 01:24:57.604345 sshd[4237]: Accepted publickey for core from 139.178.89.65 port 45358 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:24:57.605569 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:57.609372 systemd-logind[1590]: New session 14 of user core. Aug 13 01:24:57.610257 systemd[1]: Started session-14.scope. Aug 13 01:24:57.737056 sshd[4237]: pam_unix(sshd:session): session closed for user core Aug 13 01:24:57.739462 systemd[1]: sshd@12-147.75.71.225:22-139.178.89.65:45358.service: Deactivated successfully. Aug 13 01:24:57.739934 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:24:57.740364 systemd-logind[1590]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:24:57.741271 systemd[1]: Started sshd@13-147.75.71.225:22-139.178.89.65:45368.service. Aug 13 01:24:57.741871 systemd-logind[1590]: Removed session 14. Aug 13 01:24:57.771282 sshd[4261]: Accepted publickey for core from 139.178.89.65 port 45368 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:24:57.772317 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:57.775546 systemd-logind[1590]: New session 15 of user core. Aug 13 01:24:57.776251 systemd[1]: Started session-15.scope. Aug 13 01:24:57.907596 sshd[4261]: pam_unix(sshd:session): session closed for user core Aug 13 01:24:57.912021 systemd[1]: sshd@13-147.75.71.225:22-139.178.89.65:45368.service: Deactivated successfully. Aug 13 01:24:57.913375 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:24:57.914511 systemd-logind[1590]: Session 15 logged out. Waiting for processes to exit. 
Aug 13 01:24:57.916065 systemd-logind[1590]: Removed session 15. Aug 13 01:25:02.916352 systemd[1]: Started sshd@14-147.75.71.225:22-139.178.89.65:33618.service. Aug 13 01:25:02.943755 sshd[4288]: Accepted publickey for core from 139.178.89.65 port 33618 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:02.944732 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:02.948214 systemd-logind[1590]: New session 16 of user core. Aug 13 01:25:02.949021 systemd[1]: Started session-16.scope. Aug 13 01:25:03.038736 sshd[4288]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:03.040121 systemd[1]: sshd@14-147.75.71.225:22-139.178.89.65:33618.service: Deactivated successfully. Aug 13 01:25:03.040550 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:25:03.040832 systemd-logind[1590]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:25:03.041178 systemd-logind[1590]: Removed session 16. Aug 13 01:25:08.050523 systemd[1]: Started sshd@15-147.75.71.225:22-139.178.89.65:33624.service. Aug 13 01:25:08.107381 sshd[4316]: Accepted publickey for core from 139.178.89.65 port 33624 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:08.108109 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:08.110605 systemd-logind[1590]: New session 17 of user core. Aug 13 01:25:08.111097 systemd[1]: Started session-17.scope. Aug 13 01:25:08.234296 sshd[4316]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:08.236298 systemd[1]: sshd@15-147.75.71.225:22-139.178.89.65:33624.service: Deactivated successfully. Aug 13 01:25:08.236644 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:25:08.237011 systemd-logind[1590]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:25:08.237627 systemd[1]: Started sshd@16-147.75.71.225:22-139.178.89.65:33630.service. 
Aug 13 01:25:08.238092 systemd-logind[1590]: Removed session 17. Aug 13 01:25:08.264969 sshd[4341]: Accepted publickey for core from 139.178.89.65 port 33630 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:08.265772 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:08.268521 systemd-logind[1590]: New session 18 of user core. Aug 13 01:25:08.269161 systemd[1]: Started session-18.scope. Aug 13 01:25:08.373628 sshd[4341]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:08.375647 systemd[1]: sshd@16-147.75.71.225:22-139.178.89.65:33630.service: Deactivated successfully. Aug 13 01:25:08.376054 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 01:25:08.376495 systemd-logind[1590]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:25:08.377136 systemd[1]: Started sshd@17-147.75.71.225:22-139.178.89.65:33644.service. Aug 13 01:25:08.377675 systemd-logind[1590]: Removed session 18. Aug 13 01:25:08.406694 sshd[4361]: Accepted publickey for core from 139.178.89.65 port 33644 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:08.407676 sshd[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:08.411128 systemd-logind[1590]: New session 19 of user core. Aug 13 01:25:08.411883 systemd[1]: Started session-19.scope. Aug 13 01:25:09.097002 sshd[4361]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:09.103254 systemd[1]: sshd@17-147.75.71.225:22-139.178.89.65:33644.service: Deactivated successfully. Aug 13 01:25:09.104963 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:25:09.106520 systemd-logind[1590]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:25:09.108766 systemd[1]: Started sshd@18-147.75.71.225:22-139.178.89.65:49704.service. Aug 13 01:25:09.110926 systemd-logind[1590]: Removed session 19. 
Aug 13 01:25:09.147890 sshd[4390]: Accepted publickey for core from 139.178.89.65 port 49704 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:09.149155 sshd[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:09.153272 systemd-logind[1590]: New session 20 of user core. Aug 13 01:25:09.154208 systemd[1]: Started session-20.scope. Aug 13 01:25:09.372342 sshd[4390]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:09.380977 systemd[1]: sshd@18-147.75.71.225:22-139.178.89.65:49704.service: Deactivated successfully. Aug 13 01:25:09.382863 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 01:25:09.384532 systemd-logind[1590]: Session 20 logged out. Waiting for processes to exit. Aug 13 01:25:09.387427 systemd[1]: Started sshd@19-147.75.71.225:22-139.178.89.65:49718.service. Aug 13 01:25:09.390214 systemd-logind[1590]: Removed session 20. Aug 13 01:25:09.443746 sshd[4415]: Accepted publickey for core from 139.178.89.65 port 49718 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:09.444992 sshd[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:09.449126 systemd-logind[1590]: New session 21 of user core. Aug 13 01:25:09.450055 systemd[1]: Started session-21.scope. Aug 13 01:25:09.580597 sshd[4415]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:09.582016 systemd[1]: sshd@19-147.75.71.225:22-139.178.89.65:49718.service: Deactivated successfully. Aug 13 01:25:09.582442 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 01:25:09.582802 systemd-logind[1590]: Session 21 logged out. Waiting for processes to exit. Aug 13 01:25:09.583191 systemd-logind[1590]: Removed session 21. Aug 13 01:25:14.590834 systemd[1]: Started sshd@20-147.75.71.225:22-139.178.89.65:49734.service. 
Aug 13 01:25:14.618356 sshd[4442]: Accepted publickey for core from 139.178.89.65 port 49734 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:14.619324 sshd[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:14.622845 systemd-logind[1590]: New session 22 of user core. Aug 13 01:25:14.623640 systemd[1]: Started session-22.scope. Aug 13 01:25:14.712304 sshd[4442]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:14.713828 systemd[1]: sshd@20-147.75.71.225:22-139.178.89.65:49734.service: Deactivated successfully. Aug 13 01:25:14.714249 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 01:25:14.714618 systemd-logind[1590]: Session 22 logged out. Waiting for processes to exit. Aug 13 01:25:14.715095 systemd-logind[1590]: Removed session 22. Aug 13 01:25:19.722114 systemd[1]: Started sshd@21-147.75.71.225:22-139.178.89.65:41520.service. Aug 13 01:25:19.749685 sshd[4464]: Accepted publickey for core from 139.178.89.65 port 41520 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:19.750545 sshd[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:19.753565 systemd-logind[1590]: New session 23 of user core. Aug 13 01:25:19.754194 systemd[1]: Started session-23.scope. Aug 13 01:25:19.840986 sshd[4464]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:19.842641 systemd[1]: sshd@21-147.75.71.225:22-139.178.89.65:41520.service: Deactivated successfully. Aug 13 01:25:19.843094 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 01:25:19.843530 systemd-logind[1590]: Session 23 logged out. Waiting for processes to exit. Aug 13 01:25:19.844131 systemd-logind[1590]: Removed session 23. 
Aug 13 01:25:23.878524 update_engine[1551]: I0813 01:25:23.878412 1551 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 01:25:23.878524 update_engine[1551]: I0813 01:25:23.878477 1551 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 01:25:23.880508 update_engine[1551]: I0813 01:25:23.880415 1551 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 01:25:23.881521 update_engine[1551]: I0813 01:25:23.881431 1551 omaha_request_params.cc:62] Current group set to lts Aug 13 01:25:23.881763 update_engine[1551]: I0813 01:25:23.881734 1551 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 01:25:23.881763 update_engine[1551]: I0813 01:25:23.881757 1551 update_attempter.cc:643] Scheduling an action processor start. Aug 13 01:25:23.881977 update_engine[1551]: I0813 01:25:23.881790 1551 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 01:25:23.881977 update_engine[1551]: I0813 01:25:23.881857 1551 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 01:25:23.882170 update_engine[1551]: I0813 01:25:23.882000 1551 omaha_request_action.cc:270] Posting an Omaha request to disabled Aug 13 01:25:23.882170 update_engine[1551]: I0813 01:25:23.882019 1551 omaha_request_action.cc:271] Request: Aug 13 01:25:23.882170 update_engine[1551]: Aug 13 01:25:23.882170 update_engine[1551]: Aug 13 01:25:23.882170 update_engine[1551]: Aug 13 01:25:23.882170 update_engine[1551]: Aug 13 01:25:23.882170 update_engine[1551]: Aug 13 01:25:23.882170 update_engine[1551]: Aug 13 01:25:23.882170 update_engine[1551]: Aug 13 01:25:23.882170 update_engine[1551]: Aug 13 01:25:23.882170 update_engine[1551]: I0813 01:25:23.882030 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:25:23.883315 locksmithd[1589]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" 
NewVersion=0.0.0 NewSize=0 Aug 13 01:25:23.885205 update_engine[1551]: I0813 01:25:23.885128 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:25:23.885453 update_engine[1551]: E0813 01:25:23.885391 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:25:23.885583 update_engine[1551]: I0813 01:25:23.885560 1551 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 01:25:24.850388 systemd[1]: Started sshd@22-147.75.71.225:22-139.178.89.65:41530.service. Aug 13 01:25:24.877717 sshd[4489]: Accepted publickey for core from 139.178.89.65 port 41530 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:24.878627 sshd[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:24.882022 systemd-logind[1590]: New session 24 of user core. Aug 13 01:25:24.882726 systemd[1]: Started session-24.scope. Aug 13 01:25:24.967233 sshd[4489]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:24.968988 systemd[1]: sshd@22-147.75.71.225:22-139.178.89.65:41530.service: Deactivated successfully. Aug 13 01:25:24.969361 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 01:25:24.969685 systemd-logind[1590]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:25:24.970261 systemd[1]: Started sshd@23-147.75.71.225:22-139.178.89.65:41532.service. Aug 13 01:25:24.970677 systemd-logind[1590]: Removed session 24. Aug 13 01:25:24.996866 sshd[4511]: Accepted publickey for core from 139.178.89.65 port 41532 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:24.997551 sshd[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:25.000161 systemd-logind[1590]: New session 25 of user core. Aug 13 01:25:25.000681 systemd[1]: Started session-25.scope. 
Aug 13 01:25:26.410191 env[1557]: time="2025-08-13T01:25:26.410113759Z" level=info msg="StopContainer for \"963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783\" with timeout 30 (s)" Aug 13 01:25:26.410788 env[1557]: time="2025-08-13T01:25:26.410514444Z" level=info msg="Stop container \"963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783\" with signal terminated" Aug 13 01:25:26.421196 systemd[1]: cri-containerd-963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783.scope: Deactivated successfully. Aug 13 01:25:26.430248 env[1557]: time="2025-08-13T01:25:26.430191440Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:25:26.431266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783-rootfs.mount: Deactivated successfully. 
Aug 13 01:25:26.433523 env[1557]: time="2025-08-13T01:25:26.433502840Z" level=info msg="StopContainer for \"96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788\" with timeout 2 (s)" Aug 13 01:25:26.433645 env[1557]: time="2025-08-13T01:25:26.433629467Z" level=info msg="Stop container \"96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788\" with signal terminated" Aug 13 01:25:26.437214 systemd-networkd[1313]: lxc_health: Link DOWN Aug 13 01:25:26.437217 systemd-networkd[1313]: lxc_health: Lost carrier Aug 13 01:25:26.448705 env[1557]: time="2025-08-13T01:25:26.448677571Z" level=info msg="shim disconnected" id=963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783 Aug 13 01:25:26.448759 env[1557]: time="2025-08-13T01:25:26.448706133Z" level=warning msg="cleaning up after shim disconnected" id=963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783 namespace=k8s.io Aug 13 01:25:26.448759 env[1557]: time="2025-08-13T01:25:26.448712998Z" level=info msg="cleaning up dead shim" Aug 13 01:25:26.452552 env[1557]: time="2025-08-13T01:25:26.452533404Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:25:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4578 runtime=io.containerd.runc.v2\n" Aug 13 01:25:26.453227 env[1557]: time="2025-08-13T01:25:26.453213463Z" level=info msg="StopContainer for \"963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783\" returns successfully" Aug 13 01:25:26.453583 env[1557]: time="2025-08-13T01:25:26.453570294Z" level=info msg="StopPodSandbox for \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\"" Aug 13 01:25:26.453617 env[1557]: time="2025-08-13T01:25:26.453607874Z" level=info msg="Container to stop \"963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:25:26.455029 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb-shm.mount: Deactivated successfully. Aug 13 01:25:26.457166 systemd[1]: cri-containerd-4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb.scope: Deactivated successfully. Aug 13 01:25:26.468110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb-rootfs.mount: Deactivated successfully. Aug 13 01:25:26.468724 env[1557]: time="2025-08-13T01:25:26.468657066Z" level=info msg="shim disconnected" id=4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb Aug 13 01:25:26.468724 env[1557]: time="2025-08-13T01:25:26.468690536Z" level=warning msg="cleaning up after shim disconnected" id=4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb namespace=k8s.io Aug 13 01:25:26.468724 env[1557]: time="2025-08-13T01:25:26.468701024Z" level=info msg="cleaning up dead shim" Aug 13 01:25:26.472671 env[1557]: time="2025-08-13T01:25:26.472646749Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:25:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4609 runtime=io.containerd.runc.v2\n" Aug 13 01:25:26.472844 env[1557]: time="2025-08-13T01:25:26.472829519Z" level=info msg="TearDown network for sandbox \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\" successfully" Aug 13 01:25:26.472874 env[1557]: time="2025-08-13T01:25:26.472844407Z" level=info msg="StopPodSandbox for \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\" returns successfully" Aug 13 01:25:26.501676 systemd[1]: cri-containerd-96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788.scope: Deactivated successfully. Aug 13 01:25:26.501868 systemd[1]: cri-containerd-96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788.scope: Consumed 6.633s CPU time. 
Aug 13 01:25:26.514888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788-rootfs.mount: Deactivated successfully. Aug 13 01:25:26.515431 env[1557]: time="2025-08-13T01:25:26.515382913Z" level=info msg="shim disconnected" id=96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788 Aug 13 01:25:26.515548 env[1557]: time="2025-08-13T01:25:26.515432865Z" level=warning msg="cleaning up after shim disconnected" id=96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788 namespace=k8s.io Aug 13 01:25:26.515548 env[1557]: time="2025-08-13T01:25:26.515451397Z" level=info msg="cleaning up dead shim" Aug 13 01:25:26.522691 env[1557]: time="2025-08-13T01:25:26.522655416Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:25:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4636 runtime=io.containerd.runc.v2\n" Aug 13 01:25:26.523781 env[1557]: time="2025-08-13T01:25:26.523721143Z" level=info msg="StopContainer for \"96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788\" returns successfully" Aug 13 01:25:26.524264 env[1557]: time="2025-08-13T01:25:26.524201622Z" level=info msg="StopPodSandbox for \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\"" Aug 13 01:25:26.524340 env[1557]: time="2025-08-13T01:25:26.524285179Z" level=info msg="Container to stop \"be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:25:26.524340 env[1557]: time="2025-08-13T01:25:26.524309159Z" level=info msg="Container to stop \"96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:25:26.524340 env[1557]: time="2025-08-13T01:25:26.524325034Z" level=info msg="Container to stop \"790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:25:26.524494 env[1557]: time="2025-08-13T01:25:26.524339365Z" level=info msg="Container to stop \"7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:25:26.524494 env[1557]: time="2025-08-13T01:25:26.524355257Z" level=info msg="Container to stop \"ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:25:26.532904 systemd[1]: cri-containerd-25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04.scope: Deactivated successfully. Aug 13 01:25:26.564027 kubelet[2463]: I0813 01:25:26.563968 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d-cilium-config-path\") pod \"fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d\" (UID: \"fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d\") " Aug 13 01:25:26.564670 kubelet[2463]: I0813 01:25:26.564040 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wc2m\" (UniqueName: \"kubernetes.io/projected/fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d-kube-api-access-6wc2m\") pod \"fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d\" (UID: \"fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d\") " Aug 13 01:25:26.567627 kubelet[2463]: I0813 01:25:26.567547 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d" (UID: "fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:25:26.568709 kubelet[2463]: I0813 01:25:26.568620 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d-kube-api-access-6wc2m" (OuterVolumeSpecName: "kube-api-access-6wc2m") pod "fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d" (UID: "fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d"). InnerVolumeSpecName "kube-api-access-6wc2m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:25:26.577090 env[1557]: time="2025-08-13T01:25:26.576996041Z" level=info msg="shim disconnected" id=25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04 Aug 13 01:25:26.577311 env[1557]: time="2025-08-13T01:25:26.577094862Z" level=warning msg="cleaning up after shim disconnected" id=25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04 namespace=k8s.io Aug 13 01:25:26.577311 env[1557]: time="2025-08-13T01:25:26.577127117Z" level=info msg="cleaning up dead shim" Aug 13 01:25:26.588164 env[1557]: time="2025-08-13T01:25:26.588083524Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:25:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4668 runtime=io.containerd.runc.v2\n" Aug 13 01:25:26.588652 env[1557]: time="2025-08-13T01:25:26.588562378Z" level=info msg="TearDown network for sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" successfully" Aug 13 01:25:26.588652 env[1557]: time="2025-08-13T01:25:26.588602320Z" level=info msg="StopPodSandbox for \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" returns successfully" Aug 13 01:25:26.665349 kubelet[2463]: I0813 01:25:26.665111 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fprr9\" (UniqueName: \"kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-kube-api-access-fprr9\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: 
\"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.665349 kubelet[2463]: I0813 01:25:26.665260 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-etc-cni-netd\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.665896 kubelet[2463]: I0813 01:25:26.665330 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:26.665896 kubelet[2463]: I0813 01:25:26.665387 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-config-path\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.665896 kubelet[2463]: I0813 01:25:26.665474 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-lib-modules\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.665896 kubelet[2463]: I0813 01:25:26.665560 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-host-proc-sys-kernel\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.665896 kubelet[2463]: I0813 01:25:26.665536 2463 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:26.666748 kubelet[2463]: I0813 01:25:26.665634 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-bpf-maps\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.666748 kubelet[2463]: I0813 01:25:26.665664 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:26.666748 kubelet[2463]: I0813 01:25:26.665710 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cni-path\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.666748 kubelet[2463]: I0813 01:25:26.665749 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:26.666748 kubelet[2463]: I0813 01:25:26.665795 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-run\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.667333 kubelet[2463]: I0813 01:25:26.665863 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:26.667333 kubelet[2463]: I0813 01:25:26.665881 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-host-proc-sys-net\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.667333 kubelet[2463]: I0813 01:25:26.665872 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cni-path" (OuterVolumeSpecName: "cni-path") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:26.667333 kubelet[2463]: I0813 01:25:26.665960 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:26.667333 kubelet[2463]: I0813 01:25:26.666037 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc5d714b-fc13-404d-ac63-be597cf9ff4d-clustermesh-secrets\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.668050 kubelet[2463]: I0813 01:25:26.666135 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-cgroup\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.668050 kubelet[2463]: I0813 01:25:26.666245 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-hostproc\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.668050 kubelet[2463]: I0813 01:25:26.666223 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:26.668050 kubelet[2463]: I0813 01:25:26.666315 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-hostproc" (OuterVolumeSpecName: "hostproc") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:26.668050 kubelet[2463]: I0813 01:25:26.666349 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-xtables-lock\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.668050 kubelet[2463]: I0813 01:25:26.666447 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-hubble-tls\") pod \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\" (UID: \"fc5d714b-fc13-404d-ac63-be597cf9ff4d\") " Aug 13 01:25:26.668745 kubelet[2463]: I0813 01:25:26.666441 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:26.668745 kubelet[2463]: I0813 01:25:26.666610 2463 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6wc2m\" (UniqueName: \"kubernetes.io/projected/fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d-kube-api-access-6wc2m\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.668745 kubelet[2463]: I0813 01:25:26.666671 2463 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-lib-modules\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.668745 kubelet[2463]: I0813 01:25:26.666727 2463 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-host-proc-sys-kernel\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.668745 kubelet[2463]: I0813 01:25:26.666779 2463 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-bpf-maps\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.668745 kubelet[2463]: I0813 01:25:26.666834 2463 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cni-path\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.668745 kubelet[2463]: I0813 01:25:26.666883 2463 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-run\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.669528 kubelet[2463]: I0813 01:25:26.666927 2463 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-host-proc-sys-net\") 
on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.669528 kubelet[2463]: I0813 01:25:26.666954 2463 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-cgroup\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.669528 kubelet[2463]: I0813 01:25:26.666981 2463 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-hostproc\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.669528 kubelet[2463]: I0813 01:25:26.667008 2463 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d-cilium-config-path\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.669528 kubelet[2463]: I0813 01:25:26.667034 2463 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-etc-cni-netd\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.671702 kubelet[2463]: I0813 01:25:26.671633 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:25:26.673303 kubelet[2463]: I0813 01:25:26.673217 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5d714b-fc13-404d-ac63-be597cf9ff4d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:25:26.673599 kubelet[2463]: I0813 01:25:26.673546 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:25:26.673740 kubelet[2463]: I0813 01:25:26.673624 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-kube-api-access-fprr9" (OuterVolumeSpecName: "kube-api-access-fprr9") pod "fc5d714b-fc13-404d-ac63-be597cf9ff4d" (UID: "fc5d714b-fc13-404d-ac63-be597cf9ff4d"). InnerVolumeSpecName "kube-api-access-fprr9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:25:26.767510 kubelet[2463]: I0813 01:25:26.767413 2463 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc5d714b-fc13-404d-ac63-be597cf9ff4d-clustermesh-secrets\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.767510 kubelet[2463]: I0813 01:25:26.767487 2463 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc5d714b-fc13-404d-ac63-be597cf9ff4d-xtables-lock\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.767510 kubelet[2463]: I0813 01:25:26.767521 2463 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-hubble-tls\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.768046 kubelet[2463]: I0813 01:25:26.767555 2463 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fprr9\" (UniqueName: 
\"kubernetes.io/projected/fc5d714b-fc13-404d-ac63-be597cf9ff4d-kube-api-access-fprr9\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:26.768046 kubelet[2463]: I0813 01:25:26.767585 2463 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc5d714b-fc13-404d-ac63-be597cf9ff4d-cilium-config-path\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:27.040660 kubelet[2463]: I0813 01:25:27.040614 2463 scope.go:117] "RemoveContainer" containerID="be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4" Aug 13 01:25:27.041124 systemd[1]: Removed slice kubepods-burstable-podfc5d714b_fc13_404d_ac63_be597cf9ff4d.slice. Aug 13 01:25:27.041209 env[1557]: time="2025-08-13T01:25:27.041111221Z" level=info msg="RemoveContainer for \"be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4\"" Aug 13 01:25:27.041182 systemd[1]: kubepods-burstable-podfc5d714b_fc13_404d_ac63_be597cf9ff4d.slice: Consumed 6.718s CPU time. Aug 13 01:25:27.041640 systemd[1]: Removed slice kubepods-besteffort-podfa01eaaa_c94a_49ba_96d5_8b03bc62ac1d.slice. 
Aug 13 01:25:27.051186 env[1557]: time="2025-08-13T01:25:27.051168517Z" level=info msg="RemoveContainer for \"be4672afc4721a3297e2c3f557b152627092d44e8ae9d9accb86b508759b80d4\" returns successfully" Aug 13 01:25:27.051349 kubelet[2463]: I0813 01:25:27.051337 2463 scope.go:117] "RemoveContainer" containerID="96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788" Aug 13 01:25:27.051867 env[1557]: time="2025-08-13T01:25:27.051853402Z" level=info msg="RemoveContainer for \"96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788\"" Aug 13 01:25:27.053075 env[1557]: time="2025-08-13T01:25:27.053061457Z" level=info msg="RemoveContainer for \"96cc2154bfbcb337612e0a05576069293de6d5b5eac2718699ea3c15f145d788\" returns successfully" Aug 13 01:25:27.053130 kubelet[2463]: I0813 01:25:27.053120 2463 scope.go:117] "RemoveContainer" containerID="790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036" Aug 13 01:25:27.053622 env[1557]: time="2025-08-13T01:25:27.053607681Z" level=info msg="RemoveContainer for \"790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036\"" Aug 13 01:25:27.054859 env[1557]: time="2025-08-13T01:25:27.054845829Z" level=info msg="RemoveContainer for \"790801446871bfeaabad2b56b5d71c4e59ef6d8f8661c913ed8470881898c036\" returns successfully" Aug 13 01:25:27.054925 kubelet[2463]: I0813 01:25:27.054911 2463 scope.go:117] "RemoveContainer" containerID="7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0" Aug 13 01:25:27.055481 env[1557]: time="2025-08-13T01:25:27.055464875Z" level=info msg="RemoveContainer for \"7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0\"" Aug 13 01:25:27.056798 env[1557]: time="2025-08-13T01:25:27.056778333Z" level=info msg="RemoveContainer for \"7a465870c46671fc4917b651908a621dd736aa00427a64ce12b7c80a14456dc0\" returns successfully" Aug 13 01:25:27.056860 kubelet[2463]: I0813 01:25:27.056850 2463 scope.go:117] "RemoveContainer" 
containerID="ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4" Aug 13 01:25:27.057358 env[1557]: time="2025-08-13T01:25:27.057343489Z" level=info msg="RemoveContainer for \"ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4\"" Aug 13 01:25:27.058624 env[1557]: time="2025-08-13T01:25:27.058607679Z" level=info msg="RemoveContainer for \"ad0c95df718c8f6bf77c868d1dd6522f9f61e718e8874c0207bf861021a08fc4\" returns successfully" Aug 13 01:25:27.058721 kubelet[2463]: I0813 01:25:27.058684 2463 scope.go:117] "RemoveContainer" containerID="963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783" Aug 13 01:25:27.059270 env[1557]: time="2025-08-13T01:25:27.059238770Z" level=info msg="RemoveContainer for \"963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783\"" Aug 13 01:25:27.060533 env[1557]: time="2025-08-13T01:25:27.060518138Z" level=info msg="RemoveContainer for \"963a29def491d09a3532dea307c923086bddaaab91b0ef1862ef5c663a101783\" returns successfully" Aug 13 01:25:27.061117 env[1557]: time="2025-08-13T01:25:27.061095664Z" level=info msg="StopPodSandbox for \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\"" Aug 13 01:25:27.061174 env[1557]: time="2025-08-13T01:25:27.061149910Z" level=info msg="TearDown network for sandbox \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\" successfully" Aug 13 01:25:27.061215 env[1557]: time="2025-08-13T01:25:27.061174775Z" level=info msg="StopPodSandbox for \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\" returns successfully" Aug 13 01:25:27.061399 env[1557]: time="2025-08-13T01:25:27.061383471Z" level=info msg="RemovePodSandbox for \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\"" Aug 13 01:25:27.061440 env[1557]: time="2025-08-13T01:25:27.061404499Z" level=info msg="Forcibly stopping sandbox \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\"" Aug 13 01:25:27.061466 env[1557]: 
time="2025-08-13T01:25:27.061450670Z" level=info msg="TearDown network for sandbox \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\" successfully" Aug 13 01:25:27.062729 env[1557]: time="2025-08-13T01:25:27.062711559Z" level=info msg="RemovePodSandbox \"4b61ce4f1662d72ed9a1723b0bc8aa4425d98f1dd03d35825b256b7766691afb\" returns successfully" Aug 13 01:25:27.062907 env[1557]: time="2025-08-13T01:25:27.062891590Z" level=info msg="StopPodSandbox for \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\"" Aug 13 01:25:27.062963 env[1557]: time="2025-08-13T01:25:27.062936561Z" level=info msg="TearDown network for sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" successfully" Aug 13 01:25:27.062963 env[1557]: time="2025-08-13T01:25:27.062958990Z" level=info msg="StopPodSandbox for \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" returns successfully" Aug 13 01:25:27.063131 env[1557]: time="2025-08-13T01:25:27.063115863Z" level=info msg="RemovePodSandbox for \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\"" Aug 13 01:25:27.063166 env[1557]: time="2025-08-13T01:25:27.063136352Z" level=info msg="Forcibly stopping sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\"" Aug 13 01:25:27.063210 env[1557]: time="2025-08-13T01:25:27.063198966Z" level=info msg="TearDown network for sandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" successfully" Aug 13 01:25:27.065517 env[1557]: time="2025-08-13T01:25:27.065468596Z" level=info msg="RemovePodSandbox \"25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04\" returns successfully" Aug 13 01:25:27.145695 kubelet[2463]: E0813 01:25:27.145577 2463 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 01:25:27.425166 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04-rootfs.mount: Deactivated successfully. Aug 13 01:25:27.425496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25865f5232d9588fd30958ff1ae087484249ff7c722bbdc0ea716e2205376f04-shm.mount: Deactivated successfully. Aug 13 01:25:27.425796 systemd[1]: var-lib-kubelet-pods-fc5d714b\x2dfc13\x2d404d\x2dac63\x2dbe597cf9ff4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfprr9.mount: Deactivated successfully. Aug 13 01:25:27.426013 systemd[1]: var-lib-kubelet-pods-fa01eaaa\x2dc94a\x2d49ba\x2d96d5\x2d8b03bc62ac1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6wc2m.mount: Deactivated successfully. Aug 13 01:25:27.426196 systemd[1]: var-lib-kubelet-pods-fc5d714b\x2dfc13\x2d404d\x2dac63\x2dbe597cf9ff4d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:25:27.426418 systemd[1]: var-lib-kubelet-pods-fc5d714b\x2dfc13\x2d404d\x2dac63\x2dbe597cf9ff4d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:25:28.336587 sshd[4511]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:28.344409 systemd[1]: sshd@23-147.75.71.225:22-139.178.89.65:41532.service: Deactivated successfully. Aug 13 01:25:28.346129 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:25:28.347881 systemd-logind[1590]: Session 25 logged out. Waiting for processes to exit. Aug 13 01:25:28.350934 systemd[1]: Started sshd@24-147.75.71.225:22-139.178.89.65:41548.service. Aug 13 01:25:28.353507 systemd-logind[1590]: Removed session 25. 
Aug 13 01:25:28.381538 sshd[4688]: Accepted publickey for core from 139.178.89.65 port 41548 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:28.382329 sshd[4688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:28.385004 systemd-logind[1590]: New session 26 of user core. Aug 13 01:25:28.385590 systemd[1]: Started session-26.scope. Aug 13 01:25:28.934695 sshd[4688]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:28.936802 systemd[1]: sshd@24-147.75.71.225:22-139.178.89.65:41548.service: Deactivated successfully. Aug 13 01:25:28.937207 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 01:25:28.937544 systemd-logind[1590]: Session 26 logged out. Waiting for processes to exit. Aug 13 01:25:28.938248 systemd[1]: Started sshd@25-147.75.71.225:22-139.178.89.65:46964.service. Aug 13 01:25:28.938689 systemd-logind[1590]: Removed session 26. Aug 13 01:25:28.947953 systemd[1]: Created slice kubepods-burstable-pod00fd9eeb_3d1d_4cbd_9aef_9f1cb719c830.slice. Aug 13 01:25:28.966549 sshd[4712]: Accepted publickey for core from 139.178.89.65 port 46964 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:28.967362 sshd[4712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:28.969897 systemd-logind[1590]: New session 27 of user core. Aug 13 01:25:28.970415 systemd[1]: Started session-27.scope. 
Aug 13 01:25:28.983310 kubelet[2463]: I0813 01:25:28.983288 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-config-path\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983525 kubelet[2463]: I0813 01:25:28.983323 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-etc-cni-netd\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983525 kubelet[2463]: I0813 01:25:28.983337 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-lib-modules\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983525 kubelet[2463]: I0813 01:25:28.983347 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-bpf-maps\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983525 kubelet[2463]: I0813 01:25:28.983357 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-hostproc\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983525 kubelet[2463]: I0813 01:25:28.983392 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-cgroup\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983525 kubelet[2463]: I0813 01:25:28.983422 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-xtables-lock\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983710 kubelet[2463]: I0813 01:25:28.983445 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cni-path\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983710 kubelet[2463]: I0813 01:25:28.983475 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-host-proc-sys-net\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983710 kubelet[2463]: I0813 01:25:28.983497 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r99cn\" (UniqueName: \"kubernetes.io/projected/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-kube-api-access-r99cn\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983710 kubelet[2463]: I0813 01:25:28.983515 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-host-proc-sys-kernel\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983710 kubelet[2463]: I0813 01:25:28.983532 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-hubble-tls\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983710 kubelet[2463]: I0813 01:25:28.983551 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-run\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983832 kubelet[2463]: I0813 01:25:28.983568 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-clustermesh-secrets\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:28.983832 kubelet[2463]: I0813 01:25:28.983584 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-ipsec-secrets\") pod \"cilium-427z5\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " pod="kube-system/cilium-427z5" Aug 13 01:25:29.041441 kubelet[2463]: I0813 01:25:29.041366 2463 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d" path="/var/lib/kubelet/pods/fa01eaaa-c94a-49ba-96d5-8b03bc62ac1d/volumes" Aug 13 01:25:29.042028 kubelet[2463]: I0813 01:25:29.041973 
2463 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5d714b-fc13-404d-ac63-be597cf9ff4d" path="/var/lib/kubelet/pods/fc5d714b-fc13-404d-ac63-be597cf9ff4d/volumes" Aug 13 01:25:29.088719 sshd[4712]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:29.093687 systemd[1]: sshd@25-147.75.71.225:22-139.178.89.65:46964.service: Deactivated successfully. Aug 13 01:25:29.094120 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 01:25:29.094621 systemd-logind[1590]: Session 27 logged out. Waiting for processes to exit. Aug 13 01:25:29.095471 systemd[1]: Started sshd@26-147.75.71.225:22-139.178.89.65:46978.service. Aug 13 01:25:29.096113 systemd-logind[1590]: Removed session 27. Aug 13 01:25:29.096695 env[1557]: time="2025-08-13T01:25:29.096668255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-427z5,Uid:00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830,Namespace:kube-system,Attempt:0,}" Aug 13 01:25:29.102541 env[1557]: time="2025-08-13T01:25:29.102495996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:25:29.102541 env[1557]: time="2025-08-13T01:25:29.102520223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:25:29.102541 env[1557]: time="2025-08-13T01:25:29.102529013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:25:29.102667 env[1557]: time="2025-08-13T01:25:29.102600747Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9 pid=4747 runtime=io.containerd.runc.v2 Aug 13 01:25:29.110205 systemd[1]: Started cri-containerd-77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9.scope. 
Aug 13 01:25:29.120899 env[1557]: time="2025-08-13T01:25:29.120870923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-427z5,Uid:00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830,Namespace:kube-system,Attempt:0,} returns sandbox id \"77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9\"" Aug 13 01:25:29.123324 env[1557]: time="2025-08-13T01:25:29.123305764Z" level=info msg="CreateContainer within sandbox \"77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:25:29.123383 sshd[4740]: Accepted publickey for core from 139.178.89.65 port 46978 ssh2: RSA SHA256:Vs+DPHRqG21SDQblLM9Mmb7P94OFeuiMsrRDGCeigeE Aug 13 01:25:29.124253 sshd[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:29.126550 systemd-logind[1590]: New session 28 of user core. Aug 13 01:25:29.127152 systemd[1]: Started session-28.scope. Aug 13 01:25:29.127608 env[1557]: time="2025-08-13T01:25:29.127588902Z" level=info msg="CreateContainer within sandbox \"77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab\"" Aug 13 01:25:29.127832 env[1557]: time="2025-08-13T01:25:29.127817979Z" level=info msg="StartContainer for \"9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab\"" Aug 13 01:25:29.135714 systemd[1]: Started cri-containerd-9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab.scope. Aug 13 01:25:29.142079 systemd[1]: cri-containerd-9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab.scope: Deactivated successfully. Aug 13 01:25:29.142239 systemd[1]: Stopped cri-containerd-9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab.scope. 
Aug 13 01:25:29.170030 env[1557]: time="2025-08-13T01:25:29.169993441Z" level=info msg="shim disconnected" id=9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab Aug 13 01:25:29.170030 env[1557]: time="2025-08-13T01:25:29.170030172Z" level=warning msg="cleaning up after shim disconnected" id=9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab namespace=k8s.io Aug 13 01:25:29.170188 env[1557]: time="2025-08-13T01:25:29.170038790Z" level=info msg="cleaning up dead shim" Aug 13 01:25:29.175511 env[1557]: time="2025-08-13T01:25:29.175472145Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:25:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4806 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T01:25:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Aug 13 01:25:29.175761 env[1557]: time="2025-08-13T01:25:29.175680010Z" level=error msg="copy shim log" error="read /proc/self/fd/26: file already closed" Aug 13 01:25:29.175895 env[1557]: time="2025-08-13T01:25:29.175854893Z" level=error msg="Failed to pipe stdout of container \"9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab\"" error="reading from a closed fifo" Aug 13 01:25:29.175943 env[1557]: time="2025-08-13T01:25:29.175900314Z" level=error msg="Failed to pipe stderr of container \"9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab\"" error="reading from a closed fifo" Aug 13 01:25:29.176497 env[1557]: time="2025-08-13T01:25:29.176461166Z" level=error msg="StartContainer for \"9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Aug 13 01:25:29.176708 kubelet[2463]: E0813 01:25:29.176673 2463 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab" Aug 13 01:25:29.176835 kubelet[2463]: E0813 01:25:29.176817 2463 kuberuntime_manager.go:1358] "Unhandled Error" err=< Aug 13 01:25:29.176835 kubelet[2463]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Aug 13 01:25:29.176835 kubelet[2463]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Aug 13 01:25:29.176835 kubelet[2463]: rm /hostbin/cilium-mount Aug 13 01:25:29.176994 kubelet[2463]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r99cn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-427z5_kube-system(00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Aug 13 01:25:29.176994 kubelet[2463]: > logger="UnhandledError" Aug 13 01:25:29.178028 kubelet[2463]: E0813 01:25:29.177996 2463 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-427z5" podUID="00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" Aug 13 01:25:29.265191 env[1557]: time="2025-08-13T01:25:29.264957055Z" level=info msg="StopPodSandbox for \"77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9\"" Aug 13 01:25:29.265191 env[1557]: time="2025-08-13T01:25:29.265125334Z" level=info msg="Container to stop \"9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:25:29.282834 systemd[1]: cri-containerd-77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9.scope: Deactivated successfully. Aug 13 01:25:29.338351 env[1557]: time="2025-08-13T01:25:29.338194733Z" level=info msg="shim disconnected" id=77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9 Aug 13 01:25:29.338686 env[1557]: time="2025-08-13T01:25:29.338349095Z" level=warning msg="cleaning up after shim disconnected" id=77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9 namespace=k8s.io Aug 13 01:25:29.338686 env[1557]: time="2025-08-13T01:25:29.338393187Z" level=info msg="cleaning up dead shim" Aug 13 01:25:29.354601 env[1557]: time="2025-08-13T01:25:29.354479945Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:25:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4854 runtime=io.containerd.runc.v2\n" Aug 13 01:25:29.355279 env[1557]: time="2025-08-13T01:25:29.355159700Z" level=info msg="TearDown network for sandbox \"77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9\" successfully" Aug 13 01:25:29.355279 env[1557]: 
time="2025-08-13T01:25:29.355217970Z" level=info msg="StopPodSandbox for \"77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9\" returns successfully" Aug 13 01:25:29.386271 kubelet[2463]: I0813 01:25:29.386181 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-bpf-maps\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.386271 kubelet[2463]: I0813 01:25:29.386220 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:29.386683 kubelet[2463]: I0813 01:25:29.386314 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cni-path\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.386683 kubelet[2463]: I0813 01:25:29.386376 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-xtables-lock\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.386683 kubelet[2463]: I0813 01:25:29.386429 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cni-path" (OuterVolumeSpecName: "cni-path") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:29.386683 kubelet[2463]: I0813 01:25:29.386446 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r99cn\" (UniqueName: \"kubernetes.io/projected/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-kube-api-access-r99cn\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.386683 kubelet[2463]: I0813 01:25:29.386487 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:29.386683 kubelet[2463]: I0813 01:25:29.386583 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-clustermesh-secrets\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.386683 kubelet[2463]: I0813 01:25:29.386646 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-lib-modules\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.386705 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.386774 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-hostproc\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.386848 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-hostproc" (OuterVolumeSpecName: "hostproc") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.386856 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-etc-cni-netd\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.386913 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-host-proc-sys-net\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.386967 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-run\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.386983 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.387024 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-cgroup\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.387050 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.387040 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.387108 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.387089 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-ipsec-secrets\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.387288 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-config-path\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.388012 kubelet[2463]: I0813 01:25:29.387406 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-host-proc-sys-kernel\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387507 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387515 2463 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-hubble-tls\") pod \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\" (UID: \"00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830\") " Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387695 2463 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-lib-modules\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387745 2463 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-hostproc\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387777 2463 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-etc-cni-netd\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387808 2463 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-host-proc-sys-net\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387836 2463 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-run\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387863 2463 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-cgroup\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387887 2463 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-host-proc-sys-kernel\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387914 2463 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-bpf-maps\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387940 2463 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cni-path\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.389879 kubelet[2463]: I0813 01:25:29.387965 2463 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-xtables-lock\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.393128 kubelet[2463]: I0813 01:25:29.393033 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:25:29.393528 kubelet[2463]: I0813 01:25:29.393427 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-kube-api-access-r99cn" (OuterVolumeSpecName: "kube-api-access-r99cn") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "kube-api-access-r99cn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:25:29.393752 kubelet[2463]: I0813 01:25:29.393650 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:25:29.393905 kubelet[2463]: I0813 01:25:29.393745 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:25:29.394528 kubelet[2463]: I0813 01:25:29.394430 2463 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" (UID: "00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:25:29.488704 kubelet[2463]: I0813 01:25:29.488624 2463 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-config-path\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.488704 kubelet[2463]: I0813 01:25:29.488708 2463 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-hubble-tls\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.489178 kubelet[2463]: I0813 01:25:29.488770 2463 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r99cn\" (UniqueName: \"kubernetes.io/projected/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-kube-api-access-r99cn\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.489178 kubelet[2463]: I0813 01:25:29.488821 2463 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-clustermesh-secrets\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:29.489178 kubelet[2463]: I0813 01:25:29.488870 2463 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830-cilium-ipsec-secrets\") on node \"ci-3510.3.8-a-9864ec3500\" DevicePath \"\"" Aug 13 01:25:30.087070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9-rootfs.mount: Deactivated successfully. Aug 13 01:25:30.087149 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77600aee83053fce62691d2828023408517d0c2728fa097506d055372ec74ba9-shm.mount: Deactivated successfully. 
Aug 13 01:25:30.087200 systemd[1]: var-lib-kubelet-pods-00fd9eeb\x2d3d1d\x2d4cbd\x2d9aef\x2d9f1cb719c830-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr99cn.mount: Deactivated successfully. Aug 13 01:25:30.087256 systemd[1]: var-lib-kubelet-pods-00fd9eeb\x2d3d1d\x2d4cbd\x2d9aef\x2d9f1cb719c830-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:25:30.087303 systemd[1]: var-lib-kubelet-pods-00fd9eeb\x2d3d1d\x2d4cbd\x2d9aef\x2d9f1cb719c830-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:25:30.087348 systemd[1]: var-lib-kubelet-pods-00fd9eeb\x2d3d1d\x2d4cbd\x2d9aef\x2d9f1cb719c830-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Aug 13 01:25:30.270051 kubelet[2463]: I0813 01:25:30.269980 2463 scope.go:117] "RemoveContainer" containerID="9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab" Aug 13 01:25:30.272752 env[1557]: time="2025-08-13T01:25:30.272664186Z" level=info msg="RemoveContainer for \"9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab\"" Aug 13 01:25:30.276220 env[1557]: time="2025-08-13T01:25:30.276200227Z" level=info msg="RemoveContainer for \"9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab\" returns successfully" Aug 13 01:25:30.277357 systemd[1]: Removed slice kubepods-burstable-pod00fd9eeb_3d1d_4cbd_9aef_9f1cb719c830.slice. Aug 13 01:25:30.300335 systemd[1]: Created slice kubepods-burstable-pod727573b0_db13_4c8e_a020_d31e51a98d03.slice. 
Aug 13 01:25:30.396331 kubelet[2463]: I0813 01:25:30.396180 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/727573b0-db13-4c8e-a020-d31e51a98d03-etc-cni-netd\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.396331 kubelet[2463]: I0813 01:25:30.396299 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/727573b0-db13-4c8e-a020-d31e51a98d03-host-proc-sys-kernel\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.396726 kubelet[2463]: I0813 01:25:30.396362 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/727573b0-db13-4c8e-a020-d31e51a98d03-hubble-tls\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.396726 kubelet[2463]: I0813 01:25:30.396420 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/727573b0-db13-4c8e-a020-d31e51a98d03-cilium-cgroup\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.396726 kubelet[2463]: I0813 01:25:30.396476 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/727573b0-db13-4c8e-a020-d31e51a98d03-lib-modules\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.396726 kubelet[2463]: I0813 01:25:30.396585 2463 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/727573b0-db13-4c8e-a020-d31e51a98d03-cilium-ipsec-secrets\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.396726 kubelet[2463]: I0813 01:25:30.396680 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/727573b0-db13-4c8e-a020-d31e51a98d03-bpf-maps\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.397463 kubelet[2463]: I0813 01:25:30.396760 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/727573b0-db13-4c8e-a020-d31e51a98d03-cilium-run\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.397463 kubelet[2463]: I0813 01:25:30.396831 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/727573b0-db13-4c8e-a020-d31e51a98d03-cni-path\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.397463 kubelet[2463]: I0813 01:25:30.396893 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/727573b0-db13-4c8e-a020-d31e51a98d03-xtables-lock\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.397463 kubelet[2463]: I0813 01:25:30.396968 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/727573b0-db13-4c8e-a020-d31e51a98d03-host-proc-sys-net\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.397463 kubelet[2463]: I0813 01:25:30.397096 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/727573b0-db13-4c8e-a020-d31e51a98d03-hostproc\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.397463 kubelet[2463]: I0813 01:25:30.397196 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/727573b0-db13-4c8e-a020-d31e51a98d03-clustermesh-secrets\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.397463 kubelet[2463]: I0813 01:25:30.397288 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/727573b0-db13-4c8e-a020-d31e51a98d03-cilium-config-path\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.397463 kubelet[2463]: I0813 01:25:30.397338 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7s5h\" (UniqueName: \"kubernetes.io/projected/727573b0-db13-4c8e-a020-d31e51a98d03-kube-api-access-f7s5h\") pod \"cilium-s5z6s\" (UID: \"727573b0-db13-4c8e-a020-d31e51a98d03\") " pod="kube-system/cilium-s5z6s" Aug 13 01:25:30.603930 env[1557]: time="2025-08-13T01:25:30.603837835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s5z6s,Uid:727573b0-db13-4c8e-a020-d31e51a98d03,Namespace:kube-system,Attempt:0,}" Aug 13 01:25:30.612485 env[1557]: time="2025-08-13T01:25:30.612438479Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:25:30.612485 env[1557]: time="2025-08-13T01:25:30.612478228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:25:30.612603 env[1557]: time="2025-08-13T01:25:30.612487376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:25:30.612603 env[1557]: time="2025-08-13T01:25:30.612578018Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8 pid=4880 runtime=io.containerd.runc.v2 Aug 13 01:25:30.617866 systemd[1]: Started cri-containerd-27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8.scope. Aug 13 01:25:30.630372 env[1557]: time="2025-08-13T01:25:30.630314397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s5z6s,Uid:727573b0-db13-4c8e-a020-d31e51a98d03,Namespace:kube-system,Attempt:0,} returns sandbox id \"27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8\"" Aug 13 01:25:30.632888 env[1557]: time="2025-08-13T01:25:30.632836486Z" level=info msg="CreateContainer within sandbox \"27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:25:30.637293 env[1557]: time="2025-08-13T01:25:30.637270067Z" level=info msg="CreateContainer within sandbox \"27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"20cafbf37828d8c0778a64fc9fff0f1b8e2116c9762779096c928f8f4054ca19\"" Aug 13 01:25:30.637538 env[1557]: time="2025-08-13T01:25:30.637519021Z" level=info msg="StartContainer for 
\"20cafbf37828d8c0778a64fc9fff0f1b8e2116c9762779096c928f8f4054ca19\"" Aug 13 01:25:30.648857 systemd[1]: Started cri-containerd-20cafbf37828d8c0778a64fc9fff0f1b8e2116c9762779096c928f8f4054ca19.scope. Aug 13 01:25:30.662275 env[1557]: time="2025-08-13T01:25:30.662249336Z" level=info msg="StartContainer for \"20cafbf37828d8c0778a64fc9fff0f1b8e2116c9762779096c928f8f4054ca19\" returns successfully" Aug 13 01:25:30.667347 systemd[1]: cri-containerd-20cafbf37828d8c0778a64fc9fff0f1b8e2116c9762779096c928f8f4054ca19.scope: Deactivated successfully. Aug 13 01:25:30.680477 env[1557]: time="2025-08-13T01:25:30.680447376Z" level=info msg="shim disconnected" id=20cafbf37828d8c0778a64fc9fff0f1b8e2116c9762779096c928f8f4054ca19 Aug 13 01:25:30.680477 env[1557]: time="2025-08-13T01:25:30.680476173Z" level=warning msg="cleaning up after shim disconnected" id=20cafbf37828d8c0778a64fc9fff0f1b8e2116c9762779096c928f8f4054ca19 namespace=k8s.io Aug 13 01:25:30.680615 env[1557]: time="2025-08-13T01:25:30.680484978Z" level=info msg="cleaning up dead shim" Aug 13 01:25:30.684687 env[1557]: time="2025-08-13T01:25:30.684640479Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:25:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4965 runtime=io.containerd.runc.v2\n" Aug 13 01:25:31.044325 kubelet[2463]: I0813 01:25:31.044122 2463 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830" path="/var/lib/kubelet/pods/00fd9eeb-3d1d-4cbd-9aef-9f1cb719c830/volumes" Aug 13 01:25:31.284852 env[1557]: time="2025-08-13T01:25:31.284758398Z" level=info msg="CreateContainer within sandbox \"27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:25:31.302310 env[1557]: time="2025-08-13T01:25:31.302065316Z" level=info msg="CreateContainer within sandbox \"27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4084aeaf92a22edff90647d50189ad3e883adabdb080f3a497bfa904997e813a\"" Aug 13 01:25:31.303224 env[1557]: time="2025-08-13T01:25:31.303138715Z" level=info msg="StartContainer for \"4084aeaf92a22edff90647d50189ad3e883adabdb080f3a497bfa904997e813a\"" Aug 13 01:25:31.335785 systemd[1]: Started cri-containerd-4084aeaf92a22edff90647d50189ad3e883adabdb080f3a497bfa904997e813a.scope. Aug 13 01:25:31.355419 env[1557]: time="2025-08-13T01:25:31.355380756Z" level=info msg="StartContainer for \"4084aeaf92a22edff90647d50189ad3e883adabdb080f3a497bfa904997e813a\" returns successfully" Aug 13 01:25:31.362397 systemd[1]: cri-containerd-4084aeaf92a22edff90647d50189ad3e883adabdb080f3a497bfa904997e813a.scope: Deactivated successfully. Aug 13 01:25:31.378160 env[1557]: time="2025-08-13T01:25:31.378114500Z" level=info msg="shim disconnected" id=4084aeaf92a22edff90647d50189ad3e883adabdb080f3a497bfa904997e813a Aug 13 01:25:31.378160 env[1557]: time="2025-08-13T01:25:31.378157450Z" level=warning msg="cleaning up after shim disconnected" id=4084aeaf92a22edff90647d50189ad3e883adabdb080f3a497bfa904997e813a namespace=k8s.io Aug 13 01:25:31.378372 env[1557]: time="2025-08-13T01:25:31.378169462Z" level=info msg="cleaning up dead shim" Aug 13 01:25:31.384281 env[1557]: time="2025-08-13T01:25:31.384219526Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:25:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5026 runtime=io.containerd.runc.v2\n" Aug 13 01:25:32.087451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4084aeaf92a22edff90647d50189ad3e883adabdb080f3a497bfa904997e813a-rootfs.mount: Deactivated successfully. 
Aug 13 01:25:32.147000 kubelet[2463]: E0813 01:25:32.146904 2463 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 01:25:32.276343 kubelet[2463]: W0813 01:25:32.276188 2463 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00fd9eeb_3d1d_4cbd_9aef_9f1cb719c830.slice/cri-containerd-9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab.scope WatchSource:0}: container "9990e69ad9d9c88aa2ee8ad5bb7aca98618b14054ac1bb71906b5e94345734ab" in namespace "k8s.io": not found Aug 13 01:25:32.290598 env[1557]: time="2025-08-13T01:25:32.290467762Z" level=info msg="CreateContainer within sandbox \"27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:25:32.305829 env[1557]: time="2025-08-13T01:25:32.305785370Z" level=info msg="CreateContainer within sandbox \"27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"709a50a42f10a8d0c24462ce161fe377dc032683b1a385a0e2ed6374e96c5f39\"" Aug 13 01:25:32.307698 env[1557]: time="2025-08-13T01:25:32.307665355Z" level=info msg="StartContainer for \"709a50a42f10a8d0c24462ce161fe377dc032683b1a385a0e2ed6374e96c5f39\"" Aug 13 01:25:32.317928 systemd[1]: Started cri-containerd-709a50a42f10a8d0c24462ce161fe377dc032683b1a385a0e2ed6374e96c5f39.scope. Aug 13 01:25:32.330716 env[1557]: time="2025-08-13T01:25:32.330693229Z" level=info msg="StartContainer for \"709a50a42f10a8d0c24462ce161fe377dc032683b1a385a0e2ed6374e96c5f39\" returns successfully" Aug 13 01:25:32.332558 systemd[1]: cri-containerd-709a50a42f10a8d0c24462ce161fe377dc032683b1a385a0e2ed6374e96c5f39.scope: Deactivated successfully. 
Aug 13 01:25:32.365766 env[1557]: time="2025-08-13T01:25:32.365733857Z" level=info msg="shim disconnected" id=709a50a42f10a8d0c24462ce161fe377dc032683b1a385a0e2ed6374e96c5f39 Aug 13 01:25:32.365897 env[1557]: time="2025-08-13T01:25:32.365771168Z" level=warning msg="cleaning up after shim disconnected" id=709a50a42f10a8d0c24462ce161fe377dc032683b1a385a0e2ed6374e96c5f39 namespace=k8s.io Aug 13 01:25:32.365897 env[1557]: time="2025-08-13T01:25:32.365780764Z" level=info msg="cleaning up dead shim" Aug 13 01:25:32.371035 env[1557]: time="2025-08-13T01:25:32.370991911Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:25:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5082 runtime=io.containerd.runc.v2\n" Aug 13 01:25:33.091055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-709a50a42f10a8d0c24462ce161fe377dc032683b1a385a0e2ed6374e96c5f39-rootfs.mount: Deactivated successfully. Aug 13 01:25:33.304805 env[1557]: time="2025-08-13T01:25:33.304778464Z" level=info msg="CreateContainer within sandbox \"27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:25:33.309104 env[1557]: time="2025-08-13T01:25:33.309055428Z" level=info msg="CreateContainer within sandbox \"27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4d9746e30d43a1a2f8ac0e49ea6672aa41b68643eaae72e7ed8736d47076cbff\"" Aug 13 01:25:33.309335 env[1557]: time="2025-08-13T01:25:33.309318570Z" level=info msg="StartContainer for \"4d9746e30d43a1a2f8ac0e49ea6672aa41b68643eaae72e7ed8736d47076cbff\"" Aug 13 01:25:33.319522 systemd[1]: Started cri-containerd-4d9746e30d43a1a2f8ac0e49ea6672aa41b68643eaae72e7ed8736d47076cbff.scope. 
Aug 13 01:25:33.333920 env[1557]: time="2025-08-13T01:25:33.333885708Z" level=info msg="StartContainer for \"4d9746e30d43a1a2f8ac0e49ea6672aa41b68643eaae72e7ed8736d47076cbff\" returns successfully" Aug 13 01:25:33.334445 systemd[1]: cri-containerd-4d9746e30d43a1a2f8ac0e49ea6672aa41b68643eaae72e7ed8736d47076cbff.scope: Deactivated successfully. Aug 13 01:25:33.371352 env[1557]: time="2025-08-13T01:25:33.371197006Z" level=info msg="shim disconnected" id=4d9746e30d43a1a2f8ac0e49ea6672aa41b68643eaae72e7ed8736d47076cbff Aug 13 01:25:33.371352 env[1557]: time="2025-08-13T01:25:33.371275595Z" level=warning msg="cleaning up after shim disconnected" id=4d9746e30d43a1a2f8ac0e49ea6672aa41b68643eaae72e7ed8736d47076cbff namespace=k8s.io Aug 13 01:25:33.371352 env[1557]: time="2025-08-13T01:25:33.371299483Z" level=info msg="cleaning up dead shim" Aug 13 01:25:33.381816 env[1557]: time="2025-08-13T01:25:33.381721105Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:25:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5136 runtime=io.containerd.runc.v2\n" Aug 13 01:25:33.875389 update_engine[1551]: I0813 01:25:33.875298 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:25:33.876293 update_engine[1551]: I0813 01:25:33.875820 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:25:33.876293 update_engine[1551]: E0813 01:25:33.876044 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:25:33.876293 update_engine[1551]: I0813 01:25:33.876221 1551 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 01:25:34.091315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d9746e30d43a1a2f8ac0e49ea6672aa41b68643eaae72e7ed8736d47076cbff-rootfs.mount: Deactivated successfully. 
Aug 13 01:25:34.301621 env[1557]: time="2025-08-13T01:25:34.301387990Z" level=info msg="CreateContainer within sandbox \"27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 01:25:34.312603 env[1557]: time="2025-08-13T01:25:34.312559030Z" level=info msg="CreateContainer within sandbox \"27e090c1c3d7e54505d058bef3a3c157bda29b3c6d3a5d34518d4406b74a73c8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f91cf3dc14f0279a857f8b544ca7ca346b5c112fbf70c2b3ffe7638eb26c17d6\"" Aug 13 01:25:34.312939 env[1557]: time="2025-08-13T01:25:34.312891500Z" level=info msg="StartContainer for \"f91cf3dc14f0279a857f8b544ca7ca346b5c112fbf70c2b3ffe7638eb26c17d6\"" Aug 13 01:25:34.314010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932374898.mount: Deactivated successfully. Aug 13 01:25:34.322733 systemd[1]: Started cri-containerd-f91cf3dc14f0279a857f8b544ca7ca346b5c112fbf70c2b3ffe7638eb26c17d6.scope. 
Aug 13 01:25:34.335132 env[1557]: time="2025-08-13T01:25:34.335104715Z" level=info msg="StartContainer for \"f91cf3dc14f0279a857f8b544ca7ca346b5c112fbf70c2b3ffe7638eb26c17d6\" returns successfully" Aug 13 01:25:34.486242 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 01:25:35.306599 kubelet[2463]: I0813 01:25:35.306557 2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s5z6s" podStartSLOduration=5.306542938 podStartE2EDuration="5.306542938s" podCreationTimestamp="2025-08-13 01:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:25:35.306159529 +0000 UTC m=+428.330667758" watchObservedRunningTime="2025-08-13 01:25:35.306542938 +0000 UTC m=+428.331051177" Aug 13 01:25:35.392906 kubelet[2463]: W0813 01:25:35.392728 2463 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727573b0_db13_4c8e_a020_d31e51a98d03.slice/cri-containerd-20cafbf37828d8c0778a64fc9fff0f1b8e2116c9762779096c928f8f4054ca19.scope WatchSource:0}: task 20cafbf37828d8c0778a64fc9fff0f1b8e2116c9762779096c928f8f4054ca19 not found Aug 13 01:25:36.096852 kubelet[2463]: I0813 01:25:36.096807 2463 setters.go:618] "Node became not ready" node="ci-3510.3.8-a-9864ec3500" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T01:25:36Z","lastTransitionTime":"2025-08-13T01:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 01:25:37.624028 systemd-networkd[1313]: lxc_health: Link UP Aug 13 01:25:37.648099 systemd-networkd[1313]: lxc_health: Gained carrier Aug 13 01:25:37.648256 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 01:25:38.502427 kubelet[2463]: 
W0813 01:25:38.502399 2463 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727573b0_db13_4c8e_a020_d31e51a98d03.slice/cri-containerd-4084aeaf92a22edff90647d50189ad3e883adabdb080f3a497bfa904997e813a.scope WatchSource:0}: task 4084aeaf92a22edff90647d50189ad3e883adabdb080f3a497bfa904997e813a not found Aug 13 01:25:39.207368 systemd-networkd[1313]: lxc_health: Gained IPv6LL Aug 13 01:25:41.609080 kubelet[2463]: W0813 01:25:41.608984 2463 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727573b0_db13_4c8e_a020_d31e51a98d03.slice/cri-containerd-709a50a42f10a8d0c24462ce161fe377dc032683b1a385a0e2ed6374e96c5f39.scope WatchSource:0}: task 709a50a42f10a8d0c24462ce161fe377dc032683b1a385a0e2ed6374e96c5f39 not found Aug 13 01:25:43.679215 sshd[4740]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:43.685375 systemd[1]: sshd@26-147.75.71.225:22-139.178.89.65:46978.service: Deactivated successfully. Aug 13 01:25:43.687283 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 01:25:43.689114 systemd-logind[1590]: Session 28 logged out. Waiting for processes to exit. Aug 13 01:25:43.691507 systemd-logind[1590]: Removed session 28. 
Aug 13 01:25:43.875404 update_engine[1551]: I0813 01:25:43.875326 1551 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:25:43.876542 update_engine[1551]: I0813 01:25:43.875952 1551 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:25:43.876542 update_engine[1551]: E0813 01:25:43.876219 1551 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:25:43.876542 update_engine[1551]: I0813 01:25:43.876492 1551 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 13 01:25:44.720438 kubelet[2463]: W0813 01:25:44.720310 2463 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod727573b0_db13_4c8e_a020_d31e51a98d03.slice/cri-containerd-4d9746e30d43a1a2f8ac0e49ea6672aa41b68643eaae72e7ed8736d47076cbff.scope WatchSource:0}: task 4d9746e30d43a1a2f8ac0e49ea6672aa41b68643eaae72e7ed8736d47076cbff not found