Sep 6 01:36:49.556035 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 5 22:53:38 -00 2025 Sep 6 01:36:49.556048 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 01:36:49.556055 kernel: BIOS-provided physical RAM map: Sep 6 01:36:49.556059 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Sep 6 01:36:49.556063 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Sep 6 01:36:49.556067 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Sep 6 01:36:49.556071 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Sep 6 01:36:49.556075 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Sep 6 01:36:49.556079 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b17fff] usable Sep 6 01:36:49.556083 kernel: BIOS-e820: [mem 0x0000000081b18000-0x0000000081b18fff] ACPI NVS Sep 6 01:36:49.556088 kernel: BIOS-e820: [mem 0x0000000081b19000-0x0000000081b19fff] reserved Sep 6 01:36:49.556092 kernel: BIOS-e820: [mem 0x0000000081b1a000-0x000000008afc4fff] usable Sep 6 01:36:49.556095 kernel: BIOS-e820: [mem 0x000000008afc5000-0x000000008c0a9fff] reserved Sep 6 01:36:49.556099 kernel: BIOS-e820: [mem 0x000000008c0aa000-0x000000008c232fff] usable Sep 6 01:36:49.556104 kernel: BIOS-e820: [mem 0x000000008c233000-0x000000008c664fff] ACPI NVS Sep 6 01:36:49.556110 kernel: BIOS-e820: [mem 0x000000008c665000-0x000000008eefefff] reserved Sep 6 01:36:49.556114 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Sep 6 01:36:49.556118 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Sep 6 01:36:49.556122 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 6 01:36:49.556127 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Sep 6 01:36:49.556131 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Sep 6 01:36:49.556135 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Sep 6 01:36:49.556139 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Sep 6 01:36:49.556143 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Sep 6 01:36:49.556147 kernel: NX (Execute Disable) protection: active Sep 6 01:36:49.556152 kernel: SMBIOS 3.2.1 present. 
Sep 6 01:36:49.556157 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Sep 6 01:36:49.556161 kernel: tsc: Detected 3400.000 MHz processor Sep 6 01:36:49.556165 kernel: tsc: Detected 3399.906 MHz TSC Sep 6 01:36:49.556170 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 6 01:36:49.556174 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 6 01:36:49.556179 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Sep 6 01:36:49.556183 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 6 01:36:49.556188 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Sep 6 01:36:49.556192 kernel: Using GB pages for direct mapping Sep 6 01:36:49.556196 kernel: ACPI: Early table checksum verification disabled Sep 6 01:36:49.556201 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Sep 6 01:36:49.556206 kernel: ACPI: XSDT 0x000000008C5460C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Sep 6 01:36:49.556210 kernel: ACPI: FACP 0x000000008C582670 000114 (v06 01072009 AMI 00010013) Sep 6 01:36:49.556215 kernel: ACPI: DSDT 0x000000008C546268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Sep 6 01:36:49.556221 kernel: ACPI: FACS 0x000000008C664F80 000040 Sep 6 01:36:49.556226 kernel: ACPI: APIC 0x000000008C582788 00012C (v04 01072009 AMI 00010013) Sep 6 01:36:49.556231 kernel: ACPI: FPDT 0x000000008C5828B8 000044 (v01 01072009 AMI 00010013) Sep 6 01:36:49.556236 kernel: ACPI: FIDT 0x000000008C582900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Sep 6 01:36:49.556241 kernel: ACPI: MCFG 0x000000008C5829A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Sep 6 01:36:49.556246 kernel: ACPI: SPMI 0x000000008C5829E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Sep 6 01:36:49.556250 kernel: ACPI: SSDT 0x000000008C582A28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Sep 6 01:36:49.556255 kernel: ACPI: SSDT 0x000000008C584548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Sep 6 01:36:49.556260 kernel: ACPI: SSDT 0x000000008C587710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Sep 6 01:36:49.556264 kernel: ACPI: HPET 0x000000008C589A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 6 01:36:49.556270 kernel: ACPI: SSDT 0x000000008C589A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Sep 6 01:36:49.556275 kernel: ACPI: SSDT 0x000000008C58AA28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Sep 6 01:36:49.556280 kernel: ACPI: UEFI 0x000000008C58B320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 6 01:36:49.556284 kernel: ACPI: LPIT 0x000000008C58B368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 6 01:36:49.556289 kernel: ACPI: SSDT 0x000000008C58B400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Sep 6 01:36:49.556294 kernel: ACPI: SSDT 0x000000008C58DBE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Sep 6 01:36:49.556299 kernel: ACPI: DBGP 0x000000008C58F0C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 6 01:36:49.556303 kernel: ACPI: DBG2 0x000000008C58F100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Sep 6 01:36:49.556309 kernel: ACPI: SSDT 0x000000008C58F158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Sep 6 01:36:49.556314 kernel: ACPI: DMAR 0x000000008C590CC0 000070 (v01 INTEL EDK2 00000002 01000013) Sep 6 01:36:49.556319 kernel: ACPI: SSDT 0x000000008C590D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Sep 6 01:36:49.556323 kernel: ACPI: TPM2 0x000000008C590E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Sep 6 01:36:49.556328 kernel: ACPI: SSDT 
0x000000008C590EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Sep 6 01:36:49.556333 kernel: ACPI: WSMT 0x000000008C591C40 000028 (v01 SUPERM 01072009 AMI 00010013) Sep 6 01:36:49.556337 kernel: ACPI: EINJ 0x000000008C591C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Sep 6 01:36:49.556342 kernel: ACPI: ERST 0x000000008C591D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Sep 6 01:36:49.556347 kernel: ACPI: BERT 0x000000008C591FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Sep 6 01:36:49.556352 kernel: ACPI: HEST 0x000000008C591FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Sep 6 01:36:49.556360 kernel: ACPI: SSDT 0x000000008C592278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Sep 6 01:36:49.556365 kernel: ACPI: Reserving FACP table memory at [mem 0x8c582670-0x8c582783] Sep 6 01:36:49.556370 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c546268-0x8c58266b] Sep 6 01:36:49.556374 kernel: ACPI: Reserving FACS table memory at [mem 0x8c664f80-0x8c664fbf] Sep 6 01:36:49.556379 kernel: ACPI: Reserving APIC table memory at [mem 0x8c582788-0x8c5828b3] Sep 6 01:36:49.556384 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c5828b8-0x8c5828fb] Sep 6 01:36:49.556388 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c582900-0x8c58299b] Sep 6 01:36:49.556394 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c5829a0-0x8c5829db] Sep 6 01:36:49.556399 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c5829e0-0x8c582a20] Sep 6 01:36:49.556403 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c582a28-0x8c584543] Sep 6 01:36:49.556408 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c584548-0x8c58770d] Sep 6 01:36:49.556426 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c587710-0x8c589a3a] Sep 6 01:36:49.556431 kernel: ACPI: Reserving HPET table memory at [mem 0x8c589a40-0x8c589a77] Sep 6 01:36:49.556435 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c589a78-0x8c58aa25] Sep 6 01:36:49.556440 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58b31b] Sep 6 01:36:49.556445 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c58b320-0x8c58b361] Sep 6 01:36:49.556450 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c58b368-0x8c58b3fb] Sep 6 01:36:49.556455 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b400-0x8c58dbdd] Sep 6 01:36:49.556459 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58dbe0-0x8c58f0c1] Sep 6 01:36:49.556464 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c58f0c8-0x8c58f0fb] Sep 6 01:36:49.556469 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c58f100-0x8c58f153] Sep 6 01:36:49.556473 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f158-0x8c590cbe] Sep 6 01:36:49.556478 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c590cc0-0x8c590d2f] Sep 6 01:36:49.556482 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590d30-0x8c590e73] Sep 6 01:36:49.556487 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c590e78-0x8c590eab] Sep 6 01:36:49.556492 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590eb0-0x8c591c3e] Sep 6 01:36:49.556497 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c591c40-0x8c591c67] Sep 6 01:36:49.556502 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c591c68-0x8c591d97] Sep 6 01:36:49.556506 kernel: ACPI: Reserving ERST table memory at [mem 0x8c591d98-0x8c591fc7] Sep 6 01:36:49.556511 kernel: ACPI: Reserving BERT table memory at [mem 0x8c591fc8-0x8c591ff7] Sep 6 01:36:49.556515 kernel: ACPI: Reserving HEST table memory at [mem 
0x8c591ff8-0x8c592273] Sep 6 01:36:49.556520 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592278-0x8c5923d9] Sep 6 01:36:49.556524 kernel: No NUMA configuration found Sep 6 01:36:49.556529 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Sep 6 01:36:49.556535 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Sep 6 01:36:49.556539 kernel: Zone ranges: Sep 6 01:36:49.556544 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 6 01:36:49.556549 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 6 01:36:49.556553 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Sep 6 01:36:49.556558 kernel: Movable zone start for each node Sep 6 01:36:49.556562 kernel: Early memory node ranges Sep 6 01:36:49.556567 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Sep 6 01:36:49.556572 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Sep 6 01:36:49.556577 kernel: node 0: [mem 0x0000000040400000-0x0000000081b17fff] Sep 6 01:36:49.556582 kernel: node 0: [mem 0x0000000081b1a000-0x000000008afc4fff] Sep 6 01:36:49.556587 kernel: node 0: [mem 0x000000008c0aa000-0x000000008c232fff] Sep 6 01:36:49.556591 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Sep 6 01:36:49.556596 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Sep 6 01:36:49.556600 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Sep 6 01:36:49.556605 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 6 01:36:49.556613 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Sep 6 01:36:49.556619 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Sep 6 01:36:49.556624 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Sep 6 01:36:49.556629 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Sep 6 01:36:49.556635 kernel: On node 0, zone DMA32: 11468 pages in unavailable ranges Sep 6 01:36:49.556640 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Sep 6 01:36:49.556645 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Sep 6 01:36:49.556650 kernel: ACPI: PM-Timer IO Port: 0x1808 Sep 6 01:36:49.556654 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 6 01:36:49.556659 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 6 01:36:49.556664 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 6 01:36:49.556670 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 6 01:36:49.556675 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 6 01:36:49.556680 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 6 01:36:49.556685 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 6 01:36:49.556689 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 6 01:36:49.556694 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 6 01:36:49.556699 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 6 01:36:49.556704 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 6 01:36:49.556709 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 6 01:36:49.556715 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 6 01:36:49.556720 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 6 01:36:49.556724 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 6 01:36:49.556729 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 6 01:36:49.556734 kernel: IOAPIC[0]: apic_id 2, version 32, address 
0xfec00000, GSI 0-119 Sep 6 01:36:49.556739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 6 01:36:49.556744 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 6 01:36:49.556749 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 6 01:36:49.556754 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 6 01:36:49.556760 kernel: TSC deadline timer available Sep 6 01:36:49.556765 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Sep 6 01:36:49.556770 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Sep 6 01:36:49.556775 kernel: Booting paravirtualized kernel on bare hardware Sep 6 01:36:49.556780 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 6 01:36:49.556785 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Sep 6 01:36:49.556790 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Sep 6 01:36:49.556795 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Sep 6 01:36:49.556800 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 6 01:36:49.556806 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232407 Sep 6 01:36:49.556810 kernel: Policy zone: Normal Sep 6 01:36:49.556816 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 01:36:49.556821 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 01:36:49.556826 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Sep 6 01:36:49.556831 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Sep 6 01:36:49.556836 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 01:36:49.556842 kernel: Memory: 32722572K/33452948K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 730116K reserved, 0K cma-reserved) Sep 6 01:36:49.556847 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 6 01:36:49.556852 kernel: ftrace: allocating 34612 entries in 136 pages Sep 6 01:36:49.556857 kernel: ftrace: allocated 136 pages with 2 groups Sep 6 01:36:49.556862 kernel: rcu: Hierarchical RCU implementation. Sep 6 01:36:49.556867 kernel: rcu: RCU event tracing is enabled. Sep 6 01:36:49.556873 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 6 01:36:49.556878 kernel: Rude variant of Tasks RCU enabled. Sep 6 01:36:49.556883 kernel: Tracing variant of Tasks RCU enabled. Sep 6 01:36:49.556888 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 6 01:36:49.556893 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 6 01:36:49.556898 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Sep 6 01:36:49.556903 kernel: random: crng init done Sep 6 01:36:49.556908 kernel: Console: colour dummy device 80x25 Sep 6 01:36:49.556913 kernel: printk: console [tty0] enabled Sep 6 01:36:49.556918 kernel: printk: console [ttyS1] enabled Sep 6 01:36:49.556923 kernel: ACPI: Core revision 20210730 Sep 6 01:36:49.556928 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Sep 6 01:36:49.556933 kernel: APIC: Switch to symmetric I/O mode setup Sep 6 01:36:49.556938 kernel: DMAR: Host address width 39 Sep 6 01:36:49.556943 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Sep 6 01:36:49.556948 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Sep 6 01:36:49.556953 kernel: DMAR: RMRR base: 0x0000008cf10000 end: 0x0000008d159fff Sep 6 01:36:49.556958 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Sep 6 01:36:49.556963 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Sep 6 01:36:49.556968 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Sep 6 01:36:49.556973 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Sep 6 01:36:49.556978 kernel: x2apic enabled Sep 6 01:36:49.556984 kernel: Switched APIC routing to cluster x2apic. Sep 6 01:36:49.556989 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Sep 6 01:36:49.556994 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Sep 6 01:36:49.556999 kernel: CPU0: Thermal monitoring enabled (TM1) Sep 6 01:36:49.557003 kernel: process: using mwait in idle threads Sep 6 01:36:49.557008 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 6 01:36:49.557013 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 6 01:36:49.557018 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 6 01:36:49.557023 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Sep 6 01:36:49.557029 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 6 01:36:49.557034 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 6 01:36:49.557039 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 6 01:36:49.557043 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 6 01:36:49.557048 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 6 01:36:49.557053 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 6 01:36:49.557058 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Sep 6 01:36:49.557063 kernel: TAA: Mitigation: TSX disabled Sep 6 01:36:49.557068 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Sep 6 01:36:49.557072 kernel: SRBDS: Mitigation: Microcode Sep 6 01:36:49.557077 kernel: GDS: Vulnerable: No microcode Sep 6 01:36:49.557083 kernel: active return thunk: its_return_thunk Sep 6 01:36:49.557088 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 6 01:36:49.557093 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 6 01:36:49.557098 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 6 01:36:49.557103 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 6 01:36:49.557107 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 6 01:36:49.557112 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 6 01:36:49.557117 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 6 01:36:49.557122 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 6 01:36:49.557127 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 6 01:36:49.557132 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Sep 6 01:36:49.557138 kernel: Freeing SMP alternatives memory: 32K Sep 6 01:36:49.557143 kernel: pid_max: default: 32768 minimum: 301 Sep 6 01:36:49.557147 kernel: LSM: Security Framework initializing Sep 6 01:36:49.557152 kernel: SELinux: Initializing. Sep 6 01:36:49.557157 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 6 01:36:49.557162 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 6 01:36:49.557167 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Sep 6 01:36:49.557172 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 6 01:36:49.557177 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Sep 6 01:36:49.557181 kernel: ... version: 4 Sep 6 01:36:49.557186 kernel: ... bit width: 48 Sep 6 01:36:49.557192 kernel: ... generic registers: 4 Sep 6 01:36:49.557197 kernel: ... value mask: 0000ffffffffffff Sep 6 01:36:49.557202 kernel: ... max period: 00007fffffffffff Sep 6 01:36:49.557207 kernel: ... fixed-purpose events: 3 Sep 6 01:36:49.557212 kernel: ... event mask: 000000070000000f Sep 6 01:36:49.557217 kernel: signal: max sigframe size: 2032 Sep 6 01:36:49.557222 kernel: rcu: Hierarchical SRCU implementation. Sep 6 01:36:49.557227 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Sep 6 01:36:49.557232 kernel: smp: Bringing up secondary CPUs ... Sep 6 01:36:49.557237 kernel: x86: Booting SMP configuration: Sep 6 01:36:49.557242 kernel: .... 
node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Sep 6 01:36:49.557248 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 6 01:36:49.557253 kernel: #9 #10 #11 #12 #13 #14 #15 Sep 6 01:36:49.557257 kernel: smp: Brought up 1 node, 16 CPUs Sep 6 01:36:49.557262 kernel: smpboot: Max logical packages: 1 Sep 6 01:36:49.557267 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Sep 6 01:36:49.557272 kernel: devtmpfs: initialized Sep 6 01:36:49.557277 kernel: x86/mm: Memory block size: 128MB Sep 6 01:36:49.557283 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b18000-0x81b18fff] (4096 bytes) Sep 6 01:36:49.557288 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c233000-0x8c664fff] (4399104 bytes) Sep 6 01:36:49.557293 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 01:36:49.557298 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 6 01:36:49.557303 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 01:36:49.557308 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 01:36:49.557312 kernel: audit: initializing netlink subsys (disabled) Sep 6 01:36:49.557317 kernel: audit: type=2000 audit(1757122604.041:1): state=initialized audit_enabled=0 res=1 Sep 6 01:36:49.557322 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 01:36:49.557328 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 6 01:36:49.557333 kernel: cpuidle: using governor menu Sep 6 01:36:49.557338 kernel: ACPI: bus type PCI registered Sep 6 01:36:49.557343 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 01:36:49.557348 kernel: dca service started, version 1.12.1 Sep 6 01:36:49.557353 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 6 01:36:49.557360 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Sep 6 01:36:49.557365 kernel: PCI: Using configuration type 1 for base access Sep 6 01:36:49.557389 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Sep 6 01:36:49.557395 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 6 01:36:49.557400 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 01:36:49.557419 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 01:36:49.557424 kernel: ACPI: Added _OSI(Module Device) Sep 6 01:36:49.557429 kernel: ACPI: Added _OSI(Processor Device) Sep 6 01:36:49.557434 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 01:36:49.557439 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 01:36:49.557444 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 01:36:49.557448 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 01:36:49.557454 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Sep 6 01:36:49.557459 kernel: ACPI: Dynamic OEM Table Load: Sep 6 01:36:49.557464 kernel: ACPI: SSDT 0xFFFF8DA8C021B100 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Sep 6 01:36:49.557469 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Sep 6 01:36:49.557474 kernel: ACPI: Dynamic OEM Table Load: Sep 6 01:36:49.557479 kernel: ACPI: SSDT 0xFFFF8DA8C1AE2C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Sep 6 01:36:49.557484 kernel: ACPI: Dynamic OEM Table Load: Sep 6 01:36:49.557489 kernel: ACPI: SSDT 0xFFFF8DA8C1A5F000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Sep 6 01:36:49.557494 kernel: ACPI: Dynamic OEM Table Load: Sep 6 01:36:49.557499 kernel: ACPI: SSDT 0xFFFF8DA8C1B4E000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Sep 6 01:36:49.557504 kernel: ACPI: Dynamic OEM Table Load: Sep 6 01:36:49.557509 kernel: ACPI: SSDT 0xFFFF8DA8C0148000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Sep 6 01:36:49.557514 kernel: ACPI: Dynamic OEM Table Load: Sep 6 01:36:49.557519 kernel: ACPI: SSDT 0xFFFF8DA8C1AE1C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Sep 6 01:36:49.557524 kernel: ACPI: Interpreter enabled Sep 6 01:36:49.557529 kernel: ACPI: PM: (supports S0 S5) Sep 6 01:36:49.557534 kernel: ACPI: Using IOAPIC for interrupt routing Sep 6 01:36:49.557539 kernel: HEST: Enabling Firmware First mode for corrected errors. Sep 6 01:36:49.557544 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Sep 6 01:36:49.557549 kernel: HEST: Table parsing has been initialized. Sep 6 01:36:49.557554 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Sep 6 01:36:49.557559 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 6 01:36:49.557564 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Sep 6 01:36:49.557569 kernel: ACPI: PM: Power Resource [USBC] Sep 6 01:36:49.557574 kernel: ACPI: PM: Power Resource [V0PR] Sep 6 01:36:49.557579 kernel: ACPI: PM: Power Resource [V1PR] Sep 6 01:36:49.557584 kernel: ACPI: PM: Power Resource [V2PR] Sep 6 01:36:49.557588 kernel: ACPI: PM: Power Resource [WRST] Sep 6 01:36:49.557594 kernel: ACPI: PM: Power Resource [FN00] Sep 6 01:36:49.557599 kernel: ACPI: PM: Power Resource [FN01] Sep 6 01:36:49.557604 kernel: ACPI: PM: Power Resource [FN02] Sep 6 01:36:49.557609 kernel: ACPI: PM: Power Resource [FN03] Sep 6 01:36:49.557614 kernel: ACPI: PM: Power Resource [FN04] Sep 6 01:36:49.557618 kernel: ACPI: PM: Power Resource [PIN] Sep 6 01:36:49.557623 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Sep 6 01:36:49.557689 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 6 01:36:49.557738 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Sep 6 01:36:49.557782 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Sep 6 01:36:49.557789 kernel: PCI host bridge to bus 0000:00 Sep 6 01:36:49.557836 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 6 01:36:49.557876 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 6 01:36:49.557915 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 6 01:36:49.557953 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Sep 6 01:36:49.557992 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Sep 6 01:36:49.558031 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Sep 6 01:36:49.558083 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Sep 6 01:36:49.558133 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Sep 6 01:36:49.558178 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Sep 6 01:36:49.558227 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Sep 6 01:36:49.558272 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x95520000-0x95520fff 64bit] Sep 6 01:36:49.558320 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Sep 6 01:36:49.558366 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Sep 6 01:36:49.558452 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Sep 6 01:36:49.558496 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Sep 6 01:36:49.558542 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Sep 6 01:36:49.558590 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Sep 6 01:36:49.558635 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Sep 6 01:36:49.558678 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551e000-0x9551efff 64bit] Sep 6 01:36:49.558725 kernel: pci 0000:00:14.5: [8086:a375] type 00 class 0x080501 Sep 6 01:36:49.558769 kernel: pci 0000:00:14.5: reg 0x10: [mem 0x9551d000-0x9551dfff 64bit] Sep 6 01:36:49.558817 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Sep 6 01:36:49.558862 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 6 01:36:49.558909 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Sep 6 01:36:49.558953 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 6 01:36:49.559000 
kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Sep 6 01:36:49.559044 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Sep 6 01:36:49.559086 kernel: pci 0000:00:16.0: PME# supported from D3hot Sep 6 01:36:49.559137 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Sep 6 01:36:49.559181 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Sep 6 01:36:49.559224 kernel: pci 0000:00:16.1: PME# supported from D3hot Sep 6 01:36:49.559271 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Sep 6 01:36:49.559315 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Sep 6 01:36:49.559360 kernel: pci 0000:00:16.4: PME# supported from D3hot Sep 6 01:36:49.559441 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Sep 6 01:36:49.559492 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Sep 6 01:36:49.559537 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Sep 6 01:36:49.559580 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Sep 6 01:36:49.559622 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Sep 6 01:36:49.559665 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Sep 6 01:36:49.559709 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Sep 6 01:36:49.559751 kernel: pci 0000:00:17.0: PME# supported from D3hot Sep 6 01:36:49.559799 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Sep 6 01:36:49.559844 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Sep 6 01:36:49.559893 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Sep 6 01:36:49.559936 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Sep 6 01:36:49.559984 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Sep 6 01:36:49.560029 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Sep 6 01:36:49.560077 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Sep 6 01:36:49.560122 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Sep 6 01:36:49.560173 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Sep 6 01:36:49.560219 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Sep 6 01:36:49.560267 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Sep 6 01:36:49.560311 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 6 01:36:49.560360 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Sep 6 01:36:49.560443 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Sep 6 01:36:49.560487 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Sep 6 01:36:49.560530 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Sep 6 01:36:49.560579 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Sep 6 01:36:49.560623 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Sep 6 01:36:49.560673 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Sep 6 01:36:49.560719 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Sep 6 01:36:49.560765 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Sep 6 01:36:49.560811 kernel: pci 0000:01:00.0: PME# supported from D3cold Sep 6 01:36:49.560857 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 6 01:36:49.560904 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 6 01:36:49.560955 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Sep 6 
01:36:49.561001 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Sep 6 01:36:49.561046 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Sep 6 01:36:49.561092 kernel: pci 0000:01:00.1: PME# supported from D3cold Sep 6 01:36:49.561137 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 6 01:36:49.561183 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 6 01:36:49.561228 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 6 01:36:49.561272 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 6 01:36:49.561316 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 6 01:36:49.561379 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 6 01:36:49.561449 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Sep 6 01:36:49.561495 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Sep 6 01:36:49.561540 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Sep 6 01:36:49.561587 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Sep 6 01:36:49.561632 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Sep 6 01:36:49.561678 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 6 01:36:49.561722 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 6 01:36:49.561766 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 6 01:36:49.561810 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 6 01:36:49.561858 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Sep 6 01:36:49.561904 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Sep 6 01:36:49.561950 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Sep 6 01:36:49.561996 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Sep 6 01:36:49.562073 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Sep 6 01:36:49.562138 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Sep 6 01:36:49.562182 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 6 01:36:49.562226 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 6 01:36:49.562270 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 6 01:36:49.562316 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 6 01:36:49.562390 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Sep 6 01:36:49.562457 kernel: pci 0000:06:00.0: enabling Extended Tags Sep 6 01:36:49.562505 kernel: pci 0000:06:00.0: supports D1 D2 Sep 6 01:36:49.562549 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 6 01:36:49.562594 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 6 01:36:49.562637 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 6 01:36:49.562681 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 6 01:36:49.562731 kernel: pci_bus 0000:07: extended config space not accessible Sep 6 01:36:49.562783 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Sep 6 01:36:49.562833 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Sep 6 01:36:49.562881 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Sep 6 01:36:49.562929 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Sep 6 01:36:49.562975 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 6 01:36:49.563023 kernel: pci 0000:07:00.0: supports D1 D2 Sep 6 01:36:49.563072 kernel: pci 
0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 6 01:36:49.563118 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 6 01:36:49.563163 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 6 01:36:49.563209 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 6 01:36:49.563217 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 6 01:36:49.563222 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 6 01:36:49.563227 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 6 01:36:49.563233 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 6 01:36:49.563239 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 6 01:36:49.563245 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 6 01:36:49.563250 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 6 01:36:49.563255 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 6 01:36:49.563261 kernel: iommu: Default domain type: Translated Sep 6 01:36:49.563266 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 6 01:36:49.563312 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Sep 6 01:36:49.563384 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 6 01:36:49.563451 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Sep 6 01:36:49.563459 kernel: vgaarb: loaded Sep 6 01:36:49.563465 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 01:36:49.563470 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 01:36:49.563476 kernel: PTP clock support registered Sep 6 01:36:49.563481 kernel: PCI: Using ACPI for IRQ routing Sep 6 01:36:49.563486 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 6 01:36:49.563491 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 6 01:36:49.563497 kernel: e820: reserve RAM buffer [mem 0x81b18000-0x83ffffff] Sep 6 01:36:49.563502 kernel: e820: reserve RAM buffer [mem 0x8afc5000-0x8bffffff] Sep 6 01:36:49.563508 kernel: e820: reserve RAM buffer [mem 0x8c233000-0x8fffffff] Sep 6 01:36:49.563513 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Sep 6 01:36:49.563518 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Sep 6 01:36:49.563523 kernel: clocksource: Switched to clocksource tsc-early Sep 6 01:36:49.563528 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 01:36:49.563534 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 01:36:49.563539 kernel: pnp: PnP ACPI init Sep 6 01:36:49.563586 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 6 01:36:49.563631 kernel: pnp 00:02: [dma 0 disabled] Sep 6 01:36:49.563674 kernel: pnp 00:03: [dma 0 disabled] Sep 6 01:36:49.563717 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 6 01:36:49.563756 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 6 01:36:49.563800 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Sep 6 01:36:49.563844 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Sep 6 01:36:49.563886 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Sep 6 01:36:49.563925 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Sep 6 01:36:49.563965 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Sep 6 01:36:49.564004 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Sep 6 01:36:49.564043 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be 
reserved Sep 6 01:36:49.564082 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 6 01:36:49.564121 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 6 01:36:49.564166 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Sep 6 01:36:49.564206 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 6 01:36:49.564246 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 6 01:36:49.564285 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 6 01:36:49.564326 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 6 01:36:49.564388 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 6 01:36:49.564447 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Sep 6 01:36:49.564493 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Sep 6 01:36:49.564500 kernel: pnp: PnP ACPI: found 10 devices Sep 6 01:36:49.564506 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 6 01:36:49.564512 kernel: NET: Registered PF_INET protocol family Sep 6 01:36:49.564517 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 01:36:49.564522 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 6 01:36:49.564528 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 01:36:49.564533 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 01:36:49.564540 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Sep 6 01:36:49.564545 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 6 01:36:49.564551 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 6 01:36:49.564556 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 6 01:36:49.564561 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 01:36:49.564566 kernel: NET: Registered PF_XDP protocol family Sep 6 01:36:49.564610 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Sep 6 01:36:49.564654 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Sep 6 01:36:49.564698 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Sep 6 01:36:49.564746 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 6 01:36:49.564790 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 6 01:36:49.564836 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 6 01:36:49.564881 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 6 01:36:49.564926 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 6 01:36:49.564969 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 6 01:36:49.565015 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 6 01:36:49.565059 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 6 01:36:49.565103 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 6 01:36:49.565148 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 6 01:36:49.565191 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 6 01:36:49.565235 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 6 01:36:49.565281 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 6 
01:36:49.565325 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 6 01:36:49.565394 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 6 01:36:49.565459 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 6 01:36:49.565505 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 6 01:36:49.565550 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 6 01:36:49.565595 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 6 01:36:49.565638 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 6 01:36:49.565681 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 6 01:36:49.565723 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 6 01:36:49.565762 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 6 01:36:49.565801 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 6 01:36:49.565840 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 6 01:36:49.565879 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Sep 6 01:36:49.565917 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 6 01:36:49.565962 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Sep 6 01:36:49.566004 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 6 01:36:49.566049 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Sep 6 01:36:49.566091 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Sep 6 01:36:49.566135 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 6 01:36:49.566176 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Sep 6 01:36:49.566222 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Sep 6 01:36:49.566264 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Sep 6 01:36:49.566308 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 6 01:36:49.566352 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Sep 6 01:36:49.566383 kernel: PCI: CLS 64 bytes, default 64 Sep 6 01:36:49.566388 kernel: DMAR: No ATSR found Sep 6 01:36:49.566394 kernel: DMAR: No SATC found Sep 6 01:36:49.566399 kernel: DMAR: dmar0: Using Queued invalidation Sep 6 01:36:49.566462 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 6 01:36:49.566509 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 6 01:36:49.566555 kernel: pci 0000:00:08.0: Adding to iommu group 2 Sep 6 01:36:49.566599 kernel: pci 0000:00:12.0: Adding to iommu group 3 Sep 6 01:36:49.566642 kernel: pci 0000:00:14.0: Adding to iommu group 4 Sep 6 01:36:49.566685 kernel: pci 0000:00:14.2: Adding to iommu group 4 Sep 6 01:36:49.566728 kernel: pci 0000:00:14.5: Adding to iommu group 4 Sep 6 01:36:49.566770 kernel: pci 0000:00:15.0: Adding to iommu group 5 Sep 6 01:36:49.566812 kernel: pci 0000:00:15.1: Adding to iommu group 5 Sep 6 01:36:49.566856 kernel: pci 0000:00:16.0: Adding to iommu group 6 Sep 6 01:36:49.566900 kernel: pci 0000:00:16.1: Adding to iommu group 6 Sep 6 01:36:49.566943 kernel: pci 0000:00:16.4: Adding to iommu group 6 Sep 6 01:36:49.566985 kernel: pci 0000:00:17.0: Adding to iommu group 7 Sep 6 01:36:49.567029 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Sep 6 01:36:49.567072 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Sep 6 01:36:49.567116 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Sep 6 01:36:49.567159 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Sep 6 01:36:49.567202 kernel: pci 0000:00:1c.3: Adding to 
iommu group 12 Sep 6 01:36:49.567246 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Sep 6 01:36:49.567289 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Sep 6 01:36:49.567333 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Sep 6 01:36:49.567378 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Sep 6 01:36:49.567424 kernel: pci 0000:01:00.0: Adding to iommu group 1 Sep 6 01:36:49.567468 kernel: pci 0000:01:00.1: Adding to iommu group 1 Sep 6 01:36:49.567514 kernel: pci 0000:03:00.0: Adding to iommu group 15 Sep 6 01:36:49.567558 kernel: pci 0000:04:00.0: Adding to iommu group 16 Sep 6 01:36:49.567605 kernel: pci 0000:06:00.0: Adding to iommu group 17 Sep 6 01:36:49.567652 kernel: pci 0000:07:00.0: Adding to iommu group 17 Sep 6 01:36:49.567660 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 6 01:36:49.567665 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 6 01:36:49.567671 kernel: software IO TLB: mapped [mem 0x0000000086fc5000-0x000000008afc5000] (64MB) Sep 6 01:36:49.567676 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Sep 6 01:36:49.567682 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 6 01:36:49.567687 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 6 01:36:49.567692 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 6 01:36:49.567739 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 6 01:36:49.567747 kernel: Initialise system trusted keyrings Sep 6 01:36:49.567753 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 6 01:36:49.567758 kernel: Key type asymmetric registered Sep 6 01:36:49.567763 kernel: Asymmetric key parser 'x509' registered Sep 6 01:36:49.567768 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 01:36:49.567774 kernel: io scheduler mq-deadline registered Sep 6 01:36:49.567779 kernel: io scheduler kyber registered Sep 6 01:36:49.567784 kernel: io scheduler bfq registered Sep 6 01:36:49.567830 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Sep 6 01:36:49.567873 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Sep 6 01:36:49.567919 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Sep 6 01:36:49.567963 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Sep 6 01:36:49.568007 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Sep 6 01:36:49.568051 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Sep 6 01:36:49.568100 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 6 01:36:49.568110 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 6 01:36:49.568115 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 6 01:36:49.568120 kernel: pstore: Registered erst as persistent store backend Sep 6 01:36:49.568126 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 6 01:36:49.568131 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 01:36:49.568136 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 6 01:36:49.568142 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 6 01:36:49.568147 kernel: hpet_acpi_add: no address or irqs in _CRS Sep 6 01:36:49.568191 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 6 01:36:49.568200 kernel: i8042: PNP: No PS/2 controller found. 
Sep 6 01:36:49.568239 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 6 01:36:49.568280 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 6 01:36:49.568319 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-09-06T01:36:48 UTC (1757122608) Sep 6 01:36:49.568362 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 6 01:36:49.568369 kernel: intel_pstate: Intel P-state driver initializing Sep 6 01:36:49.568375 kernel: intel_pstate: Disabling energy efficiency optimization Sep 6 01:36:49.568381 kernel: intel_pstate: HWP enabled Sep 6 01:36:49.568387 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Sep 6 01:36:49.568392 kernel: vesafb: scrolling: redraw Sep 6 01:36:49.568397 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Sep 6 01:36:49.568403 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000004f237cba, using 768k, total 768k Sep 6 01:36:49.568408 kernel: Console: switching to colour frame buffer device 128x48 Sep 6 01:36:49.568413 kernel: fb0: VESA VGA frame buffer device Sep 6 01:36:49.568418 kernel: NET: Registered PF_INET6 protocol family Sep 6 01:36:49.568424 kernel: Segment Routing with IPv6 Sep 6 01:36:49.568430 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 01:36:49.568435 kernel: NET: Registered PF_PACKET protocol family Sep 6 01:36:49.568440 kernel: Key type dns_resolver registered Sep 6 01:36:49.568445 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Sep 6 01:36:49.568451 kernel: microcode: Microcode Update Driver: v2.2. Sep 6 01:36:49.568456 kernel: IPI shorthand broadcast: enabled Sep 6 01:36:49.568461 kernel: sched_clock: Marking stable (1689957298, 1339925094)->(4492203733, -1462321341) Sep 6 01:36:49.568466 kernel: registered taskstats version 1 Sep 6 01:36:49.568472 kernel: Loading compiled-in X.509 certificates Sep 6 01:36:49.568478 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 59a3efd48c75422889eb056cb9758fbe471623cb' Sep 6 01:36:49.568483 kernel: Key type .fscrypt registered Sep 6 01:36:49.568488 kernel: Key type fscrypt-provisioning registered Sep 6 01:36:49.568494 kernel: pstore: Using crash dump compression: deflate Sep 6 01:36:49.568499 kernel: ima: Allocated hash algorithm: sha1 Sep 6 01:36:49.568504 kernel: ima: No architecture policies found Sep 6 01:36:49.568509 kernel: clk: Disabling unused clocks Sep 6 01:36:49.568514 kernel: Freeing unused kernel image (initmem) memory: 47492K Sep 6 01:36:49.568520 kernel: Write protecting the kernel read-only data: 28672k Sep 6 01:36:49.568526 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Sep 6 01:36:49.568531 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Sep 6 01:36:49.568536 kernel: Run /init as init process Sep 6 01:36:49.568542 kernel: with arguments: Sep 6 01:36:49.568547 kernel: /init Sep 6 01:36:49.568552 kernel: with environment: Sep 6 01:36:49.568557 kernel: HOME=/ Sep 6 01:36:49.568562 kernel: TERM=linux Sep 6 01:36:49.568567 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 01:36:49.568575 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 01:36:49.568582 systemd[1]: Detected architecture x86-64. Sep 6 01:36:49.568587 systemd[1]: Running in initrd. 
Sep 6 01:36:49.568593 systemd[1]: No hostname configured, using default hostname. Sep 6 01:36:49.568598 systemd[1]: Hostname set to . Sep 6 01:36:49.568603 systemd[1]: Initializing machine ID from random generator. Sep 6 01:36:49.568609 systemd[1]: Queued start job for default target initrd.target. Sep 6 01:36:49.568615 systemd[1]: Started systemd-ask-password-console.path. Sep 6 01:36:49.568621 systemd[1]: Reached target cryptsetup.target. Sep 6 01:36:49.568626 systemd[1]: Reached target paths.target. Sep 6 01:36:49.568631 systemd[1]: Reached target slices.target. Sep 6 01:36:49.568637 systemd[1]: Reached target swap.target. Sep 6 01:36:49.568642 systemd[1]: Reached target timers.target. Sep 6 01:36:49.568647 systemd[1]: Listening on iscsid.socket. Sep 6 01:36:49.568653 systemd[1]: Listening on iscsiuio.socket. Sep 6 01:36:49.568659 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 01:36:49.568665 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 01:36:49.568670 systemd[1]: Listening on systemd-journald.socket. Sep 6 01:36:49.568675 systemd[1]: Listening on systemd-networkd.socket. Sep 6 01:36:49.568681 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 01:36:49.568686 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Sep 6 01:36:49.568692 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Sep 6 01:36:49.568697 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 01:36:49.568703 kernel: clocksource: Switched to clocksource tsc Sep 6 01:36:49.568709 systemd[1]: Reached target sockets.target. Sep 6 01:36:49.568714 systemd[1]: Starting kmod-static-nodes.service... Sep 6 01:36:49.568720 systemd[1]: Finished network-cleanup.service. Sep 6 01:36:49.568725 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 01:36:49.568731 systemd[1]: Starting systemd-journald.service... Sep 6 01:36:49.568736 systemd[1]: Starting systemd-modules-load.service... Sep 6 01:36:49.568743 systemd-journald[268]: Journal started Sep 6 01:36:49.568770 systemd-journald[268]: Runtime Journal (/run/log/journal/4cabb7f4d7bc44cd9338143625fdf2d1) is 8.0M, max 640.0M, 632.0M free. Sep 6 01:36:49.571847 systemd-modules-load[269]: Inserted module 'overlay' Sep 6 01:36:49.626469 kernel: audit: type=1334 audit(1757122609.575:2): prog-id=6 op=LOAD Sep 6 01:36:49.626484 systemd[1]: Starting systemd-resolved.service... Sep 6 01:36:49.626493 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 6 01:36:49.575000 audit: BPF prog-id=6 op=LOAD Sep 6 01:36:49.662350 systemd-modules-load[269]: Inserted module 'br_netfilter' Sep 6 01:36:49.681362 kernel: Bridge firewalling registered Sep 6 01:36:49.681375 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 01:36:49.665088 systemd-resolved[271]: Positive Trust Anchors: Sep 6 01:36:49.718400 kernel: SCSI subsystem initialized Sep 6 01:36:49.718412 systemd[1]: Started systemd-journald.service. Sep 6 01:36:49.665093 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 01:36:49.807272 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 6 01:36:49.807286 kernel: audit: type=1130 audit(1757122609.738:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 01:36:49.807294 kernel: device-mapper: uevent: version 1.0.3 Sep 6 01:36:49.807301 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 01:36:49.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:49.665113 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 01:36:49.922593 kernel: audit: type=1130 audit(1757122609.832:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:49.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:49.666730 systemd-resolved[271]: Defaulting to hostname 'linux'. Sep 6 01:36:49.983605 kernel: audit: type=1130 audit(1757122609.930:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:49.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:49.739588 systemd[1]: Started systemd-resolved.service. Sep 6 01:36:50.043592 kernel: audit: type=1130 audit(1757122609.991:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:49.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:49.829317 systemd-modules-load[269]: Inserted module 'dm_multipath' Sep 6 01:36:50.105625 kernel: audit: type=1130 audit(1757122610.051:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:50.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:49.833718 systemd[1]: Finished kmod-static-nodes.service. Sep 6 01:36:50.168598 kernel: audit: type=1130 audit(1757122610.113:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:50.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:36:49.931737 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 01:36:49.992668 systemd[1]: Finished systemd-modules-load.service. Sep 6 01:36:50.052672 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 01:36:50.114715 systemd[1]: Reached target nss-lookup.target. Sep 6 01:36:50.178014 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 01:36:50.201937 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:36:50.202238 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 01:36:50.205018 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 01:36:50.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:50.205629 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:36:50.254586 kernel: audit: type=1130 audit(1757122610.203:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:50.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:50.267716 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 01:36:50.333472 kernel: audit: type=1130 audit(1757122610.266:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:50.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:50.325048 systemd[1]: Starting dracut-cmdline.service... Sep 6 01:36:50.348471 dracut-cmdline[294]: dracut-dracut-053 Sep 6 01:36:50.348471 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Sep 6 01:36:50.348471 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=a807e3b6c1f608bcead7858f1ad5b6908e6d312e2d99c0ec0e5454f978e611a7 Sep 6 01:36:50.419431 kernel: Loading iSCSI transport class v2.0-870. Sep 6 01:36:50.419446 kernel: iscsi: registered transport (tcp) Sep 6 01:36:50.460386 kernel: iscsi: registered transport (qla4xxx) Sep 6 01:36:50.460407 kernel: QLogic iSCSI HBA Driver Sep 6 01:36:50.493161 systemd[1]: Finished dracut-cmdline.service. Sep 6 01:36:50.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:50.493699 systemd[1]: Starting dracut-pre-udev.service... 
Sep 6 01:36:50.549425 kernel: raid6: avx2x4 gen() 48839 MB/s Sep 6 01:36:50.584392 kernel: raid6: avx2x4 xor() 22325 MB/s Sep 6 01:36:50.619420 kernel: raid6: avx2x2 gen() 53642 MB/s Sep 6 01:36:50.654361 kernel: raid6: avx2x2 xor() 32055 MB/s Sep 6 01:36:50.689361 kernel: raid6: avx2x1 gen() 45288 MB/s Sep 6 01:36:50.724391 kernel: raid6: avx2x1 xor() 27926 MB/s Sep 6 01:36:50.758378 kernel: raid6: sse2x4 gen() 21771 MB/s Sep 6 01:36:50.792391 kernel: raid6: sse2x4 xor() 12004 MB/s Sep 6 01:36:50.826421 kernel: raid6: sse2x2 gen() 22132 MB/s Sep 6 01:36:50.860421 kernel: raid6: sse2x2 xor() 13709 MB/s Sep 6 01:36:50.894397 kernel: raid6: sse2x1 gen() 18676 MB/s Sep 6 01:36:50.945939 kernel: raid6: sse2x1 xor() 9098 MB/s Sep 6 01:36:50.945957 kernel: raid6: using algorithm avx2x2 gen() 53642 MB/s Sep 6 01:36:50.945965 kernel: raid6: .... xor() 32055 MB/s, rmw enabled Sep 6 01:36:50.963979 kernel: raid6: using avx2x2 recovery algorithm Sep 6 01:36:51.010407 kernel: xor: automatically using best checksumming function avx Sep 6 01:36:51.089392 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 6 01:36:51.094927 systemd[1]: Finished dracut-pre-udev.service. Sep 6 01:36:51.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:51.102000 audit: BPF prog-id=7 op=LOAD Sep 6 01:36:51.102000 audit: BPF prog-id=8 op=LOAD Sep 6 01:36:51.104262 systemd[1]: Starting systemd-udevd.service... Sep 6 01:36:51.112100 systemd-udevd[476]: Using default interface naming scheme 'v252'. Sep 6 01:36:51.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:51.118596 systemd[1]: Started systemd-udevd.service. Sep 6 01:36:51.160479 dracut-pre-trigger[488]: rd.md=0: removing MD RAID activation Sep 6 01:36:51.136007 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 01:36:51.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:51.166853 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 01:36:51.177581 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 01:36:51.233051 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 01:36:51.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:51.262373 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 01:36:51.266056 kernel: libata version 3.00 loaded. Sep 6 01:36:51.307229 kernel: sdhci: Secure Digital Host Controller Interface driver Sep 6 01:36:51.307275 kernel: sdhci: Copyright(c) Pierre Ossman Sep 6 01:36:51.307366 kernel: AVX2 version of gcm_enc/dec engaged. 
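The raid6: lines above show the kernel timing each available gen()/xor() implementation and keeping the fastest one (avx2x2 at 53642 MB/s on this machine), and the later xor: line does the same for checksumming. As a rough illustration of that benchmark-and-select pattern only (a toy Python sketch, not the kernel's code; the candidate functions below are made-up stand-ins):

    # Toy sketch: time each candidate implementation and keep the fastest,
    # mirroring the raid6/xor "using best ..." selection logged above.
    import time
    from functools import reduce
    from operator import xor

    def xor_loop(data: bytes) -> int:
        acc = 0
        for b in data:
            acc ^= b
        return acc

    def xor_reduce(data: bytes) -> int:
        return reduce(xor, data, 0)

    CANDIDATES = {"loop": xor_loop, "reduce": xor_reduce}

    def pick_fastest(data: bytes) -> str:
        best_name, best_rate = None, 0.0
        for name, fn in CANDIDATES.items():
            start = time.perf_counter()
            fn(data)
            rate = len(data) / (time.perf_counter() - start)  # bytes per second
            if rate > best_rate:
                best_name, best_rate = name, rate
        return best_name

    print(pick_fastest(bytes(1_000_000)))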
Sep 6 01:36:51.307385 kernel: ACPI: bus type USB registered Sep 6 01:36:51.359337 kernel: usbcore: registered new interface driver usbfs Sep 6 01:36:51.359363 kernel: usbcore: registered new interface driver hub Sep 6 01:36:51.376748 kernel: usbcore: registered new device driver usb Sep 6 01:36:51.413362 kernel: AES CTR mode by8 optimization enabled Sep 6 01:36:51.413378 kernel: ahci 0000:00:17.0: version 3.0 Sep 6 01:36:51.507365 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 6 01:36:51.507383 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Sep 6 01:36:51.507477 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 6 01:36:51.507487 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 6 01:36:51.507545 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 6 01:36:52.253000 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 Sep 6 01:36:52.601811 kernel: igb 0000:03:00.0: added PHC on eth0 Sep 6 01:36:52.601879 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 6 01:36:52.601935 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:5e Sep 6 01:36:52.601989 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Sep 6 01:36:52.602041 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 6 01:36:52.602093 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 6 01:36:52.602147 kernel: scsi host0: ahci Sep 6 01:36:52.602205 kernel: scsi host1: ahci Sep 6 01:36:52.602259 kernel: scsi host2: ahci Sep 6 01:36:52.602312 kernel: scsi host3: ahci Sep 6 01:36:52.602366 kernel: scsi host4: ahci Sep 6 01:36:52.602421 kernel: scsi host5: ahci Sep 6 01:36:52.602536 kernel: scsi host6: ahci Sep 6 01:36:52.602589 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Sep 6 01:36:52.602597 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Sep 6 01:36:52.602604 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Sep 6 01:36:52.602611 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Sep 6 01:36:52.602618 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Sep 6 01:36:52.602624 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Sep 6 01:36:52.602631 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Sep 6 01:36:52.602638 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 6 01:36:52.602694 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 6 01:36:52.602745 kernel: igb 0000:04:00.0: added PHC on eth1 Sep 6 01:36:52.602799 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 6 01:36:52.602850 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:5f Sep 6 01:36:52.602902 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Sep 6 01:36:52.602953 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 6 01:36:52.603004 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 6 01:36:52.603057 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 6 01:36:52.603065 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 6 01:36:52.603072 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Sep 6 01:36:52.603079 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 6 01:36:52.603086 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 6 01:36:52.603092 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 6 01:36:52.603099 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 6 01:36:52.603105 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 6 01:36:52.603112 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Sep 6 01:36:52.603120 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 6 01:36:52.603170 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 6 01:36:52.603178 kernel: ata2.00: Features: NCQ-prio Sep 6 01:36:52.603184 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 6 01:36:52.603191 kernel: ata1.00: Features: NCQ-prio Sep 6 01:36:52.603198 kernel: ata2.00: configured for UDMA/133 Sep 6 01:36:52.603204 kernel: ata1.00: configured for UDMA/133 Sep 6 01:36:52.603211 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Sep 6 01:36:52.750725 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Sep 6 01:36:52.750801 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Sep 6 01:36:52.750859 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 6 01:36:52.750913 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 6 01:36:52.750964 kernel: hub 1-0:1.0: USB hub found Sep 6 01:36:52.751028 kernel: hub 1-0:1.0: 16 ports detected Sep 6 01:36:52.751083 kernel: hub 2-0:1.0: USB hub found Sep 6 01:36:52.751141 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Sep 6 01:36:52.751196 kernel: hub 2-0:1.0: 10 ports detected Sep 6 01:36:52.751251 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:52.767492 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 01:36:52.767502 kernel: ata2.00: Enabling discard_zeroes_data Sep 6 01:36:52.767510 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 6 01:36:52.767585 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 6 01:36:52.767684 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:52.767757 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 6 01:36:52.767819 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 6 01:36:52.767879 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 6 01:36:52.767937 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 6 01:36:52.767993 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 6 01:36:52.768049 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Sep 6 01:36:52.768105 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 6 01:36:52.768162 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 6 01:36:52.768221 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 6 01:36:52.768280 kernel: ata2.00: Enabling discard_zeroes_data Sep 6 
01:36:52.768287 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 01:36:52.768294 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 6 01:36:52.768403 kernel: ata2.00: Enabling discard_zeroes_data Sep 6 01:36:52.768411 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 6 01:36:52.768471 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:52.768524 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 01:36:52.768534 kernel: GPT:9289727 != 937703087 Sep 6 01:36:52.768541 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 01:36:52.768547 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Sep 6 01:36:52.768601 kernel: GPT:9289727 != 937703087 Sep 6 01:36:52.768609 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 01:36:52.768615 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 01:36:52.768622 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:52.768672 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 Sep 6 01:36:53.226809 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 6 01:36:53.226879 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 01:36:53.226887 kernel: hub 1-14:1.0: USB hub found Sep 6 01:36:53.226957 kernel: hub 1-14:1.0: 4 ports detected Sep 6 01:36:53.227016 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:53.300970 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 6 01:36:53.301066 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:53.356431 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (642) Sep 6 01:36:53.356444 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 01:36:53.356452 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 6 01:36:53.356526 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 6 01:36:53.356642 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 01:36:53.356650 kernel: port_module: 9 callbacks suppressed Sep 6 01:36:53.356658 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Sep 6 01:36:53.356737 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 01:36:53.356745 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 01:36:53.356752 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Sep 6 01:36:53.356809 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 01:36:53.356818 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 01:36:53.356824 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:53.356876 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 6 01:36:53.356884 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:53.356935 kernel: usbcore: registered new interface driver usbhid Sep 6 01:36:53.356943 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:53.356992 kernel: usbhid: USB HID core driver Sep 6 01:36:53.357000 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 6 01:36:53.357009 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Sep 6 01:36:53.357063 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found 
[8086:a375] (rev 10) Sep 6 01:36:53.357113 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 6 01:36:53.357183 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:53.357234 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Sep 6 01:36:53.357288 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 6 01:36:53.357297 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:53.357348 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 6 01:36:53.414587 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:36:53.435882 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Sep 6 01:36:52.797531 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 01:36:52.822476 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 01:36:52.835169 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 01:36:53.488599 disk-uuid[675]: Primary Header is updated. Sep 6 01:36:53.488599 disk-uuid[675]: Secondary Entries is updated. Sep 6 01:36:53.488599 disk-uuid[675]: Secondary Header is updated. Sep 6 01:36:52.847178 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 01:36:52.859198 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 01:36:52.868873 systemd[1]: Starting disk-uuid.service... Sep 6 01:36:54.032450 kernel: ata1.00: Enabling discard_zeroes_data Sep 6 01:36:54.054150 disk-uuid[676]: The operation has completed successfully. Sep 6 01:36:54.063608 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 6 01:36:54.094187 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 01:36:54.094230 systemd[1]: Finished disk-uuid.service. Sep 6 01:36:54.204459 kernel: audit: type=1130 audit(1757122614.108:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.204492 kernel: audit: type=1131 audit(1757122614.108:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.110059 systemd[1]: Starting verity-setup.service... Sep 6 01:36:54.235449 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 6 01:36:54.267026 systemd[1]: Found device dev-mapper-usr.device. Sep 6 01:36:54.276435 systemd[1]: Mounting sysusr-usr.mount... Sep 6 01:36:54.283592 systemd[1]: Finished verity-setup.service. Sep 6 01:36:54.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:36:54.353362 kernel: audit: type=1130 audit(1757122614.302:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.385362 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 01:36:54.385632 systemd[1]: Mounted sysusr-usr.mount. Sep 6 01:36:54.392670 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 01:36:54.393082 systemd[1]: Starting ignition-setup.service... Sep 6 01:36:54.491071 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 01:36:54.491088 kernel: BTRFS info (device sda6): using free space tree Sep 6 01:36:54.491096 kernel: BTRFS info (device sda6): has skinny extents Sep 6 01:36:54.491103 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 01:36:54.424821 systemd[1]: Starting parse-ip-for-networkd.service... Sep 6 01:36:54.502264 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 01:36:54.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.519068 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 01:36:54.630359 kernel: audit: type=1130 audit(1757122614.517:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.630374 kernel: audit: type=1130 audit(1757122614.578:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.519667 systemd[1]: Finished ignition-setup.service. Sep 6 01:36:54.638000 audit: BPF prog-id=9 op=LOAD Sep 6 01:36:54.580039 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 01:36:54.679463 kernel: audit: type=1334 audit(1757122614.638:24): prog-id=9 op=LOAD Sep 6 01:36:54.640215 systemd[1]: Starting systemd-networkd.service... Sep 6 01:36:54.739432 kernel: audit: type=1130 audit(1757122614.686:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.700317 ignition[866]: Ignition 2.14.0 Sep 6 01:36:54.678047 systemd-networkd[880]: lo: Link UP Sep 6 01:36:54.700321 ignition[866]: Stage: fetch-offline Sep 6 01:36:54.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:36:54.678049 systemd-networkd[880]: lo: Gained carrier Sep 6 01:36:54.915937 kernel: audit: type=1130 audit(1757122614.777:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.915952 kernel: audit: type=1130 audit(1757122614.838:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.915960 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Sep 6 01:36:54.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.700348 ignition[866]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:36:54.951626 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Sep 6 01:36:54.678395 systemd-networkd[880]: Enumeration completed Sep 6 01:36:54.700368 ignition[866]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 01:36:54.678466 systemd[1]: Started systemd-networkd.service. Sep 6 01:36:54.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.703060 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 01:36:54.997474 iscsid[902]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 01:36:54.997474 iscsid[902]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 6 01:36:54.997474 iscsid[902]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 01:36:54.997474 iscsid[902]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 01:36:54.997474 iscsid[902]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 01:36:54.997474 iscsid[902]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 01:36:54.997474 iscsid[902]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 01:36:55.163515 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Sep 6 01:36:55.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:55.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:36:54.679109 systemd-networkd[880]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:36:54.703126 ignition[866]: parsed url from cmdline: "" Sep 6 01:36:54.687537 systemd[1]: Reached target network.target.
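The iscsid warning above is asking for an initiator name before software iSCSI can discover or log in to targets; on this boot it is harmless because no software iSCSI is used. A minimal sketch of what /etc/iscsi/initiatorname.iscsi could contain, following the InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier] format quoted in the message (the IQN below is illustrative only, not this host's):

    # /etc/iscsi/initiatorname.iscsi (illustrative example)
    InitiatorName=iqn.2004-10.com.example:storage-node-1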
Sep 6 01:36:54.703128 ignition[866]: no config URL provided Sep 6 01:36:54.714128 unknown[866]: fetched base config from "system" Sep 6 01:36:54.703130 ignition[866]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 01:36:54.714132 unknown[866]: fetched user config from "system" Sep 6 01:36:54.703153 ignition[866]: parsing config with SHA512: df17c542c69e301a324917fc2e0662bf23482298846bd972154af3544f9bf21b6f3ec6b7d6c5a96ac4a2ce0e9a60fbf8b294dff0d3c659d0dd8160408ca56b65 Sep 6 01:36:54.749078 systemd[1]: Starting iscsiuio.service... Sep 6 01:36:54.714440 ignition[866]: fetch-offline: fetch-offline passed Sep 6 01:36:54.763651 systemd[1]: Started iscsiuio.service. Sep 6 01:36:54.714443 ignition[866]: POST message to Packet Timeline Sep 6 01:36:54.778653 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 01:36:54.714447 ignition[866]: POST Status error: resource requires networking Sep 6 01:36:54.839628 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 6 01:36:54.714484 ignition[866]: Ignition finished successfully Sep 6 01:36:54.840075 systemd[1]: Starting ignition-kargs.service... Sep 6 01:36:54.920494 ignition[891]: Ignition 2.14.0 Sep 6 01:36:54.916854 systemd-networkd[880]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:36:54.920497 ignition[891]: Stage: kargs Sep 6 01:36:54.929979 systemd[1]: Starting iscsid.service... Sep 6 01:36:54.920556 ignition[891]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:36:54.958602 systemd[1]: Started iscsid.service. Sep 6 01:36:54.920566 ignition[891]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 01:36:54.973952 systemd[1]: Starting dracut-initqueue.service... Sep 6 01:36:54.922988 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 01:36:54.987628 systemd[1]: Finished dracut-initqueue.service. Sep 6 01:36:54.924102 ignition[891]: kargs: kargs passed Sep 6 01:36:55.005575 systemd[1]: Reached target remote-fs-pre.target. Sep 6 01:36:54.924111 ignition[891]: POST message to Packet Timeline Sep 6 01:36:55.049531 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 01:36:54.924128 ignition[891]: GET https://metadata.packet.net/metadata: attempt #1 Sep 6 01:36:55.075604 systemd[1]: Reached target remote-fs.target. Sep 6 01:36:54.926535 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35481->[::1]:53: read: connection refused Sep 6 01:36:55.094689 systemd[1]: Starting dracut-pre-mount.service... Sep 6 01:36:55.126906 ignition[891]: GET https://metadata.packet.net/metadata: attempt #2 Sep 6 01:36:55.111651 systemd[1]: Finished dracut-pre-mount.service. Sep 6 01:36:55.127397 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49386->[::1]:53: read: connection refused Sep 6 01:36:55.143424 systemd-networkd[880]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 01:36:55.171550 systemd-networkd[880]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 6 01:36:55.200150 systemd-networkd[880]: enp1s0f1np1: Link UP Sep 6 01:36:55.200396 systemd-networkd[880]: enp1s0f1np1: Gained carrier Sep 6 01:36:55.210809 systemd-networkd[880]: enp1s0f0np0: Link UP Sep 6 01:36:55.211109 systemd-networkd[880]: eno2: Link UP Sep 6 01:36:55.528074 ignition[891]: GET https://metadata.packet.net/metadata: attempt #3 Sep 6 01:36:55.211402 systemd-networkd[880]: eno1: Link UP Sep 6 01:36:55.529169 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37201->[::1]:53: read: connection refused Sep 6 01:36:55.980763 systemd-networkd[880]: enp1s0f0np0: Gained carrier Sep 6 01:36:55.989629 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Sep 6 01:36:56.041677 systemd-networkd[880]: enp1s0f0np0: DHCPv4 address 139.178.94.47/31, gateway 139.178.94.46 acquired from 145.40.83.140 Sep 6 01:36:56.329697 ignition[891]: GET https://metadata.packet.net/metadata: attempt #4 Sep 6 01:36:56.331147 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57020->[::1]:53: read: connection refused Sep 6 01:36:56.468963 systemd-networkd[880]: enp1s0f1np1: Gained IPv6LL Sep 6 01:36:57.300967 systemd-networkd[880]: enp1s0f0np0: Gained IPv6LL Sep 6 01:36:57.932513 ignition[891]: GET https://metadata.packet.net/metadata: attempt #5 Sep 6 01:36:57.933748 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40857->[::1]:53: read: connection refused Sep 6 01:37:01.136432 ignition[891]: GET https://metadata.packet.net/metadata: attempt #6 Sep 6 01:37:02.225092 ignition[891]: GET result: OK Sep 6 01:37:02.612660 ignition[891]: Ignition finished successfully Sep 6 01:37:02.615431 systemd[1]: Finished ignition-kargs.service. Sep 6 01:37:02.708066 kernel: kauditd_printk_skb: 3 callbacks suppressed Sep 6 01:37:02.708097 kernel: audit: type=1130 audit(1757122622.627:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:02.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:02.637257 ignition[919]: Ignition 2.14.0 Sep 6 01:37:02.630805 systemd[1]: Starting ignition-disks.service... Sep 6 01:37:02.637261 ignition[919]: Stage: disks Sep 6 01:37:02.637318 ignition[919]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:37:02.637327 ignition[919]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 01:37:02.638713 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 01:37:02.640207 ignition[919]: disks: disks passed Sep 6 01:37:02.640210 ignition[919]: POST message to Packet Timeline Sep 6 01:37:02.640220 ignition[919]: GET https://metadata.packet.net/metadata: attempt #1 Sep 6 01:37:03.723417 ignition[919]: GET result: OK Sep 6 01:37:04.528291 ignition[919]: Ignition finished successfully Sep 6 01:37:04.531641 systemd[1]: Finished ignition-disks.service. 
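Attempts #1 through #5 above fail because metadata.packet.net cannot be resolved until the interface has carrier and a DHCPv4 lease; attempt #6 succeeds once enp1s0f0np0 is up. A rough Python sketch of that retry-with-backoff pattern against the same endpoint (illustration only, not Ignition's actual implementation; the timeout and backoff values are arbitrary):

    # Sketch: keep retrying the metadata endpoint until the network is ready,
    # roughly like the repeated GET attempts logged by ignition[891] above.
    import time
    import urllib.error
    import urllib.request

    METADATA_URL = "https://metadata.packet.net/metadata"

    def fetch_metadata(max_attempts: int = 6, delay: float = 2.0) -> bytes:
        for attempt in range(1, max_attempts + 1):
            try:
                with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                print(f"GET {METADATA_URL}: attempt #{attempt} failed: {err}")
                time.sleep(delay)
                delay *= 2  # back off while DNS / the DHCP lease come up
        raise RuntimeError("metadata endpoint unreachable after retries")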
Sep 6 01:37:04.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:04.544177 systemd[1]: Reached target initrd-root-device.target. Sep 6 01:37:04.609677 kernel: audit: type=1130 audit(1757122624.542:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:04.609643 systemd[1]: Reached target local-fs-pre.target. Sep 6 01:37:04.624603 systemd[1]: Reached target local-fs.target. Sep 6 01:37:04.638544 systemd[1]: Reached target sysinit.target. Sep 6 01:37:04.638655 systemd[1]: Reached target basic.target. Sep 6 01:37:04.659322 systemd[1]: Starting systemd-fsck-root.service... Sep 6 01:37:04.691744 systemd-fsck[933]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 6 01:37:04.702967 systemd[1]: Finished systemd-fsck-root.service. Sep 6 01:37:04.796785 kernel: audit: type=1130 audit(1757122624.710:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:04.796801 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 01:37:04.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:04.713485 systemd[1]: Mounting sysroot.mount... Sep 6 01:37:04.804095 systemd[1]: Mounted sysroot.mount. Sep 6 01:37:04.818714 systemd[1]: Reached target initrd-root-fs.target. Sep 6 01:37:04.825350 systemd[1]: Mounting sysroot-usr.mount... Sep 6 01:37:04.839259 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 6 01:37:04.860462 systemd[1]: Starting flatcar-static-network.service... Sep 6 01:37:04.874711 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 01:37:04.874803 systemd[1]: Reached target ignition-diskful.target. Sep 6 01:37:04.894850 systemd[1]: Mounted sysroot-usr.mount. Sep 6 01:37:04.917690 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 01:37:04.930325 systemd[1]: Starting initrd-setup-root.service... Sep 6 01:37:05.067234 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (945) Sep 6 01:37:05.067252 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 01:37:05.067261 kernel: BTRFS info (device sda6): using free space tree Sep 6 01:37:05.067268 kernel: BTRFS info (device sda6): has skinny extents Sep 6 01:37:05.067276 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 01:37:04.998616 systemd[1]: Finished initrd-setup-root.service. Sep 6 01:37:05.130772 kernel: audit: type=1130 audit(1757122625.075:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:05.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:37:05.130813 coreos-metadata[942]: Sep 06 01:37:05.006 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 6 01:37:05.153617 coreos-metadata[941]: Sep 06 01:37:05.007 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 6 01:37:05.174595 initrd-setup-root[952]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 01:37:05.077692 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 01:37:05.199530 initrd-setup-root[960]: cut: /sysroot/etc/group: No such file or directory Sep 6 01:37:05.273597 kernel: audit: type=1130 audit(1757122625.206:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:05.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:05.139948 systemd[1]: Starting ignition-mount.service... Sep 6 01:37:05.280654 initrd-setup-root[968]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 01:37:05.161943 systemd[1]: Starting sysroot-boot.service... Sep 6 01:37:05.297617 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 01:37:05.183725 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Sep 6 01:37:05.317611 ignition[1015]: INFO : Ignition 2.14.0 Sep 6 01:37:05.317611 ignition[1015]: INFO : Stage: mount Sep 6 01:37:05.317611 ignition[1015]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:37:05.317611 ignition[1015]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 01:37:05.317611 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 01:37:05.317611 ignition[1015]: INFO : mount: mount passed Sep 6 01:37:05.317611 ignition[1015]: INFO : POST message to Packet Timeline Sep 6 01:37:05.317611 ignition[1015]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 6 01:37:05.183965 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Sep 6 01:37:05.198303 systemd[1]: Finished sysroot-boot.service. Sep 6 01:37:06.041640 coreos-metadata[941]: Sep 06 01:37:06.041 INFO Fetch successful Sep 6 01:37:06.050640 coreos-metadata[942]: Sep 06 01:37:06.045 INFO Fetch successful Sep 6 01:37:06.119605 coreos-metadata[941]: Sep 06 01:37:06.119 INFO wrote hostname ci-3510.3.8-n-02071fe470 to /sysroot/etc/hostname Sep 6 01:37:06.120027 systemd[1]: Finished flatcar-metadata-hostname.service. Sep 6 01:37:06.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:06.141707 systemd[1]: flatcar-static-network.service: Deactivated successfully. Sep 6 01:37:06.236479 kernel: audit: type=1130 audit(1757122626.140:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:06.236493 kernel: audit: type=1130 audit(1757122626.205:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:37:06.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:06.141747 systemd[1]: Finished flatcar-static-network.service. Sep 6 01:37:06.330603 kernel: audit: type=1131 audit(1757122626.205:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:06.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:06.330642 ignition[1015]: INFO : GET result: OK Sep 6 01:37:07.021478 ignition[1015]: INFO : Ignition finished successfully Sep 6 01:37:07.024452 systemd[1]: Finished ignition-mount.service. Sep 6 01:37:07.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:07.040550 systemd[1]: Starting ignition-files.service... Sep 6 01:37:07.111452 kernel: audit: type=1130 audit(1757122627.037:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:07.106305 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 01:37:07.169076 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1032) Sep 6 01:37:07.169091 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 6 01:37:07.169099 kernel: BTRFS info (device sda6): using free space tree Sep 6 01:37:07.192238 kernel: BTRFS info (device sda6): has skinny extents Sep 6 01:37:07.241511 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 6 01:37:07.242925 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Sep 6 01:37:07.259488 ignition[1051]: INFO : Ignition 2.14.0 Sep 6 01:37:07.259488 ignition[1051]: INFO : Stage: files Sep 6 01:37:07.259488 ignition[1051]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:37:07.259488 ignition[1051]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 01:37:07.259488 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 01:37:07.259488 ignition[1051]: DEBUG : files: compiled without relabeling support, skipping Sep 6 01:37:07.262615 unknown[1051]: wrote ssh authorized keys file for user: core Sep 6 01:37:07.335543 ignition[1051]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 01:37:07.335543 ignition[1051]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 01:37:07.335543 ignition[1051]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 01:37:07.335543 ignition[1051]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 01:37:07.335543 ignition[1051]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 01:37:07.335543 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 01:37:07.335543 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 6 01:37:07.335543 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 01:37:07.335543 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 6 01:37:08.742101 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 6 01:37:09.505922 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 6 01:37:09.529711 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 01:37:09.529711 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 6 01:37:09.968386 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Sep 6 01:37:10.221765 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 6 01:37:10.221765 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pod.yaml" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition Sep 6 01:37:10.254618 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem430512198" Sep 6 01:37:10.254618 ignition[1051]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem430512198": device or resource busy Sep 6 01:37:10.234569 systemd[1]: mnt-oem430512198.mount: Deactivated successfully. 
Sep 6 01:37:10.521662 ignition[1051]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem430512198", trying btrfs: device or resource busy Sep 6 01:37:10.521662 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem430512198" Sep 6 01:37:10.521662 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem430512198" Sep 6 01:37:10.521662 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem430512198" Sep 6 01:37:10.521662 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem430512198" Sep 6 01:37:10.521662 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Sep 6 01:37:10.521662 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 01:37:10.521662 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 6 01:37:10.677680 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET result: OK Sep 6 01:37:11.262052 ignition[1051]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 6 01:37:11.262052 ignition[1051]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 6 01:37:11.262052 ignition[1051]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 6 01:37:11.262052 ignition[1051]: INFO : files: op(12): [started] processing unit "packet-phone-home.service" Sep 6 01:37:11.262052 ignition[1051]: INFO : files: op(12): [finished] processing unit "packet-phone-home.service" Sep 6 01:37:11.262052 ignition[1051]: INFO : files: op(13): [started] processing unit "containerd.service" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(13): [finished] processing unit "containerd.service" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(15): [started] processing unit "prepare-helm.service" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(17): [finished] setting 
preset to enabled for "coreos-metadata-sshkeys@.service " Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(18): [started] setting preset to enabled for "packet-phone-home.service" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(18): [finished] setting preset to enabled for "packet-phone-home.service" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 01:37:11.345631 ignition[1051]: INFO : files: files passed Sep 6 01:37:11.345631 ignition[1051]: INFO : POST message to Packet Timeline Sep 6 01:37:11.345631 ignition[1051]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 6 01:37:12.508339 ignition[1051]: INFO : GET result: OK Sep 6 01:37:12.939986 ignition[1051]: INFO : Ignition finished successfully Sep 6 01:37:12.942069 systemd[1]: Finished ignition-files.service. Sep 6 01:37:12.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:12.962481 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 01:37:13.034605 kernel: audit: type=1130 audit(1757122632.955:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.024646 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 01:37:13.058612 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 01:37:13.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.024954 systemd[1]: Starting ignition-quench.service... Sep 6 01:37:13.250293 kernel: audit: type=1130 audit(1757122633.068:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.250310 kernel: audit: type=1130 audit(1757122633.135:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.250318 kernel: audit: type=1131 audit(1757122633.135:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:37:13.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.041721 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 01:37:13.069872 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 01:37:13.069943 systemd[1]: Finished ignition-quench.service. Sep 6 01:37:13.406060 kernel: audit: type=1130 audit(1757122633.290:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.406075 kernel: audit: type=1131 audit(1757122633.290:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.136661 systemd[1]: Reached target ignition-complete.target. Sep 6 01:37:13.258979 systemd[1]: Starting initrd-parse-etc.service... Sep 6 01:37:13.280309 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 01:37:13.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.280381 systemd[1]: Finished initrd-parse-etc.service. Sep 6 01:37:13.525602 kernel: audit: type=1130 audit(1757122633.452:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.291642 systemd[1]: Reached target initrd-fs.target. Sep 6 01:37:13.414593 systemd[1]: Reached target initrd.target. Sep 6 01:37:13.414651 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 01:37:13.415009 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 01:37:13.435708 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 01:37:13.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.453958 systemd[1]: Starting initrd-cleanup.service... Sep 6 01:37:13.675554 kernel: audit: type=1131 audit(1757122633.591:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.521284 systemd[1]: Stopped target nss-lookup.target. Sep 6 01:37:13.534614 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 01:37:13.550604 systemd[1]: Stopped target timers.target. Sep 6 01:37:13.574679 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 01:37:13.574806 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 01:37:13.592987 systemd[1]: Stopped target initrd.target. Sep 6 01:37:13.668618 systemd[1]: Stopped target basic.target. 
Sep 6 01:37:13.675653 systemd[1]: Stopped target ignition-complete.target. Sep 6 01:37:13.696685 systemd[1]: Stopped target ignition-diskful.target. Sep 6 01:37:13.712660 systemd[1]: Stopped target initrd-root-device.target. Sep 6 01:37:13.728730 systemd[1]: Stopped target remote-fs.target. Sep 6 01:37:13.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.744867 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 01:37:13.928638 kernel: audit: type=1131 audit(1757122633.839:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.759998 systemd[1]: Stopped target sysinit.target. Sep 6 01:37:13.998420 kernel: audit: type=1131 audit(1757122633.936:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.775993 systemd[1]: Stopped target local-fs.target. Sep 6 01:37:14.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.791969 systemd[1]: Stopped target local-fs-pre.target. Sep 6 01:37:13.808963 systemd[1]: Stopped target swap.target. Sep 6 01:37:13.824841 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 01:37:13.825208 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 01:37:13.841213 systemd[1]: Stopped target cryptsetup.target. Sep 6 01:37:14.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.918589 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 01:37:14.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.918663 systemd[1]: Stopped dracut-initqueue.service. Sep 6 01:37:14.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.937750 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Sep 6 01:37:14.147572 ignition[1100]: INFO : Ignition 2.14.0 Sep 6 01:37:14.147572 ignition[1100]: INFO : Stage: umount Sep 6 01:37:14.147572 ignition[1100]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 6 01:37:14.147572 ignition[1100]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Sep 6 01:37:14.147572 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 6 01:37:14.147572 ignition[1100]: INFO : umount: umount passed Sep 6 01:37:14.147572 ignition[1100]: INFO : POST message to Packet Timeline Sep 6 01:37:14.147572 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 6 01:37:14.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:14.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:14.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:14.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:13.937827 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 01:37:14.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:14.287688 iscsid[902]: iscsid shutting down. Sep 6 01:37:14.006778 systemd[1]: Stopped target paths.target. Sep 6 01:37:14.020616 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 01:37:14.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:14.024584 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 01:37:14.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:14.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:14.028695 systemd[1]: Stopped target slices.target. Sep 6 01:37:14.049642 systemd[1]: Stopped target sockets.target. Sep 6 01:37:14.067672 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 01:37:14.067803 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 01:37:14.086889 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 01:37:14.087051 systemd[1]: Stopped ignition-files.service. Sep 6 01:37:14.104106 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 6 01:37:14.104497 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 6 01:37:14.123123 systemd[1]: Stopping ignition-mount.service... 
Sep 6 01:37:14.136777 systemd[1]: Stopping iscsid.service... Sep 6 01:37:14.155514 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 01:37:14.155701 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 01:37:14.176964 systemd[1]: Stopping sysroot-boot.service... Sep 6 01:37:14.194587 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 01:37:14.194947 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 01:37:14.222094 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 01:37:14.222473 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 01:37:14.245470 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 6 01:37:14.245807 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 01:37:14.245856 systemd[1]: Stopped iscsid.service. Sep 6 01:37:14.262902 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 01:37:14.262960 systemd[1]: Stopped sysroot-boot.service. Sep 6 01:37:14.280002 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 01:37:14.280086 systemd[1]: Closed iscsid.socket. Sep 6 01:37:14.294763 systemd[1]: Stopping iscsiuio.service... Sep 6 01:37:14.309997 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 01:37:14.310169 systemd[1]: Stopped iscsiuio.service. Sep 6 01:37:14.324564 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 01:37:14.324796 systemd[1]: Finished initrd-cleanup.service. Sep 6 01:37:14.344655 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 01:37:14.344743 systemd[1]: Closed iscsiuio.socket. Sep 6 01:37:15.716652 ignition[1100]: INFO : GET result: OK Sep 6 01:37:16.174313 ignition[1100]: INFO : Ignition finished successfully Sep 6 01:37:16.177114 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 01:37:16.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.177371 systemd[1]: Stopped ignition-mount.service. Sep 6 01:37:16.190928 systemd[1]: Stopped target network.target. Sep 6 01:37:16.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.206622 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 01:37:16.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.206762 systemd[1]: Stopped ignition-disks.service. Sep 6 01:37:16.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.221691 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 01:37:16.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.221822 systemd[1]: Stopped ignition-kargs.service. Sep 6 01:37:16.237793 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 01:37:16.237947 systemd[1]: Stopped ignition-setup.service. 
Sep 6 01:37:16.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.255804 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 01:37:16.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.337000 audit: BPF prog-id=6 op=UNLOAD Sep 6 01:37:16.255957 systemd[1]: Stopped initrd-setup-root.service. Sep 6 01:37:16.272184 systemd[1]: Stopping systemd-networkd.service... Sep 6 01:37:16.283565 systemd-networkd[880]: enp1s0f0np0: DHCPv6 lease lost Sep 6 01:37:16.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.289926 systemd[1]: Stopping systemd-resolved.service... Sep 6 01:37:16.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.291573 systemd-networkd[880]: enp1s0f1np1: DHCPv6 lease lost Sep 6 01:37:16.411000 audit: BPF prog-id=9 op=UNLOAD Sep 6 01:37:16.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.304201 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 01:37:16.304471 systemd[1]: Stopped systemd-resolved.service. Sep 6 01:37:16.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.322198 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 01:37:16.322474 systemd[1]: Stopped systemd-networkd.service. Sep 6 01:37:16.337108 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 01:37:16.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.337206 systemd[1]: Closed systemd-networkd.socket. Sep 6 01:37:16.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.357302 systemd[1]: Stopping network-cleanup.service... Sep 6 01:37:16.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.369586 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 01:37:16.369827 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 01:37:16.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.386843 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Sep 6 01:37:16.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.386983 systemd[1]: Stopped systemd-sysctl.service. Sep 6 01:37:16.405040 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 6 01:37:16.405188 systemd[1]: Stopped systemd-modules-load.service. Sep 6 01:37:16.421035 systemd[1]: Stopping systemd-udevd.service... Sep 6 01:37:16.440714 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 01:37:16.442291 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 01:37:16.442659 systemd[1]: Stopped systemd-udevd.service. Sep 6 01:37:16.456193 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 01:37:16.456331 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 01:37:16.469715 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 01:37:16.469822 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 01:37:16.486774 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 01:37:16.486908 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 01:37:16.502925 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 01:37:16.503082 systemd[1]: Stopped dracut-cmdline.service. Sep 6 01:37:16.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:16.519898 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 01:37:16.520049 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 01:37:16.753000 audit: BPF prog-id=5 op=UNLOAD Sep 6 01:37:16.753000 audit: BPF prog-id=4 op=UNLOAD Sep 6 01:37:16.753000 audit: BPF prog-id=3 op=UNLOAD Sep 6 01:37:16.753000 audit: BPF prog-id=8 op=UNLOAD Sep 6 01:37:16.753000 audit: BPF prog-id=7 op=UNLOAD Sep 6 01:37:16.536776 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 01:37:16.824709 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). Sep 6 01:37:16.824755 systemd-journald[268]: Failed to send stream file descriptor to service manager: Connection refused Sep 6 01:37:16.550435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 01:37:16.550463 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 01:37:16.566868 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 01:37:16.566923 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 01:37:16.714125 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 01:37:16.714425 systemd[1]: Stopped network-cleanup.service. Sep 6 01:37:16.724961 systemd[1]: Reached target initrd-switch-root.target. Sep 6 01:37:16.741431 systemd[1]: Starting initrd-switch-root.service... Sep 6 01:37:16.751851 systemd[1]: Switching root. Sep 6 01:37:16.824998 systemd-journald[268]: Journal stopped Sep 6 01:37:20.552023 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 01:37:20.552038 kernel: SELinux: Class anon_inode not defined in policy. 
Sep 6 01:37:20.552046 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 01:37:20.552052 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 01:37:20.552057 kernel: SELinux: policy capability open_perms=1 Sep 6 01:37:20.552063 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 01:37:20.552069 kernel: SELinux: policy capability always_check_network=0 Sep 6 01:37:20.552074 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 01:37:20.552080 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 01:37:20.552086 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 01:37:20.552091 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 01:37:20.552097 systemd[1]: Successfully loaded SELinux policy in 320.747ms. Sep 6 01:37:20.552104 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.336ms. Sep 6 01:37:20.552111 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 01:37:20.552119 systemd[1]: Detected architecture x86-64. Sep 6 01:37:20.552125 systemd[1]: Detected first boot. Sep 6 01:37:20.552135 systemd[1]: Hostname set to . Sep 6 01:37:20.552145 systemd[1]: Initializing machine ID from random generator. Sep 6 01:37:20.552152 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 01:37:20.552157 systemd[1]: Populated /etc with preset unit settings. Sep 6 01:37:20.552165 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:37:20.552174 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:37:20.552181 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:37:20.552188 systemd[1]: Queued start job for default target multi-user.target. Sep 6 01:37:20.552194 systemd[1]: Unnecessary job was removed for dev-sda6.device. Sep 6 01:37:20.552200 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 01:37:20.552207 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 01:37:20.552214 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Sep 6 01:37:20.552220 systemd[1]: Created slice system-getty.slice. Sep 6 01:37:20.552226 systemd[1]: Created slice system-modprobe.slice. Sep 6 01:37:20.552232 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 01:37:20.552238 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 01:37:20.552244 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 01:37:20.552250 systemd[1]: Created slice user.slice. Sep 6 01:37:20.552256 systemd[1]: Started systemd-ask-password-console.path. Sep 6 01:37:20.552262 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 01:37:20.552269 systemd[1]: Set up automount boot.automount. Sep 6 01:37:20.552275 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 01:37:20.552281 systemd[1]: Reached target integritysetup.target. 
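The two locksmithd.service warnings above name their own remedy: on systemd 252 the cgroup v2 directives CPUWeight= and MemoryMax= replace CPUShares= and MemoryLimit=. A minimal drop-in sketch, with purely illustrative values since the unit's real numbers do not appear in this log:

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf (illustrative only)
    [Service]
    # replaces CPUShares= (relative weight, default 100)
    CPUWeight=100
    # replaces MemoryLimit= (hard memory cap)
    MemoryMax=128M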
Sep 6 01:37:20.552288 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 01:37:20.552298 systemd[1]: Reached target remote-fs.target. Sep 6 01:37:20.552308 systemd[1]: Reached target slices.target. Sep 6 01:37:20.552316 systemd[1]: Reached target swap.target. Sep 6 01:37:20.552324 systemd[1]: Reached target torcx.target. Sep 6 01:37:20.552333 systemd[1]: Reached target veritysetup.target. Sep 6 01:37:20.552339 systemd[1]: Listening on systemd-coredump.socket. Sep 6 01:37:20.552346 systemd[1]: Listening on systemd-initctl.socket. Sep 6 01:37:20.552352 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 01:37:20.552381 kernel: kauditd_printk_skb: 49 callbacks suppressed Sep 6 01:37:20.552388 kernel: audit: type=1400 audit(1757122639.804:92): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 01:37:20.552395 kernel: audit: type=1335 audit(1757122639.804:93): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 6 01:37:20.552418 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 01:37:20.552425 systemd[1]: Listening on systemd-journald.socket. Sep 6 01:37:20.552448 systemd[1]: Listening on systemd-networkd.socket. Sep 6 01:37:20.552454 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 01:37:20.552465 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 01:37:20.552477 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 01:37:20.552484 systemd[1]: Mounting dev-hugepages.mount... Sep 6 01:37:20.552490 systemd[1]: Mounting dev-mqueue.mount... Sep 6 01:37:20.552496 systemd[1]: Mounting media.mount... Sep 6 01:37:20.552503 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 01:37:20.552509 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 01:37:20.552516 systemd[1]: Mounting sys-kernel-tracing.mount... Sep 6 01:37:20.552522 systemd[1]: Mounting tmp.mount... Sep 6 01:37:20.552529 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 01:37:20.552536 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:37:20.552543 systemd[1]: Starting kmod-static-nodes.service... Sep 6 01:37:20.552549 systemd[1]: Starting modprobe@configfs.service... Sep 6 01:37:20.552556 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:37:20.552562 systemd[1]: Starting modprobe@drm.service... Sep 6 01:37:20.552569 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:37:20.552575 systemd[1]: Starting modprobe@fuse.service... Sep 6 01:37:20.552581 kernel: fuse: init (API version 7.34) Sep 6 01:37:20.552587 systemd[1]: Starting modprobe@loop.service... Sep 6 01:37:20.552594 kernel: loop: module loaded Sep 6 01:37:20.552600 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 01:37:20.552607 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 6 01:37:20.552613 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Sep 6 01:37:20.552620 systemd[1]: Starting systemd-journald.service... Sep 6 01:37:20.552626 systemd[1]: Starting systemd-modules-load.service... 
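The systemd-journald.service warning just above about an IP firewall comes from per-unit IPAddressAllow=/IPAddressDeny= sandboxing, which needs the BPF cgroup firewalling that this build lacks (note -BPF_FRAMEWORK in the feature string earlier). A sketch of the kind of stanza that triggers the message, assuming the stock upstream hardening rather than anything read from this log:

    # excerpt of the sort of unit hardening that requires BPF/cgroup firewalling
    [Service]
    IPAddressDeny=any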
Sep 6 01:37:20.552632 kernel: audit: type=1305 audit(1757122640.548:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 01:37:20.552642 systemd-journald[1295]: Journal started Sep 6 01:37:20.552668 systemd-journald[1295]: Runtime Journal (/run/log/journal/0c98befd35a7433caed258862434249c) is 8.0M, max 640.0M, 632.0M free. Sep 6 01:37:19.804000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 01:37:19.804000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Sep 6 01:37:20.548000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 01:37:20.548000 audit[1295]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffde30e9b80 a2=4000 a3=7ffde30e9c1c items=0 ppid=1 pid=1295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:37:20.548000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 01:37:20.598425 kernel: audit: type=1300 audit(1757122640.548:94): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffde30e9b80 a2=4000 a3=7ffde30e9c1c items=0 ppid=1 pid=1295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:37:20.598458 kernel: audit: type=1327 audit(1757122640.548:94): proctitle="/usr/lib/systemd/systemd-journald" Sep 6 01:37:20.712568 systemd[1]: Starting systemd-network-generator.service... Sep 6 01:37:20.739548 systemd[1]: Starting systemd-remount-fs.service... Sep 6 01:37:20.765398 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 01:37:20.808409 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 01:37:20.827547 systemd[1]: Started systemd-journald.service. Sep 6 01:37:20.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:20.836142 systemd[1]: Mounted dev-hugepages.mount. Sep 6 01:37:20.884575 kernel: audit: type=1130 audit(1757122640.834:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:20.890647 systemd[1]: Mounted dev-mqueue.mount. Sep 6 01:37:20.897644 systemd[1]: Mounted media.mount. Sep 6 01:37:20.904660 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 01:37:20.913632 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 01:37:20.922601 systemd[1]: Mounted tmp.mount. Sep 6 01:37:20.929735 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 01:37:20.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:37:20.938834 systemd[1]: Finished kmod-static-nodes.service. Sep 6 01:37:20.986385 kernel: audit: type=1130 audit(1757122640.937:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:20.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:20.994694 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 01:37:20.994777 systemd[1]: Finished modprobe@configfs.service. Sep 6 01:37:21.043542 kernel: audit: type=1130 audit(1757122640.993:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.051696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:37:21.051773 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:37:21.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.102403 kernel: audit: type=1130 audit(1757122641.050:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.102424 kernel: audit: type=1131 audit(1757122641.050:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.161732 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:37:21.161807 systemd[1]: Finished modprobe@drm.service. Sep 6 01:37:21.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.170778 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:37:21.170856 systemd[1]: Finished modprobe@efi_pstore.service. 
Sep 6 01:37:21.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.179709 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 01:37:21.179784 systemd[1]: Finished modprobe@fuse.service. Sep 6 01:37:21.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.188750 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:37:21.188831 systemd[1]: Finished modprobe@loop.service. Sep 6 01:37:21.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.197751 systemd[1]: Finished systemd-modules-load.service. Sep 6 01:37:21.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.206724 systemd[1]: Finished systemd-network-generator.service. Sep 6 01:37:21.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.215774 systemd[1]: Finished systemd-remount-fs.service. Sep 6 01:37:21.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.224763 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 01:37:21.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.233886 systemd[1]: Reached target network-pre.target. Sep 6 01:37:21.244110 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 01:37:21.253092 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 01:37:21.260589 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 01:37:21.261620 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 01:37:21.269093 systemd[1]: Starting systemd-journal-flush.service... 
Sep 6 01:37:21.273094 systemd-journald[1295]: Time spent on flushing to /var/log/journal/0c98befd35a7433caed258862434249c is 14.507ms for 1548 entries. Sep 6 01:37:21.273094 systemd-journald[1295]: System Journal (/var/log/journal/0c98befd35a7433caed258862434249c) is 8.0M, max 195.6M, 187.6M free. Sep 6 01:37:21.320361 systemd-journald[1295]: Received client request to flush runtime journal. Sep 6 01:37:21.286462 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:37:21.287123 systemd[1]: Starting systemd-random-seed.service... Sep 6 01:37:21.303507 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:37:21.304116 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:37:21.311005 systemd[1]: Starting systemd-sysusers.service... Sep 6 01:37:21.318059 systemd[1]: Starting systemd-udev-settle.service... Sep 6 01:37:21.325763 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 01:37:21.333548 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 01:37:21.341624 systemd[1]: Finished systemd-journal-flush.service. Sep 6 01:37:21.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.349640 systemd[1]: Finished systemd-random-seed.service. Sep 6 01:37:21.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.357603 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:37:21.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.365590 systemd[1]: Finished systemd-sysusers.service. Sep 6 01:37:21.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.374519 systemd[1]: Reached target first-boot-complete.target. Sep 6 01:37:21.383144 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 01:37:21.391680 udevadm[1322]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 01:37:21.400626 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 01:37:21.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.568158 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 01:37:21.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.577342 systemd[1]: Starting systemd-udevd.service... Sep 6 01:37:21.589028 systemd-udevd[1330]: Using default interface naming scheme 'v252'. Sep 6 01:37:21.607905 systemd[1]: Started systemd-udevd.service. 
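The journal sizing above (runtime journal at 8.0M of a 640.0M cap in /run, system journal at 8.0M of a 195.6M cap in /var/log/journal) is journald's automatic sizing relative to the backing filesystem, and systemd-journal-flush.service is what moves the early-boot runtime journal into the persistent location. The caps can also be pinned explicitly; the values below are illustrative, not this machine's configuration:

    # /etc/systemd/journald.conf.d/10-size.conf (illustrative only)
    [Journal]
    SystemMaxUse=200M
    RuntimeMaxUse=64M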
Sep 6 01:37:21.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:21.620762 systemd[1]: Found device dev-ttyS1.device. Sep 6 01:37:21.669702 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Sep 6 01:37:21.669780 kernel: mousedev: PS/2 mouse device common for all mice Sep 6 01:37:21.669803 kernel: ACPI: button: Sleep Button [SLPB] Sep 6 01:37:21.689707 systemd[1]: Starting systemd-networkd.service... Sep 6 01:37:21.712111 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:21.712672 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 6 01:37:21.756574 kernel: ACPI: button: Power Button [PWRF] Sep 6 01:37:21.757362 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:21.776755 systemd[1]: Starting systemd-userdbd.service... Sep 6 01:37:21.807369 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:21.860477 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:21.860618 kernel: IPMI message handler: version 39.2 Sep 6 01:37:21.785000 audit[1389]: AVC avc: denied { confidentiality } for pid=1389 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Sep 6 01:37:21.863982 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 01:37:21.889558 systemd[1]: Started systemd-userdbd.service. Sep 6 01:37:21.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 01:37:21.785000 audit[1389]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7fdbe3123010 a1=4d9cc a2=7fdbe4ddbbc5 a3=5 items=42 ppid=1330 pid=1389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:37:21.785000 audit: CWD cwd="/" Sep 6 01:37:21.785000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=1 name=(null) inode=9006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=2 name=(null) inode=9006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=3 name=(null) inode=9007 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=4 name=(null) inode=9006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=5 name=(null) inode=9008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=6 name=(null) inode=9006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=7 name=(null) inode=9009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=8 name=(null) inode=9009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=9 name=(null) inode=9010 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=10 name=(null) inode=9009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=11 name=(null) inode=9011 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=12 name=(null) inode=9009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=13 name=(null) inode=9012 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=14 name=(null) inode=9009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=15 name=(null) inode=9013 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=16 name=(null) inode=9009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=17 name=(null) inode=9014 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=18 name=(null) inode=9006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=19 name=(null) inode=9015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=20 name=(null) inode=9015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=21 name=(null) inode=9016 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=22 name=(null) inode=9015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=23 name=(null) inode=9017 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=24 name=(null) inode=9015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=25 name=(null) inode=9018 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=26 name=(null) inode=9015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=27 name=(null) inode=9019 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=28 name=(null) inode=9015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=29 name=(null) inode=9020 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=30 name=(null) inode=9006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH 
item=31 name=(null) inode=9021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=32 name=(null) inode=9021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=33 name=(null) inode=9022 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=34 name=(null) inode=9021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=35 name=(null) inode=9023 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=36 name=(null) inode=9021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=37 name=(null) inode=9024 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=38 name=(null) inode=9021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=39 name=(null) inode=9025 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=40 name=(null) inode=9021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PATH item=41 name=(null) inode=9026 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 01:37:21.785000 audit: PROCTITLE proctitle="(udev-worker)" Sep 6 01:37:21.919365 kernel: ipmi device interface Sep 6 01:37:21.963609 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Sep 6 01:37:21.966600 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Sep 6 01:37:21.966692 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Sep 6 01:37:21.986980 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Sep 6 01:37:22.053088 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Sep 6 01:37:22.053208 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:22.095912 kernel: ipmi_si: IPMI System Interface driver Sep 6 01:37:22.095941 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Sep 6 01:37:22.138674 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Sep 6 01:37:22.138691 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Sep 6 01:37:22.138705 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Sep 6 01:37:22.281504 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:22.369265 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 
0x0ca2] regsize 1 spacing 1 irq 0 Sep 6 01:37:22.369365 kernel: iTCO_vendor_support: vendor-support=0 Sep 6 01:37:22.369380 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Sep 6 01:37:22.369444 kernel: ipmi_si: Adding ACPI-specified kcs state machine Sep 6 01:37:22.369459 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:22.369531 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Sep 6 01:37:22.369542 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Sep 6 01:37:22.369611 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Sep 6 01:37:22.369671 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:22.369738 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Sep 6 01:37:22.283860 systemd-networkd[1390]: bond0: netdev ready Sep 6 01:37:22.286919 systemd-networkd[1390]: lo: Link UP Sep 6 01:37:22.286921 systemd-networkd[1390]: lo: Gained carrier Sep 6 01:37:22.287536 systemd-networkd[1390]: Enumeration completed Sep 6 01:37:22.287644 systemd[1]: Started systemd-networkd.service. Sep 6 01:37:22.287902 systemd-networkd[1390]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Sep 6 01:37:22.288607 systemd-networkd[1390]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:f8:2d.network. Sep 6 01:37:22.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:22.447382 kernel: intel_rapl_common: Found RAPL domain package Sep 6 01:37:22.447455 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Sep 6 01:37:22.447767 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:22.453368 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Sep 6 01:37:22.455359 kernel: intel_rapl_common: Found RAPL domain core Sep 6 01:37:22.455380 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Sep 6 01:37:22.455879 systemd-networkd[1390]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:f8:2c.network. 
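These systemd-networkd units are the Packet bonded-NIC layout: one netdev for bond0 plus a .network file per physical port, matched by MAC address (the 0c:42:a1:97:f8:2c/2d addresses in the file names above). The files themselves are not reproduced in the log; the sketch below is an assumption of their typical shape, with the LACP mode inferred only from the kernel's 802.3ad warnings that follow:

    # 05-bond0.netdev (illustrative only)
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad

    # 10-0c:42:a1:97:f8:2c.network (illustrative only) - enslave the port with this MAC
    [Match]
    MACAddress=0c:42:a1:97:f8:2c

    [Network]
    Bond=bond0

    # 05-bond0.network (illustrative only) - bring up the bond itself
    [Match]
    Name=bond0

    [Network]
    DHCP=yes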
Sep 6 01:37:22.519768 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 6 01:37:22.519819 kernel: intel_rapl_common: Found RAPL domain dram Sep 6 01:37:22.560362 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Sep 6 01:37:22.601414 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Sep 6 01:37:22.622408 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:22.657518 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 6 01:37:22.657532 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Sep 6 01:37:22.717362 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Sep 6 01:37:22.717398 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:22.736928 kernel: ipmi_ssif: IPMI SSIF Interface driver Sep 6 01:37:22.725967 systemd-networkd[1390]: bond0: Link UP Sep 6 01:37:22.726197 systemd-networkd[1390]: enp1s0f1np1: Link UP Sep 6 01:37:22.726370 systemd-networkd[1390]: enp1s0f1np1: Gained carrier Sep 6 01:37:22.727368 systemd-networkd[1390]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:f8:2c.network. Sep 6 01:37:22.753394 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Sep 6 01:37:22.773374 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:22.808188 kernel: bond0: active interface up! Sep 6 01:37:22.813690 systemd[1]: Finished systemd-udev-settle.service. Sep 6 01:37:22.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:22.831249 systemd[1]: Starting lvm2-activation-early.service... Sep 6 01:37:22.835362 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Sep 6 01:37:22.847399 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:37:22.882940 systemd[1]: Finished lvm2-activation-early.service. Sep 6 01:37:22.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:22.891625 systemd[1]: Reached target cryptsetup.target. Sep 6 01:37:22.900539 systemd[1]: Starting lvm2-activation.service... Sep 6 01:37:22.904848 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 01:37:22.944620 systemd[1]: Finished lvm2-activation.service. Sep 6 01:37:22.964405 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:22.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:22.984531 systemd[1]: Reached target local-fs-pre.target. Sep 6 01:37:22.987359 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.003472 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 01:37:23.003486 systemd[1]: Reached target local-fs.target. 
Sep 6 01:37:23.010392 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.026480 systemd[1]: Reached target machines.target. Sep 6 01:37:23.033361 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.050118 systemd[1]: Starting ldconfig.service... Sep 6 01:37:23.055360 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.071951 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:37:23.071974 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:37:23.072670 systemd[1]: Starting systemd-boot-update.service... Sep 6 01:37:23.078360 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.093937 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 01:37:23.101421 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.120055 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 01:37:23.124405 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.124569 systemd[1]: Starting systemd-sysext.service... Sep 6 01:37:23.124764 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1439 (bootctl) Sep 6 01:37:23.125411 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 01:37:23.146403 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.162811 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 01:37:23.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.167361 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.169291 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 01:37:23.188400 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.188571 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 01:37:23.188709 systemd[1]: Unmounted usr-share-oem.mount. 
Sep 6 01:37:23.209429 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.209467 kernel: loop0: detected capacity change from 0 to 221472 Sep 6 01:37:23.225362 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.264360 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.284528 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.304451 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.324402 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.344538 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.344567 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.363496 systemd-networkd[1390]: enp1s0f0np0: Link UP Sep 6 01:37:23.363674 systemd-networkd[1390]: bond0: Gained carrier Sep 6 01:37:23.363767 systemd-networkd[1390]: enp1s0f0np0: Gained carrier Sep 6 01:37:23.395455 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Sep 6 01:37:23.395508 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Sep 6 01:37:23.398688 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 01:37:23.399112 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 01:37:23.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.416361 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 01:37:23.417744 systemd-networkd[1390]: enp1s0f1np1: Link DOWN Sep 6 01:37:23.417747 systemd-networkd[1390]: enp1s0f1np1: Lost carrier Sep 6 01:37:23.425835 systemd-fsck[1451]: fsck.fat 4.2 (2021-01-31) Sep 6 01:37:23.425835 systemd-fsck[1451]: /dev/sda1: 790 files, 120761/258078 clusters Sep 6 01:37:23.426563 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 01:37:23.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.439445 systemd[1]: Mounting boot.mount... Sep 6 01:37:23.459367 kernel: loop1: detected capacity change from 0 to 221472 Sep 6 01:37:23.461780 systemd[1]: Mounted boot.mount. Sep 6 01:37:23.475976 (sd-sysext)[1458]: Using extensions 'kubernetes'. Sep 6 01:37:23.476210 (sd-sysext)[1458]: Merged extensions into '/usr'. Sep 6 01:37:23.480179 systemd[1]: Finished systemd-boot-update.service. Sep 6 01:37:23.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.496256 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 01:37:23.497043 systemd[1]: Mounting usr-share-oem.mount... Sep 6 01:37:23.503596 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
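The 'kubernetes' system extension merged into /usr above can be inspected with the stock systemd-sysext tooling; a hypothetical session (not part of this log) would look like:

    systemd-sysext status                     # lists merged extension images and the hierarchies they overlay
    ls /etc/extensions /var/lib/extensions    # typical locations for the raw extension images
    systemd-sysext refresh                    # re-merge after adding or removing an extension image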
Sep 6 01:37:23.504247 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:37:23.512000 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:37:23.518991 systemd[1]: Starting modprobe@loop.service... Sep 6 01:37:23.525478 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:37:23.525547 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:37:23.525615 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 01:37:23.527429 systemd[1]: Mounted usr-share-oem.mount. Sep 6 01:37:23.534630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:37:23.534709 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:37:23.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.541269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:37:23.541354 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:37:23.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.549648 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:37:23.549725 systemd[1]: Finished modprobe@loop.service. Sep 6 01:37:23.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.557689 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:37:23.557748 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:37:23.558248 systemd[1]: Finished systemd-sysext.service. Sep 6 01:37:23.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.572171 systemd[1]: Starting ensure-sysext.service... Sep 6 01:37:23.575361 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Sep 6 01:37:23.586848 ldconfig[1438]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
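The modprobe@*.service units finishing above are systemd template instances that load the kernel module named by the instance string and then exit; a manual equivalent (illustrative only) is:

    systemctl start modprobe@loop.service     # roughly the same as running: modprobe -abq loop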
Sep 6 01:37:23.588021 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 01:37:23.592387 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Sep 6 01:37:23.592938 systemd-networkd[1390]: enp1s0f1np1: Link UP Sep 6 01:37:23.593123 systemd-networkd[1390]: enp1s0f1np1: Gained carrier Sep 6 01:37:23.596989 systemd-tmpfiles[1475]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 01:37:23.598195 systemd-tmpfiles[1475]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 01:37:23.599209 systemd-tmpfiles[1475]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 01:37:23.600699 systemd[1]: Finished ldconfig.service. Sep 6 01:37:23.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.616521 systemd[1]: Reloading. Sep 6 01:37:23.636300 /usr/lib/systemd/system-generators/torcx-generator[1496]: time="2025-09-06T01:37:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:37:23.636513 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Sep 6 01:37:23.636547 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Sep 6 01:37:23.636316 /usr/lib/systemd/system-generators/torcx-generator[1496]: time="2025-09-06T01:37:23Z" level=info msg="torcx already run" Sep 6 01:37:23.692465 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:37:23.692474 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:37:23.703568 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:37:23.747959 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 01:37:23.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 01:37:23.758005 systemd[1]: Starting audit-rules.service... Sep 6 01:37:23.765034 systemd[1]: Starting clean-ca-certificates.service... Sep 6 01:37:23.771000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 01:37:23.771000 audit[1581]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffff5d5ca30 a2=420 a3=0 items=0 ppid=1564 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 01:37:23.771000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 01:37:23.773341 augenrules[1581]: No rules Sep 6 01:37:23.775181 systemd[1]: Starting systemd-journal-catalog-update.service... 
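The "Duplicate line" notices above mean two tmpfiles.d fragments declare the same path, and the later duplicate is ignored. Such an entry is a single line of tmpfiles.d syntax, roughly as follows (illustrative, not the actual Flatcar fragment):

    d /run/lock 0755 root root -     # type, path, mode, owner, group, age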
Sep 6 01:37:23.784304 systemd[1]: Starting systemd-resolved.service... Sep 6 01:37:23.792377 systemd[1]: Starting systemd-timesyncd.service... Sep 6 01:37:23.800137 systemd[1]: Starting systemd-update-utmp.service... Sep 6 01:37:23.807811 systemd[1]: Finished audit-rules.service. Sep 6 01:37:23.814619 systemd[1]: Finished clean-ca-certificates.service. Sep 6 01:37:23.822609 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 01:37:23.835491 systemd[1]: Starting systemd-update-done.service... Sep 6 01:37:23.842471 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 01:37:23.843173 systemd[1]: Finished systemd-update-done.service. Sep 6 01:37:23.853058 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:37:23.853775 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:37:23.861065 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 01:37:23.868030 systemd[1]: Starting modprobe@loop.service... Sep 6 01:37:23.870142 systemd-resolved[1589]: Positive Trust Anchors: Sep 6 01:37:23.870149 systemd-resolved[1589]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 01:37:23.870168 systemd-resolved[1589]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 01:37:23.873974 systemd-resolved[1589]: Using system hostname 'ci-3510.3.8-n-02071fe470'. Sep 6 01:37:23.875470 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:37:23.875542 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:37:23.875602 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 01:37:23.876120 systemd[1]: Started systemd-timesyncd.service. Sep 6 01:37:23.885662 systemd[1]: Started systemd-resolved.service. Sep 6 01:37:23.894674 systemd[1]: Finished systemd-update-utmp.service. Sep 6 01:37:23.903634 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:37:23.903715 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:37:23.911653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:37:23.911727 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:37:23.919658 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:37:23.919760 systemd[1]: Finished modprobe@loop.service. Sep 6 01:37:23.929099 systemd[1]: Reached target network.target. Sep 6 01:37:23.938451 systemd[1]: Reached target nss-lookup.target. Sep 6 01:37:23.947445 systemd[1]: Reached target time-set.target. Sep 6 01:37:23.955582 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 01:37:23.956269 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 01:37:23.963949 systemd[1]: Starting modprobe@efi_pstore.service... 
Sep 6 01:37:23.970921 systemd[1]: Starting modprobe@loop.service... Sep 6 01:37:23.977409 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 01:37:23.977475 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:37:23.977534 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 01:37:23.978102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 01:37:23.978188 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 01:37:23.986588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 01:37:23.986662 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 01:37:23.994583 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 01:37:23.994677 systemd[1]: Finished modprobe@loop.service. Sep 6 01:37:24.002578 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 01:37:24.002653 systemd[1]: Reached target sysinit.target. Sep 6 01:37:24.010486 systemd[1]: Started motdgen.path. Sep 6 01:37:24.017459 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 01:37:24.027523 systemd[1]: Started logrotate.timer. Sep 6 01:37:24.034476 systemd[1]: Started mdadm.timer. Sep 6 01:37:24.041438 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 01:37:24.049413 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 01:37:24.049474 systemd[1]: Reached target paths.target. Sep 6 01:37:24.056430 systemd[1]: Reached target timers.target. Sep 6 01:37:24.063601 systemd[1]: Listening on dbus.socket. Sep 6 01:37:24.071077 systemd[1]: Starting docker.socket... Sep 6 01:37:24.078247 systemd[1]: Listening on sshd.socket. Sep 6 01:37:24.085548 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:37:24.085612 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 01:37:24.086289 systemd[1]: Listening on docker.socket. Sep 6 01:37:24.094585 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 01:37:24.094647 systemd[1]: Reached target sockets.target. Sep 6 01:37:24.102504 systemd[1]: Reached target basic.target. Sep 6 01:37:24.109542 systemd[1]: System is tainted: cgroupsv1 Sep 6 01:37:24.109572 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 01:37:24.109635 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 01:37:24.110318 systemd[1]: Starting containerd.service... Sep 6 01:37:24.117967 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Sep 6 01:37:24.127059 systemd[1]: Starting coreos-metadata.service... Sep 6 01:37:24.134082 systemd[1]: Starting dbus.service... Sep 6 01:37:24.140043 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 01:37:24.144130 jq[1624]: false Sep 6 01:37:24.147224 systemd[1]: Starting extend-filesystems.service... 
Sep 6 01:37:24.148418 coreos-metadata[1617]: Sep 06 01:37:24.148 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 6 01:37:24.150010 dbus-daemon[1623]: [system] SELinux support is enabled Sep 6 01:37:24.153472 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 01:37:24.154176 systemd[1]: Starting modprobe@drm.service... Sep 6 01:37:24.154623 extend-filesystems[1626]: Found loop1 Sep 6 01:37:24.175515 extend-filesystems[1626]: Found sda Sep 6 01:37:24.175515 extend-filesystems[1626]: Found sda1 Sep 6 01:37:24.175515 extend-filesystems[1626]: Found sda2 Sep 6 01:37:24.175515 extend-filesystems[1626]: Found sda3 Sep 6 01:37:24.175515 extend-filesystems[1626]: Found usr Sep 6 01:37:24.175515 extend-filesystems[1626]: Found sda4 Sep 6 01:37:24.175515 extend-filesystems[1626]: Found sda6 Sep 6 01:37:24.175515 extend-filesystems[1626]: Found sda7 Sep 6 01:37:24.175515 extend-filesystems[1626]: Found sda9 Sep 6 01:37:24.175515 extend-filesystems[1626]: Checking size of /dev/sda9 Sep 6 01:37:24.175515 extend-filesystems[1626]: Resized partition /dev/sda9 Sep 6 01:37:24.311443 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Sep 6 01:37:24.311478 coreos-metadata[1620]: Sep 06 01:37:24.156 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 6 01:37:24.162305 systemd[1]: Starting motdgen.service... Sep 6 01:37:24.311668 extend-filesystems[1637]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 01:37:24.187354 systemd[1]: Starting prepare-helm.service... Sep 6 01:37:24.202243 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 01:37:24.217083 systemd[1]: Starting sshd-keygen.service... Sep 6 01:37:24.236066 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 01:37:24.242477 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 01:37:24.243145 systemd[1]: Starting tcsd.service... Sep 6 01:37:24.326969 update_engine[1660]: I0906 01:37:24.317759 1660 main.cc:92] Flatcar Update Engine starting Sep 6 01:37:24.326969 update_engine[1660]: I0906 01:37:24.321727 1660 update_check_scheduler.cc:74] Next update check in 7m47s Sep 6 01:37:24.267085 systemd[1]: Starting update-engine.service... Sep 6 01:37:24.327158 jq[1661]: true Sep 6 01:37:24.281065 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 01:37:24.304937 systemd[1]: Started dbus.service. Sep 6 01:37:24.320099 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 01:37:24.320226 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 01:37:24.320457 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 01:37:24.320547 systemd[1]: Finished modprobe@drm.service. Sep 6 01:37:24.334906 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 01:37:24.335022 systemd[1]: Finished motdgen.service. Sep 6 01:37:24.342047 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 01:37:24.342173 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 01:37:24.353504 jq[1668]: true Sep 6 01:37:24.354010 systemd[1]: Finished ensure-sysext.service. 
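The extend-filesystems run above grows the root partition and then online-resizes the mounted ext4 filesystem on /dev/sda9. Done by hand, the equivalent steps would be roughly the following; the unit's exact commands are not shown in this log, and growpart in particular is an assumption:

    growpart /dev/sda 9     # grow partition 9 to use the remaining disk space (cloud-utils growpart, assumed)
    resize2fs /dev/sda9     # online-grow the mounted ext4 filesystem, matching the resize2fs 1.46.5 output above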
Sep 6 01:37:24.362113 env[1669]: time="2025-09-06T01:37:24.362085157Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 01:37:24.367585 tar[1666]: linux-amd64/helm Sep 6 01:37:24.368932 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Sep 6 01:37:24.369057 systemd[1]: Condition check resulted in tcsd.service being skipped. Sep 6 01:37:24.371164 env[1669]: time="2025-09-06T01:37:24.371142759Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 01:37:24.371227 env[1669]: time="2025-09-06T01:37:24.371217758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:37:24.371775 env[1669]: time="2025-09-06T01:37:24.371756721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:37:24.371819 env[1669]: time="2025-09-06T01:37:24.371775099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:37:24.371943 env[1669]: time="2025-09-06T01:37:24.371930869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:37:24.371981 env[1669]: time="2025-09-06T01:37:24.371942932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 6 01:37:24.371981 env[1669]: time="2025-09-06T01:37:24.371954832Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 01:37:24.371981 env[1669]: time="2025-09-06T01:37:24.371964849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 01:37:24.372068 env[1669]: time="2025-09-06T01:37:24.372029231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:37:24.372188 env[1669]: time="2025-09-06T01:37:24.372177569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 01:37:24.372420 env[1669]: time="2025-09-06T01:37:24.372400579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 01:37:24.372464 env[1669]: time="2025-09-06T01:37:24.372419646Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 01:37:24.372496 env[1669]: time="2025-09-06T01:37:24.372464970Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 01:37:24.372496 env[1669]: time="2025-09-06T01:37:24.372481622Z" level=info msg="metadata content store policy set" policy=shared Sep 6 01:37:24.374290 systemd[1]: Started update-engine.service. Sep 6 01:37:24.382622 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 6 01:37:24.383615 systemd[1]: Started locksmithd.service. Sep 6 01:37:24.387389 env[1669]: time="2025-09-06T01:37:24.387372719Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 01:37:24.387432 env[1669]: time="2025-09-06T01:37:24.387396138Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 01:37:24.387432 env[1669]: time="2025-09-06T01:37:24.387409412Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 01:37:24.387489 env[1669]: time="2025-09-06T01:37:24.387436042Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 01:37:24.387489 env[1669]: time="2025-09-06T01:37:24.387450159Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 01:37:24.387489 env[1669]: time="2025-09-06T01:37:24.387463413Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 01:37:24.387489 env[1669]: time="2025-09-06T01:37:24.387475244Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 01:37:24.389001 env[1669]: time="2025-09-06T01:37:24.387488949Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 01:37:24.389001 env[1669]: time="2025-09-06T01:37:24.387501121Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 01:37:24.389001 env[1669]: time="2025-09-06T01:37:24.387513685Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 6 01:37:24.389001 env[1669]: time="2025-09-06T01:37:24.387521196Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 01:37:24.389001 env[1669]: time="2025-09-06T01:37:24.387527682Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 01:37:24.389001 env[1669]: time="2025-09-06T01:37:24.388949914Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 01:37:24.389152 env[1669]: time="2025-09-06T01:37:24.389020366Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 01:37:24.389265 env[1669]: time="2025-09-06T01:37:24.389254921Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 01:37:24.389298 env[1669]: time="2025-09-06T01:37:24.389275088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 01:37:24.389298 env[1669]: time="2025-09-06T01:37:24.389289061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 01:37:24.389360 env[1669]: time="2025-09-06T01:37:24.389329463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 01:37:24.389360 env[1669]: time="2025-09-06T01:37:24.389343469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 01:37:24.389419 env[1669]: time="2025-09-06T01:37:24.389360130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 6 01:37:24.389419 env[1669]: time="2025-09-06T01:37:24.389371859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 01:37:24.389419 env[1669]: time="2025-09-06T01:37:24.389385067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 01:37:24.389419 env[1669]: time="2025-09-06T01:37:24.389397933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 01:37:24.389419 env[1669]: time="2025-09-06T01:37:24.389410139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 01:37:24.389549 env[1669]: time="2025-09-06T01:37:24.389423401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 01:37:24.389549 env[1669]: time="2025-09-06T01:37:24.389436259Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 01:37:24.389549 env[1669]: time="2025-09-06T01:37:24.389531626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 01:37:24.389549 env[1669]: time="2025-09-06T01:37:24.389545752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 01:37:24.389657 env[1669]: time="2025-09-06T01:37:24.389557252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 01:37:24.389657 env[1669]: time="2025-09-06T01:37:24.389568813Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 6 01:37:24.389657 env[1669]: time="2025-09-06T01:37:24.389582112Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 01:37:24.389657 env[1669]: time="2025-09-06T01:37:24.389592616Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 01:37:24.389657 env[1669]: time="2025-09-06T01:37:24.389608944Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 01:37:24.389657 env[1669]: time="2025-09-06T01:37:24.389636571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 6 01:37:24.389837 env[1669]: time="2025-09-06T01:37:24.389806733Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 01:37:24.394049 env[1669]: time="2025-09-06T01:37:24.389847679Z" level=info msg="Connect containerd service" Sep 6 01:37:24.394049 env[1669]: time="2025-09-06T01:37:24.389871808Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 01:37:24.390437 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 01:37:24.390457 systemd[1]: Reached target system-config.target. Sep 6 01:37:24.396024 env[1669]: time="2025-09-06T01:37:24.396011056Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:37:24.396149 env[1669]: time="2025-09-06T01:37:24.396128412Z" level=info msg="Start subscribing containerd event" Sep 6 01:37:24.396185 env[1669]: time="2025-09-06T01:37:24.396148006Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 01:37:24.396185 env[1669]: time="2025-09-06T01:37:24.396159087Z" level=info msg="Start recovering state" Sep 6 01:37:24.396185 env[1669]: time="2025-09-06T01:37:24.396180889Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 6 01:37:24.396269 env[1669]: time="2025-09-06T01:37:24.396191884Z" level=info msg="Start event monitor" Sep 6 01:37:24.396269 env[1669]: time="2025-09-06T01:37:24.396202408Z" level=info msg="Start snapshots syncer" Sep 6 01:37:24.396269 env[1669]: time="2025-09-06T01:37:24.396207925Z" level=info msg="Start cni network conf syncer for default" Sep 6 01:37:24.396269 env[1669]: time="2025-09-06T01:37:24.396211723Z" level=info msg="Start streaming server" Sep 6 01:37:24.396269 env[1669]: time="2025-09-06T01:37:24.396214377Z" level=info msg="containerd successfully booted in 0.034481s" Sep 6 01:37:24.396838 bash[1702]: Updated "/home/core/.ssh/authorized_keys" Sep 6 01:37:24.399583 systemd[1]: Starting systemd-logind.service... Sep 6 01:37:24.406470 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 01:37:24.406490 systemd[1]: Reached target user-config.target. Sep 6 01:37:24.415401 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 6 01:37:24.415581 systemd[1]: Started containerd.service. Sep 6 01:37:24.422657 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 01:37:24.425760 systemd-logind[1711]: Watching system buttons on /dev/input/event3 (Power Button) Sep 6 01:37:24.425771 systemd-logind[1711]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 6 01:37:24.425782 systemd-logind[1711]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 6 01:37:24.425927 systemd-logind[1711]: New seat seat0. Sep 6 01:37:24.433735 systemd[1]: Started systemd-logind.service. Sep 6 01:37:24.444772 locksmithd[1705]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 01:37:24.628482 systemd-networkd[1390]: bond0: Gained IPv6LL Sep 6 01:37:24.629071 tar[1666]: linux-amd64/LICENSE Sep 6 01:37:24.629110 tar[1666]: linux-amd64/README.md Sep 6 01:37:24.631827 systemd[1]: Finished prepare-helm.service. Sep 6 01:37:24.687362 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Sep 6 01:37:24.715024 extend-filesystems[1637]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Sep 6 01:37:24.715024 extend-filesystems[1637]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 6 01:37:24.715024 extend-filesystems[1637]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Sep 6 01:37:24.759487 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Sep 6 01:37:24.759593 extend-filesystems[1626]: Resized filesystem in /dev/sda9 Sep 6 01:37:24.759593 extend-filesystems[1626]: Found sdb Sep 6 01:37:24.715459 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 01:37:24.715580 systemd[1]: Finished extend-filesystems.service. Sep 6 01:37:24.810405 kernel: sdhci-pci 0000:00:14.5: SDHCI controller found [8086:a375] (rev 10) Sep 6 01:37:25.252069 sshd_keygen[1657]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 01:37:25.263990 systemd[1]: Finished sshd-keygen.service. Sep 6 01:37:25.273774 systemd[1]: Starting issuegen.service... Sep 6 01:37:25.281940 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 01:37:25.282073 systemd[1]: Finished issuegen.service. Sep 6 01:37:25.290627 systemd[1]: Starting systemd-user-sessions.service... Sep 6 01:37:25.299908 systemd[1]: Finished systemd-user-sessions.service. Sep 6 01:37:25.310572 systemd[1]: Started getty@tty1.service. 
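The CRI settings containerd dumps above (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup=false, sandbox image registry.k8s.io/pause:3.6, CNI under /opt/cni/bin and /etc/cni/net.d) correspond roughly to the following config.toml fragment, reconstructed here for illustration; the real file on the host is not shown in this log:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"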
Sep 6 01:37:25.319616 systemd[1]: Started serial-getty@ttyS1.service. Sep 6 01:37:25.327850 systemd[1]: Reached target getty.target. Sep 6 01:37:25.335344 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 01:37:25.346864 systemd[1]: Reached target network-online.target. Sep 6 01:37:25.359248 systemd[1]: Starting kubelet.service... Sep 6 01:37:26.423290 systemd[1]: Started kubelet.service. Sep 6 01:37:27.010531 kubelet[1755]: E0906 01:37:27.010508 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:37:27.011666 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:37:27.011749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:37:30.005445 coreos-metadata[1617]: Sep 06 01:37:30.005 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Sep 6 01:37:30.006342 coreos-metadata[1620]: Sep 06 01:37:30.005 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Sep 6 01:37:30.340115 login[1745]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 01:37:30.347350 login[1744]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 6 01:37:30.347792 systemd-logind[1711]: New session 1 of user core. Sep 6 01:37:30.348217 systemd[1]: Created slice user-500.slice. Sep 6 01:37:30.348734 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 01:37:30.349919 systemd-logind[1711]: New session 2 of user core. Sep 6 01:37:30.354037 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 01:37:30.354666 systemd[1]: Starting user@500.service... Sep 6 01:37:30.356661 (systemd)[1776]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:37:30.500592 systemd[1776]: Queued start job for default target default.target. Sep 6 01:37:30.500694 systemd[1776]: Reached target paths.target. Sep 6 01:37:30.500705 systemd[1776]: Reached target sockets.target. Sep 6 01:37:30.500714 systemd[1776]: Reached target timers.target. Sep 6 01:37:30.500721 systemd[1776]: Reached target basic.target. Sep 6 01:37:30.500742 systemd[1776]: Reached target default.target. Sep 6 01:37:30.500756 systemd[1776]: Startup finished in 140ms. Sep 6 01:37:30.500818 systemd[1]: Started user@500.service. Sep 6 01:37:30.501434 systemd[1]: Started session-1.scope. Sep 6 01:37:30.501752 systemd[1]: Started session-2.scope. Sep 6 01:37:30.826546 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Sep 6 01:37:30.826697 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Sep 6 01:37:31.005685 coreos-metadata[1617]: Sep 06 01:37:31.005 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 6 01:37:31.006588 coreos-metadata[1620]: Sep 06 01:37:31.005 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 6 01:37:31.069906 systemd[1]: Created slice system-sshd.slice. Sep 6 01:37:31.070677 systemd[1]: Started sshd@0-139.178.94.47:22-139.178.68.195:56378.service. 
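The kubelet failure above is expected at this point in the boot: /var/lib/kubelet/config.yaml is normally written later (for example by kubeadm during init or join), and the unit keeps getting restarted until the file appears, as the repeats further down show. A minimal KubeletConfiguration of the kind that ends up at that path might look like this; the values are hypothetical, not the file this node eventually receives:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
    cgroupDriver: cgroupfs            # assumed, consistent with SystemdCgroup=false in the containerd dump above
    clusterDNS:
      - 10.96.0.10                    # hypothetical cluster DNS address
    clusterDomain: cluster.local
    staticPodPath: /etc/kubernetes/manifests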
Sep 6 01:37:31.117606 sshd[1798]: Accepted publickey for core from 139.178.68.195 port 56378 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:37:31.120864 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:37:31.131902 systemd-logind[1711]: New session 3 of user core. Sep 6 01:37:31.134153 systemd[1]: Started session-3.scope. Sep 6 01:37:31.187614 systemd[1]: Started sshd@1-139.178.94.47:22-139.178.68.195:56390.service. Sep 6 01:37:31.219258 sshd[1803]: Accepted publickey for core from 139.178.68.195 port 56390 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:37:31.219987 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:37:31.222243 systemd-logind[1711]: New session 4 of user core. Sep 6 01:37:31.222946 systemd[1]: Started session-4.scope. Sep 6 01:37:31.272941 sshd[1803]: pam_unix(sshd:session): session closed for user core Sep 6 01:37:31.275160 systemd[1]: Started sshd@2-139.178.94.47:22-139.178.68.195:56394.service. Sep 6 01:37:31.275624 systemd[1]: sshd@1-139.178.94.47:22-139.178.68.195:56390.service: Deactivated successfully. Sep 6 01:37:31.276263 systemd-logind[1711]: Session 4 logged out. Waiting for processes to exit. Sep 6 01:37:31.276334 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 01:37:31.276978 systemd-logind[1711]: Removed session 4. Sep 6 01:37:31.310983 sshd[1809]: Accepted publickey for core from 139.178.68.195 port 56394 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:37:31.312813 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:37:31.320260 systemd-logind[1711]: New session 5 of user core. Sep 6 01:37:31.322102 systemd[1]: Started session-5.scope. Sep 6 01:37:31.340965 systemd-timesyncd[1591]: Contacted time server 66.118.230.14:123 (0.flatcar.pool.ntp.org). Sep 6 01:37:31.341115 systemd-timesyncd[1591]: Initial clock synchronization to Sat 2025-09-06 01:37:31.590219 UTC. Sep 6 01:37:31.392082 sshd[1809]: pam_unix(sshd:session): session closed for user core Sep 6 01:37:31.393312 systemd[1]: sshd@2-139.178.94.47:22-139.178.68.195:56394.service: Deactivated successfully. Sep 6 01:37:31.393891 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 01:37:31.393920 systemd-logind[1711]: Session 5 logged out. Waiting for processes to exit. Sep 6 01:37:31.394337 systemd-logind[1711]: Removed session 5. Sep 6 01:37:32.035877 coreos-metadata[1620]: Sep 06 01:37:32.035 INFO Fetch successful Sep 6 01:37:32.074254 systemd[1]: Finished coreos-metadata.service. Sep 6 01:37:32.075150 systemd[1]: Started packet-phone-home.service. Sep 6 01:37:32.080680 curl[1822]: % Total % Received % Xferd Average Speed Time Time Time Current Sep 6 01:37:32.080882 curl[1822]: Dload Upload Total Spent Left Speed Sep 6 01:37:32.224631 coreos-metadata[1617]: Sep 06 01:37:32.224 INFO Fetch successful Sep 6 01:37:32.263338 unknown[1617]: wrote ssh authorized keys file for user: core Sep 6 01:37:32.276484 update-ssh-keys[1824]: Updated "/home/core/.ssh/authorized_keys" Sep 6 01:37:32.276762 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Sep 6 01:37:32.276954 systemd[1]: Reached target multi-user.target. Sep 6 01:37:32.277732 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 01:37:32.281678 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 01:37:32.281794 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Sep 6 01:37:32.281946 systemd[1]: Startup finished in 29.944s (kernel) + 15.369s (userspace) = 45.314s. Sep 6 01:37:32.601261 curl[1822]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Sep 6 01:37:32.603700 systemd[1]: packet-phone-home.service: Deactivated successfully. Sep 6 01:37:37.071681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 6 01:37:37.072233 systemd[1]: Stopped kubelet.service. Sep 6 01:37:37.074978 systemd[1]: Starting kubelet.service... Sep 6 01:37:37.317037 systemd[1]: Started kubelet.service. Sep 6 01:37:37.339289 kubelet[1838]: E0906 01:37:37.339217 1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:37:37.341123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:37:37.341211 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:37:41.574894 systemd[1]: Started sshd@3-139.178.94.47:22-139.178.68.195:39086.service. Sep 6 01:37:41.612578 sshd[1857]: Accepted publickey for core from 139.178.68.195 port 39086 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:37:41.613229 sshd[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:37:41.615608 systemd-logind[1711]: New session 6 of user core. Sep 6 01:37:41.616211 systemd[1]: Started session-6.scope. Sep 6 01:37:41.670257 sshd[1857]: pam_unix(sshd:session): session closed for user core Sep 6 01:37:41.672055 systemd[1]: Started sshd@4-139.178.94.47:22-139.178.68.195:39092.service. Sep 6 01:37:41.672466 systemd[1]: sshd@3-139.178.94.47:22-139.178.68.195:39086.service: Deactivated successfully. Sep 6 01:37:41.672942 systemd-logind[1711]: Session 6 logged out. Waiting for processes to exit. Sep 6 01:37:41.673012 systemd[1]: session-6.scope: Deactivated successfully. Sep 6 01:37:41.673521 systemd-logind[1711]: Removed session 6. Sep 6 01:37:41.703652 sshd[1863]: Accepted publickey for core from 139.178.68.195 port 39092 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:37:41.704423 sshd[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:37:41.706932 systemd-logind[1711]: New session 7 of user core. Sep 6 01:37:41.707665 systemd[1]: Started session-7.scope. Sep 6 01:37:41.756946 sshd[1863]: pam_unix(sshd:session): session closed for user core Sep 6 01:37:41.761496 systemd[1]: Started sshd@5-139.178.94.47:22-139.178.68.195:39096.service. Sep 6 01:37:41.762201 systemd[1]: sshd@4-139.178.94.47:22-139.178.68.195:39092.service: Deactivated successfully. Sep 6 01:37:41.762705 systemd-logind[1711]: Session 7 logged out. Waiting for processes to exit. Sep 6 01:37:41.762739 systemd[1]: session-7.scope: Deactivated successfully. Sep 6 01:37:41.763166 systemd-logind[1711]: Removed session 7. Sep 6 01:37:41.798478 sshd[1871]: Accepted publickey for core from 139.178.68.195 port 39096 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:37:41.799245 sshd[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:37:41.801873 systemd-logind[1711]: New session 8 of user core. Sep 6 01:37:41.802651 systemd[1]: Started session-8.scope. 
Sep 6 01:37:41.868675 sshd[1871]: pam_unix(sshd:session): session closed for user core Sep 6 01:37:41.877316 systemd[1]: Started sshd@6-139.178.94.47:22-139.178.68.195:39110.service. Sep 6 01:37:41.879588 systemd[1]: sshd@5-139.178.94.47:22-139.178.68.195:39096.service: Deactivated successfully. Sep 6 01:37:41.882596 systemd-logind[1711]: Session 8 logged out. Waiting for processes to exit. Sep 6 01:37:41.882656 systemd[1]: session-8.scope: Deactivated successfully. Sep 6 01:37:41.885635 systemd-logind[1711]: Removed session 8. Sep 6 01:37:41.933577 sshd[1878]: Accepted publickey for core from 139.178.68.195 port 39110 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:37:41.934268 sshd[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:37:41.936741 systemd-logind[1711]: New session 9 of user core. Sep 6 01:37:41.937121 systemd[1]: Started session-9.scope. Sep 6 01:37:42.020790 sudo[1883]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 01:37:42.021514 sudo[1883]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 01:37:42.072565 systemd[1]: Starting docker.service... Sep 6 01:37:42.108192 env[1899]: time="2025-09-06T01:37:42.108150064Z" level=info msg="Starting up" Sep 6 01:37:42.109198 env[1899]: time="2025-09-06T01:37:42.109174758Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:37:42.109198 env[1899]: time="2025-09-06T01:37:42.109192978Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:37:42.109328 env[1899]: time="2025-09-06T01:37:42.109225295Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:37:42.109328 env[1899]: time="2025-09-06T01:37:42.109244518Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:37:42.110608 env[1899]: time="2025-09-06T01:37:42.110565988Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 6 01:37:42.110608 env[1899]: time="2025-09-06T01:37:42.110585255Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 6 01:37:42.110608 env[1899]: time="2025-09-06T01:37:42.110604782Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 6 01:37:42.110741 env[1899]: time="2025-09-06T01:37:42.110625648Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 6 01:37:42.280375 env[1899]: time="2025-09-06T01:37:42.280181681Z" level=warning msg="Your kernel does not support cgroup blkio weight" Sep 6 01:37:42.280375 env[1899]: time="2025-09-06T01:37:42.280222305Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Sep 6 01:37:42.280816 env[1899]: time="2025-09-06T01:37:42.280494726Z" level=info msg="Loading containers: start." Sep 6 01:37:42.442382 kernel: Initializing XFRM netlink socket Sep 6 01:37:42.489997 env[1899]: time="2025-09-06T01:37:42.489966656Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 6 01:37:42.538457 systemd-networkd[1390]: docker0: Link UP Sep 6 01:37:42.562422 env[1899]: time="2025-09-06T01:37:42.562351851Z" level=info msg="Loading containers: done." 
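As the daemon message above notes, docker0's default 172.17.0.0/16 subnet can be overridden with --bip; the usual way to persist that is an entry in /etc/docker/daemon.json (hypothetical subnet, not taken from this log):

    {
      "bip": "172.18.0.1/24"
    }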
Sep 6 01:37:42.569155 env[1899]: time="2025-09-06T01:37:42.569132093Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 6 01:37:42.569302 env[1899]: time="2025-09-06T01:37:42.569284415Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 6 01:37:42.569392 env[1899]: time="2025-09-06T01:37:42.569377785Z" level=info msg="Daemon has completed initialization" Sep 6 01:37:42.578811 systemd[1]: Started docker.service. Sep 6 01:37:42.584399 env[1899]: time="2025-09-06T01:37:42.584325392Z" level=info msg="API listen on /run/docker.sock" Sep 6 01:37:43.915723 env[1669]: time="2025-09-06T01:37:43.915615688Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 6 01:37:44.554882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134686651.mount: Deactivated successfully. Sep 6 01:37:45.912313 env[1669]: time="2025-09-06T01:37:45.912285334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:45.913039 env[1669]: time="2025-09-06T01:37:45.913009312Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:45.914094 env[1669]: time="2025-09-06T01:37:45.914080222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:45.915156 env[1669]: time="2025-09-06T01:37:45.915111174Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:45.916040 env[1669]: time="2025-09-06T01:37:45.915996448Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 6 01:37:45.916419 env[1669]: time="2025-09-06T01:37:45.916402806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 6 01:37:47.357126 env[1669]: time="2025-09-06T01:37:47.357072399Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:47.357731 env[1669]: time="2025-09-06T01:37:47.357680176Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:47.359045 env[1669]: time="2025-09-06T01:37:47.359005137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:47.359898 env[1669]: time="2025-09-06T01:37:47.359858057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 6 01:37:47.360363 env[1669]: time="2025-09-06T01:37:47.360322162Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 6 01:37:47.360709 env[1669]: time="2025-09-06T01:37:47.360689902Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 6 01:37:47.571307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 6 01:37:47.571919 systemd[1]: Stopped kubelet.service. Sep 6 01:37:47.575351 systemd[1]: Starting kubelet.service... Sep 6 01:37:47.817621 systemd[1]: Started kubelet.service. Sep 6 01:37:47.840080 kubelet[2061]: E0906 01:37:47.840055 2061 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 01:37:47.841051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 01:37:47.841136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 01:37:48.628430 env[1669]: time="2025-09-06T01:37:48.628384384Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:48.628993 env[1669]: time="2025-09-06T01:37:48.628944940Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:48.630312 env[1669]: time="2025-09-06T01:37:48.630273878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:48.631074 env[1669]: time="2025-09-06T01:37:48.631033285Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:48.631517 env[1669]: time="2025-09-06T01:37:48.631476696Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 6 01:37:48.631907 env[1669]: time="2025-09-06T01:37:48.631851022Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 01:37:49.575872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152470096.mount: Deactivated successfully. 
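Editor's note: the kubelet exit above (open /var/lib/kubelet/config.yaml: no such file or directory) is the usual pre-join state — the unit keeps restarting (restart counter is at 2) until kubeadm writes the config file. The sketch below shows the shape of the file it expects; on this host kubeadm generates the real one, and every value here is illustrative.

#!/usr/bin/env python3
# Sketch: write a minimal KubeletConfiguration so kubelet.service can start.
# On a kubeadm-managed node this file is produced by `kubeadm init`/`join`;
# the values below are illustrative, not copied from this host.
from pathlib import Path

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs                    # matches the CgroupDriver reported later in this log
staticPodPath: /etc/kubernetes/manifests  # where the control-plane pod manifests land
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print(f"wrote {path}")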
Sep 6 01:37:49.964630 env[1669]: time="2025-09-06T01:37:49.964527741Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:49.965729 env[1669]: time="2025-09-06T01:37:49.965709246Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:49.967297 env[1669]: time="2025-09-06T01:37:49.967261065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:49.967880 env[1669]: time="2025-09-06T01:37:49.967836215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:49.968181 env[1669]: time="2025-09-06T01:37:49.968139607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 6 01:37:49.968498 env[1669]: time="2025-09-06T01:37:49.968465170Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 6 01:37:50.453892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3166551268.mount: Deactivated successfully. Sep 6 01:37:51.263996 env[1669]: time="2025-09-06T01:37:51.263940903Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:51.264706 env[1669]: time="2025-09-06T01:37:51.264675187Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:51.265835 env[1669]: time="2025-09-06T01:37:51.265792671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:51.266985 env[1669]: time="2025-09-06T01:37:51.266944285Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:51.267550 env[1669]: time="2025-09-06T01:37:51.267506874Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 6 01:37:51.267961 env[1669]: time="2025-09-06T01:37:51.267926969Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 6 01:37:51.843760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794962178.mount: Deactivated successfully. 
Sep 6 01:37:51.844681 env[1669]: time="2025-09-06T01:37:51.844664305Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:51.845207 env[1669]: time="2025-09-06T01:37:51.845196081Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:51.845966 env[1669]: time="2025-09-06T01:37:51.845955833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:51.846657 env[1669]: time="2025-09-06T01:37:51.846616331Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:51.847283 env[1669]: time="2025-09-06T01:37:51.847252474Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 6 01:37:51.847561 env[1669]: time="2025-09-06T01:37:51.847549386Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 6 01:37:52.436522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558744097.mount: Deactivated successfully. Sep 6 01:37:54.094454 env[1669]: time="2025-09-06T01:37:54.094394169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:54.095066 env[1669]: time="2025-09-06T01:37:54.095020707Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:54.096447 env[1669]: time="2025-09-06T01:37:54.096370192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:54.097343 env[1669]: time="2025-09-06T01:37:54.097303533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:54.097889 env[1669]: time="2025-09-06T01:37:54.097851716Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 6 01:37:55.954216 systemd[1]: Stopped kubelet.service. Sep 6 01:37:55.955613 systemd[1]: Starting kubelet.service... Sep 6 01:37:55.971251 systemd[1]: Reloading. 
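Editor's note: the PullImage lines between 01:37:43 and 01:37:54 fetch the full control-plane image set. The image references in the sketch below are copied from those lines; pre-pulling them through the CRI is one way to shorten this phase on the next node. The sketch assumes crictl is installed and configured for the containerd socket the kubelet uses.

#!/usr/bin/env python3
# Sketch: pre-pull the same control-plane images the kubelet fetched above,
# going through the CRI just like the PullImage events do.
import subprocess

IMAGES = [
    "registry.k8s.io/kube-apiserver:v1.31.12",
    "registry.k8s.io/kube-controller-manager:v1.31.12",
    "registry.k8s.io/kube-scheduler:v1.31.12",
    "registry.k8s.io/kube-proxy:v1.31.12",
    "registry.k8s.io/coredns/coredns:v1.11.3",
    "registry.k8s.io/pause:3.10",
    "registry.k8s.io/etcd:3.5.15-0",
]

for image in IMAGES:
    subprocess.run(["crictl", "pull", image], check=True)

# Show what the CRI image store ended up with.
subprocess.run(["crictl", "images"], check=True)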
Sep 6 01:37:56.000280 /usr/lib/systemd/system-generators/torcx-generator[2148]: time="2025-09-06T01:37:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:37:56.000295 /usr/lib/systemd/system-generators/torcx-generator[2148]: time="2025-09-06T01:37:56Z" level=info msg="torcx already run" Sep 6 01:37:56.060480 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:37:56.060489 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:37:56.073653 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:37:56.146083 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 6 01:37:56.146125 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 6 01:37:56.146258 systemd[1]: Stopped kubelet.service. Sep 6 01:37:56.147120 systemd[1]: Starting kubelet.service... Sep 6 01:37:56.381746 systemd[1]: Started kubelet.service. Sep 6 01:37:56.399779 kubelet[2222]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:37:56.399779 kubelet[2222]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 01:37:56.399779 kubelet[2222]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
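Editor's note: the CPUShares=/MemoryLimit= warnings above come from directives inside locksmithd.service itself (the docker.socket note is only about the legacy /var/run path, which systemd already rewrites). One way to retire the deprecated directives is an overriding copy of the unit in /etc/systemd/system with the modern names; the sketch below is illustrative, and the CPU values would still need rescaling by hand because shares (default 1024) and weight (default 100) use different scales.

#!/usr/bin/env python3
# Sketch: copy locksmithd.service into /etc and swap the deprecated
# directive names for their modern equivalents.
import subprocess
from pathlib import Path

src = Path("/usr/lib/systemd/system/locksmithd.service")
dst = Path("/etc/systemd/system/locksmithd.service")

unit = src.read_text()
unit = unit.replace("CPUShares=", "CPUWeight=")   # numeric value still needs rescaling
unit = unit.replace("MemoryLimit=", "MemoryMax=")
dst.write_text(unit)

subprocess.run(["systemctl", "daemon-reload"], check=True)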
Sep 6 01:37:56.400087 kubelet[2222]: I0906 01:37:56.399819 2222 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:37:56.565913 kubelet[2222]: I0906 01:37:56.565865 2222 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 01:37:56.565913 kubelet[2222]: I0906 01:37:56.565881 2222 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:37:56.566048 kubelet[2222]: I0906 01:37:56.566006 2222 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 01:37:56.586071 kubelet[2222]: E0906 01:37:56.586031 2222 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.94.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.94.47:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:37:56.587497 kubelet[2222]: I0906 01:37:56.587449 2222 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:37:56.624671 kubelet[2222]: E0906 01:37:56.624576 2222 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:37:56.624671 kubelet[2222]: I0906 01:37:56.624646 2222 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 01:37:56.665546 kubelet[2222]: I0906 01:37:56.665339 2222 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 6 01:37:56.667663 kubelet[2222]: I0906 01:37:56.667575 2222 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 01:37:56.667943 kubelet[2222]: I0906 01:37:56.667836 2222 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:37:56.668342 kubelet[2222]: I0906 01:37:56.667906 2222 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-02071fe470","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 01:37:56.668342 kubelet[2222]: I0906 01:37:56.668341 2222 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 01:37:56.668746 kubelet[2222]: I0906 01:37:56.668423 2222 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 01:37:56.668746 kubelet[2222]: I0906 01:37:56.668627 2222 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:37:56.679536 kubelet[2222]: I0906 01:37:56.679484 2222 kubelet.go:408] "Attempting to sync node with API server" Sep 6 01:37:56.679536 kubelet[2222]: I0906 01:37:56.679515 2222 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:37:56.679678 kubelet[2222]: I0906 01:37:56.679558 2222 kubelet.go:314] "Adding apiserver pod source" Sep 6 01:37:56.679678 kubelet[2222]: I0906 01:37:56.679584 2222 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:37:56.707471 kubelet[2222]: W0906 01:37:56.707374 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.94.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.94.47:6443: connect: connection refused Sep 6 01:37:56.707471 kubelet[2222]: E0906 01:37:56.707461 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://139.178.94.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.47:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:37:56.708797 kubelet[2222]: W0906 01:37:56.708712 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.94.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-02071fe470&limit=500&resourceVersion=0": dial tcp 139.178.94.47:6443: connect: connection refused Sep 6 01:37:56.708797 kubelet[2222]: E0906 01:37:56.708786 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.94.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-02071fe470&limit=500&resourceVersion=0\": dial tcp 139.178.94.47:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:37:56.710915 kubelet[2222]: I0906 01:37:56.710862 2222 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:37:56.711477 kubelet[2222]: I0906 01:37:56.711426 2222 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:37:56.712478 kubelet[2222]: W0906 01:37:56.712427 2222 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 6 01:37:56.716123 kubelet[2222]: I0906 01:37:56.716068 2222 server.go:1274] "Started kubelet" Sep 6 01:37:56.716261 kubelet[2222]: I0906 01:37:56.716185 2222 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:37:56.716261 kubelet[2222]: I0906 01:37:56.716191 2222 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:37:56.716595 kubelet[2222]: I0906 01:37:56.716569 2222 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:37:56.718127 kubelet[2222]: I0906 01:37:56.718084 2222 server.go:449] "Adding debug handlers to kubelet server" Sep 6 01:37:56.727063 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 6 01:37:56.727158 kubelet[2222]: I0906 01:37:56.727129 2222 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:37:56.727254 kubelet[2222]: I0906 01:37:56.727208 2222 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:37:56.727342 kubelet[2222]: E0906 01:37:56.727308 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-02071fe470\" not found" Sep 6 01:37:56.727462 kubelet[2222]: I0906 01:37:56.727369 2222 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 01:37:56.728044 kubelet[2222]: I0906 01:37:56.728006 2222 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 01:37:56.728044 kubelet[2222]: I0906 01:37:56.728024 2222 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:37:56.740269 kubelet[2222]: W0906 01:37:56.728068 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.94.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.94.47:6443: connect: connection refused Sep 6 01:37:56.740432 kubelet[2222]: E0906 01:37:56.740384 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.94.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.47:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:37:56.740532 kubelet[2222]: E0906 01:37:56.740443 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-02071fe470?timeout=10s\": dial tcp 139.178.94.47:6443: connect: connection refused" interval="200ms" Sep 6 01:37:56.740647 kubelet[2222]: I0906 01:37:56.740560 2222 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:37:56.740767 kubelet[2222]: I0906 01:37:56.740721 2222 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:37:56.740926 kubelet[2222]: E0906 01:37:56.740890 2222 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 01:37:56.744558 kubelet[2222]: I0906 01:37:56.744507 2222 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:37:56.746403 kubelet[2222]: E0906 01:37:56.744113 2222 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.94.47:6443/api/v1/namespaces/default/events\": dial tcp 139.178.94.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-02071fe470.18628dc3cc715bf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-02071fe470,UID:ci-3510.3.8-n-02071fe470,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-02071fe470,},FirstTimestamp:2025-09-06 01:37:56.716035058 +0000 UTC m=+0.332156280,LastTimestamp:2025-09-06 01:37:56.716035058 +0000 UTC m=+0.332156280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-02071fe470,}" Sep 6 01:37:56.761607 kubelet[2222]: I0906 01:37:56.761529 2222 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 01:37:56.762906 kubelet[2222]: I0906 01:37:56.762853 2222 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 6 01:37:56.762906 kubelet[2222]: I0906 01:37:56.762884 2222 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 01:37:56.762906 kubelet[2222]: I0906 01:37:56.762908 2222 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 01:37:56.763075 kubelet[2222]: E0906 01:37:56.762977 2222 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 01:37:56.763573 kubelet[2222]: W0906 01:37:56.763493 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.94.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.94.47:6443: connect: connection refused Sep 6 01:37:56.763683 kubelet[2222]: E0906 01:37:56.763599 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.94.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.47:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:37:56.828500 kubelet[2222]: E0906 01:37:56.828403 2222 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-02071fe470\" not found" Sep 6 01:37:56.863424 kubelet[2222]: E0906 01:37:56.863301 2222 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 01:37:56.903187 kubelet[2222]: I0906 01:37:56.903139 2222 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 01:37:56.903187 kubelet[2222]: I0906 01:37:56.903174 2222 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 01:37:56.903567 kubelet[2222]: I0906 01:37:56.903214 2222 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:37:56.905074 kubelet[2222]: I0906 01:37:56.904998 2222 policy_none.go:49] "None policy: Start" Sep 6 01:37:56.906531 kubelet[2222]: I0906 01:37:56.906481 2222 
memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 01:37:56.906751 kubelet[2222]: I0906 01:37:56.906548 2222 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:37:56.916424 kubelet[2222]: I0906 01:37:56.916242 2222 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 01:37:56.916679 kubelet[2222]: I0906 01:37:56.916637 2222 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:37:56.916880 kubelet[2222]: I0906 01:37:56.916670 2222 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:37:56.917254 kubelet[2222]: I0906 01:37:56.917125 2222 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:37:56.918579 kubelet[2222]: E0906 01:37:56.918569 2222 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-02071fe470\" not found" Sep 6 01:37:56.942007 kubelet[2222]: E0906 01:37:56.941879 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-02071fe470?timeout=10s\": dial tcp 139.178.94.47:6443: connect: connection refused" interval="400ms" Sep 6 01:37:57.021342 kubelet[2222]: I0906 01:37:57.021215 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.022158 kubelet[2222]: E0906 01:37:57.022046 2222 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.94.47:6443/api/v1/nodes\": dial tcp 139.178.94.47:6443: connect: connection refused" node="ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.130040 kubelet[2222]: I0906 01:37:57.129986 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b57a6311f4916e4bdb8ba148923d54c6-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-02071fe470\" (UID: \"b57a6311f4916e4bdb8ba148923d54c6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.130040 kubelet[2222]: I0906 01:37:57.130017 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad4c284b65b03a83689a6d6282c4fdae-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" (UID: \"ad4c284b65b03a83689a6d6282c4fdae\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.130040 kubelet[2222]: I0906 01:37:57.130040 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad4c284b65b03a83689a6d6282c4fdae-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" (UID: \"ad4c284b65b03a83689a6d6282c4fdae\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.130218 kubelet[2222]: I0906 01:37:57.130058 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad4c284b65b03a83689a6d6282c4fdae-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" (UID: \"ad4c284b65b03a83689a6d6282c4fdae\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.130218 kubelet[2222]: 
I0906 01:37:57.130073 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4f60ba8edcd8e428d346301365b662c-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-02071fe470\" (UID: \"b4f60ba8edcd8e428d346301365b662c\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.130218 kubelet[2222]: I0906 01:37:57.130088 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b57a6311f4916e4bdb8ba148923d54c6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-02071fe470\" (UID: \"b57a6311f4916e4bdb8ba148923d54c6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.130218 kubelet[2222]: I0906 01:37:57.130102 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b57a6311f4916e4bdb8ba148923d54c6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-02071fe470\" (UID: \"b57a6311f4916e4bdb8ba148923d54c6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.130218 kubelet[2222]: I0906 01:37:57.130126 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad4c284b65b03a83689a6d6282c4fdae-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" (UID: \"ad4c284b65b03a83689a6d6282c4fdae\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.130370 kubelet[2222]: I0906 01:37:57.130146 2222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad4c284b65b03a83689a6d6282c4fdae-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" (UID: \"ad4c284b65b03a83689a6d6282c4fdae\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.226675 kubelet[2222]: I0906 01:37:57.226464 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.227340 kubelet[2222]: E0906 01:37:57.227223 2222 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.94.47:6443/api/v1/nodes\": dial tcp 139.178.94.47:6443: connect: connection refused" node="ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.343238 kubelet[2222]: E0906 01:37:57.343089 2222 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-02071fe470?timeout=10s\": dial tcp 139.178.94.47:6443: connect: connection refused" interval="800ms" Sep 6 01:37:57.381040 env[1669]: time="2025-09-06T01:37:57.380897972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-02071fe470,Uid:b57a6311f4916e4bdb8ba148923d54c6,Namespace:kube-system,Attempt:0,}" Sep 6 01:37:57.384182 env[1669]: time="2025-09-06T01:37:57.384065045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-02071fe470,Uid:ad4c284b65b03a83689a6d6282c4fdae,Namespace:kube-system,Attempt:0,}" Sep 6 01:37:57.384182 env[1669]: time="2025-09-06T01:37:57.384108708Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-02071fe470,Uid:b4f60ba8edcd8e428d346301365b662c,Namespace:kube-system,Attempt:0,}" Sep 6 01:37:57.609241 kubelet[2222]: W0906 01:37:57.609085 2222 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.94.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.94.47:6443: connect: connection refused Sep 6 01:37:57.610164 kubelet[2222]: E0906 01:37:57.609253 2222 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.94.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.47:6443: connect: connection refused" logger="UnhandledError" Sep 6 01:37:57.632118 kubelet[2222]: I0906 01:37:57.632016 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.632828 kubelet[2222]: E0906 01:37:57.632727 2222 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.94.47:6443/api/v1/nodes\": dial tcp 139.178.94.47:6443: connect: connection refused" node="ci-3510.3.8-n-02071fe470" Sep 6 01:37:57.927980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177129064.mount: Deactivated successfully. Sep 6 01:37:57.929321 env[1669]: time="2025-09-06T01:37:57.929300563Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.930126 env[1669]: time="2025-09-06T01:37:57.930110780Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.930546 env[1669]: time="2025-09-06T01:37:57.930521852Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.931227 env[1669]: time="2025-09-06T01:37:57.931214877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.931651 env[1669]: time="2025-09-06T01:37:57.931625053Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.932885 env[1669]: time="2025-09-06T01:37:57.932872841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.934785 env[1669]: time="2025-09-06T01:37:57.934748535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.936453 env[1669]: time="2025-09-06T01:37:57.936427174Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.937100 env[1669]: time="2025-09-06T01:37:57.937083950Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.937982 env[1669]: time="2025-09-06T01:37:57.937951129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.938544 env[1669]: time="2025-09-06T01:37:57.938513890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.939177 env[1669]: time="2025-09-06T01:37:57.939160231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:37:57.943501 env[1669]: time="2025-09-06T01:37:57.943467482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:37:57.943501 env[1669]: time="2025-09-06T01:37:57.943489287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:37:57.943501 env[1669]: time="2025-09-06T01:37:57.943496295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:37:57.943622 env[1669]: time="2025-09-06T01:37:57.943568776Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57ad5cca89e99ffc9a887dde9fb48ea672387c00738e9ed72e268fc338ca94d9 pid=2272 runtime=io.containerd.runc.v2 Sep 6 01:37:57.944898 env[1669]: time="2025-09-06T01:37:57.944862372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:37:57.944898 env[1669]: time="2025-09-06T01:37:57.944862393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:37:57.944898 env[1669]: time="2025-09-06T01:37:57.944882878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:37:57.944898 env[1669]: time="2025-09-06T01:37:57.944889852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:37:57.944898 env[1669]: time="2025-09-06T01:37:57.944883543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:37:57.944898 env[1669]: time="2025-09-06T01:37:57.944890176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:37:57.945063 env[1669]: time="2025-09-06T01:37:57.944960251Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ecb6df970591d9506ca605088fde5cf97c830fb572b0c75fc40b8d9f6630350 pid=2294 runtime=io.containerd.runc.v2 Sep 6 01:37:57.945063 env[1669]: time="2025-09-06T01:37:57.944955272Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6790cc7fde887accc9de64f8435e530807b8d5c34b80dc2e87a29f7588c1bbee pid=2293 runtime=io.containerd.runc.v2 Sep 6 01:37:57.971391 env[1669]: time="2025-09-06T01:37:57.971352771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-02071fe470,Uid:b4f60ba8edcd8e428d346301365b662c,Namespace:kube-system,Attempt:0,} returns sandbox id \"57ad5cca89e99ffc9a887dde9fb48ea672387c00738e9ed72e268fc338ca94d9\"" Sep 6 01:37:57.971672 env[1669]: time="2025-09-06T01:37:57.971654504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-02071fe470,Uid:ad4c284b65b03a83689a6d6282c4fdae,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ecb6df970591d9506ca605088fde5cf97c830fb572b0c75fc40b8d9f6630350\"" Sep 6 01:37:57.972325 env[1669]: time="2025-09-06T01:37:57.972306468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-02071fe470,Uid:b57a6311f4916e4bdb8ba148923d54c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6790cc7fde887accc9de64f8435e530807b8d5c34b80dc2e87a29f7588c1bbee\"" Sep 6 01:37:57.972876 env[1669]: time="2025-09-06T01:37:57.972862740Z" level=info msg="CreateContainer within sandbox \"4ecb6df970591d9506ca605088fde5cf97c830fb572b0c75fc40b8d9f6630350\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 01:37:57.972923 env[1669]: time="2025-09-06T01:37:57.972880140Z" level=info msg="CreateContainer within sandbox \"57ad5cca89e99ffc9a887dde9fb48ea672387c00738e9ed72e268fc338ca94d9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 01:37:57.973189 env[1669]: time="2025-09-06T01:37:57.973176365Z" level=info msg="CreateContainer within sandbox \"6790cc7fde887accc9de64f8435e530807b8d5c34b80dc2e87a29f7588c1bbee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 01:37:57.978731 env[1669]: time="2025-09-06T01:37:57.978714788Z" level=info msg="CreateContainer within sandbox \"4ecb6df970591d9506ca605088fde5cf97c830fb572b0c75fc40b8d9f6630350\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b13dd71d1ec01b06ba8a7b214e8c642c55edeaea34bfd95d71f0f09438e4e185\"" Sep 6 01:37:57.979033 env[1669]: time="2025-09-06T01:37:57.978989849Z" level=info msg="StartContainer for \"b13dd71d1ec01b06ba8a7b214e8c642c55edeaea34bfd95d71f0f09438e4e185\"" Sep 6 01:37:57.980001 env[1669]: time="2025-09-06T01:37:57.979957731Z" level=info msg="CreateContainer within sandbox \"6790cc7fde887accc9de64f8435e530807b8d5c34b80dc2e87a29f7588c1bbee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"161437c71460d4866cd163589d78dcb9dc044d609bfd64662cb92b4a87d20bba\"" Sep 6 01:37:57.980142 env[1669]: time="2025-09-06T01:37:57.980126672Z" level=info msg="StartContainer for \"161437c71460d4866cd163589d78dcb9dc044d609bfd64662cb92b4a87d20bba\"" Sep 6 01:37:57.980317 env[1669]: time="2025-09-06T01:37:57.980302588Z" level=info msg="CreateContainer within sandbox 
\"57ad5cca89e99ffc9a887dde9fb48ea672387c00738e9ed72e268fc338ca94d9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3ce737d9fab04fcb85340de0f55e8e9903f8976b608f67dfabe594a8146c6ec9\"" Sep 6 01:37:57.980458 env[1669]: time="2025-09-06T01:37:57.980446607Z" level=info msg="StartContainer for \"3ce737d9fab04fcb85340de0f55e8e9903f8976b608f67dfabe594a8146c6ec9\"" Sep 6 01:37:58.012860 env[1669]: time="2025-09-06T01:37:58.012820154Z" level=info msg="StartContainer for \"3ce737d9fab04fcb85340de0f55e8e9903f8976b608f67dfabe594a8146c6ec9\" returns successfully" Sep 6 01:37:58.012958 env[1669]: time="2025-09-06T01:37:58.012901390Z" level=info msg="StartContainer for \"b13dd71d1ec01b06ba8a7b214e8c642c55edeaea34bfd95d71f0f09438e4e185\" returns successfully" Sep 6 01:37:58.012958 env[1669]: time="2025-09-06T01:37:58.012929896Z" level=info msg="StartContainer for \"161437c71460d4866cd163589d78dcb9dc044d609bfd64662cb92b4a87d20bba\" returns successfully" Sep 6 01:37:58.434615 kubelet[2222]: I0906 01:37:58.434597 2222 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-02071fe470" Sep 6 01:37:58.459239 kubelet[2222]: E0906 01:37:58.459219 2222 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-02071fe470\" not found" node="ci-3510.3.8-n-02071fe470" Sep 6 01:37:58.562842 kubelet[2222]: I0906 01:37:58.562821 2222 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-02071fe470" Sep 6 01:37:58.680412 kubelet[2222]: I0906 01:37:58.680312 2222 apiserver.go:52] "Watching apiserver" Sep 6 01:37:58.729444 kubelet[2222]: I0906 01:37:58.729241 2222 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 01:37:58.783614 kubelet[2222]: E0906 01:37:58.783510 2222 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:37:58.783614 kubelet[2222]: E0906 01:37:58.783513 2222 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.8-n-02071fe470\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-02071fe470" Sep 6 01:37:58.783962 kubelet[2222]: E0906 01:37:58.783584 2222 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-02071fe470\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-02071fe470" Sep 6 01:37:59.799616 kubelet[2222]: W0906 01:37:59.799515 2222 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:37:59.931298 kubelet[2222]: W0906 01:37:59.931246 2222 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:38:00.852844 systemd[1]: Reloading. 
Sep 6 01:38:00.882213 /usr/lib/systemd/system-generators/torcx-generator[2552]: time="2025-09-06T01:38:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 01:38:00.882229 /usr/lib/systemd/system-generators/torcx-generator[2552]: time="2025-09-06T01:38:00Z" level=info msg="torcx already run" Sep 6 01:38:00.946247 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 01:38:00.946258 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 01:38:00.959029 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 01:38:01.018640 systemd[1]: Stopping kubelet.service... Sep 6 01:38:01.041819 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 01:38:01.041957 systemd[1]: Stopped kubelet.service. Sep 6 01:38:01.042850 systemd[1]: Starting kubelet.service... Sep 6 01:38:01.288281 systemd[1]: Started kubelet.service. Sep 6 01:38:01.309675 kubelet[2627]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:38:01.309675 kubelet[2627]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 6 01:38:01.309675 kubelet[2627]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 01:38:01.309929 kubelet[2627]: I0906 01:38:01.309709 2627 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 01:38:01.313074 kubelet[2627]: I0906 01:38:01.313040 2627 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 01:38:01.313074 kubelet[2627]: I0906 01:38:01.313051 2627 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 01:38:01.313200 kubelet[2627]: I0906 01:38:01.313169 2627 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 01:38:01.313897 kubelet[2627]: I0906 01:38:01.313887 2627 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 6 01:38:01.315038 kubelet[2627]: I0906 01:38:01.315029 2627 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 01:38:01.317046 kubelet[2627]: E0906 01:38:01.317028 2627 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 01:38:01.317100 kubelet[2627]: I0906 01:38:01.317049 2627 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Sep 6 01:38:01.352807 kubelet[2627]: I0906 01:38:01.352754 2627 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 01:38:01.353742 kubelet[2627]: I0906 01:38:01.353680 2627 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 01:38:01.353962 kubelet[2627]: I0906 01:38:01.353875 2627 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 01:38:01.354273 kubelet[2627]: I0906 01:38:01.353928 2627 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-02071fe470","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 6 01:38:01.354273 kubelet[2627]: I0906 01:38:01.354261 2627 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 01:38:01.354637 kubelet[2627]: I0906 01:38:01.354284 2627 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 01:38:01.354637 kubelet[2627]: I0906 01:38:01.354331 2627 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:38:01.354637 kubelet[2627]: I0906 01:38:01.354494 2627 kubelet.go:408] "Attempting to sync node with API server" Sep 6 01:38:01.354637 kubelet[2627]: I0906 01:38:01.354520 2627 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 01:38:01.354637 kubelet[2627]: I0906 01:38:01.354566 2627 kubelet.go:314] "Adding apiserver pod source" Sep 6 01:38:01.354637 kubelet[2627]: I0906 01:38:01.354586 2627 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 01:38:01.355606 kubelet[2627]: I0906 01:38:01.355525 2627 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 01:38:01.356484 kubelet[2627]: I0906 01:38:01.356455 2627 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 01:38:01.358042 kubelet[2627]: I0906 01:38:01.357990 2627 
server.go:1274] "Started kubelet" Sep 6 01:38:01.358827 kubelet[2627]: I0906 01:38:01.358698 2627 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 01:38:01.359097 kubelet[2627]: I0906 01:38:01.358971 2627 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 01:38:01.359709 kubelet[2627]: I0906 01:38:01.359619 2627 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 01:38:01.362311 kubelet[2627]: I0906 01:38:01.362269 2627 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 01:38:01.362549 kubelet[2627]: E0906 01:38:01.362427 2627 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 01:38:01.362549 kubelet[2627]: I0906 01:38:01.362464 2627 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 01:38:01.362549 kubelet[2627]: I0906 01:38:01.362513 2627 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 01:38:01.362549 kubelet[2627]: I0906 01:38:01.362550 2627 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 01:38:01.363030 kubelet[2627]: E0906 01:38:01.362538 2627 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-02071fe470\" not found" Sep 6 01:38:01.363030 kubelet[2627]: I0906 01:38:01.362902 2627 reconciler.go:26] "Reconciler: start to sync state" Sep 6 01:38:01.363637 kubelet[2627]: I0906 01:38:01.363599 2627 server.go:449] "Adding debug handlers to kubelet server" Sep 6 01:38:01.364711 kubelet[2627]: I0906 01:38:01.364663 2627 factory.go:221] Registration of the systemd container factory successfully Sep 6 01:38:01.364953 kubelet[2627]: I0906 01:38:01.364907 2627 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 01:38:01.365924 kubelet[2627]: I0906 01:38:01.365913 2627 factory.go:221] Registration of the containerd container factory successfully Sep 6 01:38:01.369368 kubelet[2627]: I0906 01:38:01.369336 2627 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 01:38:01.369915 kubelet[2627]: I0906 01:38:01.369906 2627 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 01:38:01.369915 kubelet[2627]: I0906 01:38:01.369917 2627 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 01:38:01.369986 kubelet[2627]: I0906 01:38:01.369926 2627 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 01:38:01.369986 kubelet[2627]: E0906 01:38:01.369947 2627 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 01:38:01.385623 kubelet[2627]: I0906 01:38:01.385607 2627 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 01:38:01.385623 kubelet[2627]: I0906 01:38:01.385617 2627 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 01:38:01.385623 kubelet[2627]: I0906 01:38:01.385627 2627 state_mem.go:36] "Initialized new in-memory state store" Sep 6 01:38:01.385752 kubelet[2627]: I0906 01:38:01.385707 2627 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 6 01:38:01.385752 kubelet[2627]: I0906 01:38:01.385714 2627 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 6 01:38:01.385752 kubelet[2627]: I0906 01:38:01.385726 2627 policy_none.go:49] "None policy: Start" Sep 6 01:38:01.385978 kubelet[2627]: I0906 01:38:01.385970 2627 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 01:38:01.386012 kubelet[2627]: I0906 01:38:01.385980 2627 state_mem.go:35] "Initializing new in-memory state store" Sep 6 01:38:01.386059 kubelet[2627]: I0906 01:38:01.386053 2627 state_mem.go:75] "Updated machine memory state" Sep 6 01:38:01.386638 kubelet[2627]: I0906 01:38:01.386630 2627 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 01:38:01.386714 kubelet[2627]: I0906 01:38:01.386709 2627 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 01:38:01.386737 kubelet[2627]: I0906 01:38:01.386717 2627 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 01:38:01.386807 kubelet[2627]: I0906 01:38:01.386799 2627 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 01:38:01.480083 kubelet[2627]: W0906 01:38:01.480006 2627 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:38:01.480491 kubelet[2627]: W0906 01:38:01.480254 2627 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:38:01.480491 kubelet[2627]: E0906 01:38:01.480451 2627 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-02071fe470\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.481273 kubelet[2627]: W0906 01:38:01.481220 2627 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:38:01.481469 kubelet[2627]: E0906 01:38:01.481423 2627 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.495970 kubelet[2627]: I0906 01:38:01.495915 2627 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.506811 kubelet[2627]: I0906 
01:38:01.506716 2627 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.507059 kubelet[2627]: I0906 01:38:01.506898 2627 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.664095 kubelet[2627]: I0906 01:38:01.663978 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b57a6311f4916e4bdb8ba148923d54c6-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-02071fe470\" (UID: \"b57a6311f4916e4bdb8ba148923d54c6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.664095 kubelet[2627]: I0906 01:38:01.664077 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b57a6311f4916e4bdb8ba148923d54c6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-02071fe470\" (UID: \"b57a6311f4916e4bdb8ba148923d54c6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.664550 kubelet[2627]: I0906 01:38:01.664225 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad4c284b65b03a83689a6d6282c4fdae-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" (UID: \"ad4c284b65b03a83689a6d6282c4fdae\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.664550 kubelet[2627]: I0906 01:38:01.664322 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad4c284b65b03a83689a6d6282c4fdae-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" (UID: \"ad4c284b65b03a83689a6d6282c4fdae\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.664550 kubelet[2627]: I0906 01:38:01.664415 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad4c284b65b03a83689a6d6282c4fdae-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" (UID: \"ad4c284b65b03a83689a6d6282c4fdae\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.664550 kubelet[2627]: I0906 01:38:01.664481 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b57a6311f4916e4bdb8ba148923d54c6-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-02071fe470\" (UID: \"b57a6311f4916e4bdb8ba148923d54c6\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.664550 kubelet[2627]: I0906 01:38:01.664531 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad4c284b65b03a83689a6d6282c4fdae-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" (UID: \"ad4c284b65b03a83689a6d6282c4fdae\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.665084 kubelet[2627]: I0906 01:38:01.664595 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad4c284b65b03a83689a6d6282c4fdae-kubeconfig\") pod 
\"kube-controller-manager-ci-3510.3.8-n-02071fe470\" (UID: \"ad4c284b65b03a83689a6d6282c4fdae\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.665084 kubelet[2627]: I0906 01:38:01.664648 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4f60ba8edcd8e428d346301365b662c-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-02071fe470\" (UID: \"b4f60ba8edcd8e428d346301365b662c\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-02071fe470" Sep 6 01:38:01.875225 sudo[2671]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 6 01:38:01.875926 sudo[2671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 6 01:38:02.254440 sudo[2671]: pam_unix(sudo:session): session closed for user root Sep 6 01:38:02.355462 kubelet[2627]: I0906 01:38:02.355448 2627 apiserver.go:52] "Watching apiserver" Sep 6 01:38:02.362909 kubelet[2627]: I0906 01:38:02.362894 2627 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 01:38:02.376925 kubelet[2627]: W0906 01:38:02.376912 2627 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:38:02.376999 kubelet[2627]: E0906 01:38:02.376944 2627 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.8-n-02071fe470\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-02071fe470" Sep 6 01:38:02.377030 kubelet[2627]: W0906 01:38:02.376912 2627 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:38:02.377057 kubelet[2627]: E0906 01:38:02.377040 2627 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.8-n-02071fe470\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-02071fe470" Sep 6 01:38:02.377181 kubelet[2627]: W0906 01:38:02.377174 2627 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 6 01:38:02.377214 kubelet[2627]: E0906 01:38:02.377194 2627 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.8-n-02071fe470\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" Sep 6 01:38:02.401430 kubelet[2627]: I0906 01:38:02.401400 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-02071fe470" podStartSLOduration=3.401391116 podStartE2EDuration="3.401391116s" podCreationTimestamp="2025-09-06 01:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:38:02.401346816 +0000 UTC m=+1.110455429" watchObservedRunningTime="2025-09-06 01:38:02.401391116 +0000 UTC m=+1.110499726" Sep 6 01:38:02.406062 kubelet[2627]: I0906 01:38:02.406044 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-02071fe470" podStartSLOduration=3.406037747 podStartE2EDuration="3.406037747s" podCreationTimestamp="2025-09-06 01:37:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:38:02.405974508 +0000 UTC m=+1.115083121" watchObservedRunningTime="2025-09-06 01:38:02.406037747 +0000 UTC m=+1.115146357" Sep 6 01:38:02.410171 kubelet[2627]: I0906 01:38:02.410155 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-02071fe470" podStartSLOduration=1.410148451 podStartE2EDuration="1.410148451s" podCreationTimestamp="2025-09-06 01:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:38:02.409974532 +0000 UTC m=+1.119083154" watchObservedRunningTime="2025-09-06 01:38:02.410148451 +0000 UTC m=+1.119257061" Sep 6 01:38:03.507856 sudo[1883]: pam_unix(sudo:session): session closed for user root Sep 6 01:38:03.508769 sshd[1878]: pam_unix(sshd:session): session closed for user core Sep 6 01:38:03.510160 systemd[1]: sshd@6-139.178.94.47:22-139.178.68.195:39110.service: Deactivated successfully. Sep 6 01:38:03.510806 systemd[1]: session-9.scope: Deactivated successfully. Sep 6 01:38:03.510839 systemd-logind[1711]: Session 9 logged out. Waiting for processes to exit. Sep 6 01:38:03.511308 systemd-logind[1711]: Removed session 9. Sep 6 01:38:06.019719 kubelet[2627]: I0906 01:38:06.019635 2627 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 6 01:38:06.020655 env[1669]: time="2025-09-06T01:38:06.020416989Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 01:38:06.021300 kubelet[2627]: I0906 01:38:06.020932 2627 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 6 01:38:07.105379 kubelet[2627]: I0906 01:38:07.105319 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t746v\" (UniqueName: \"kubernetes.io/projected/cf1307d7-7580-4a17-adad-c317574068e7-kube-api-access-t746v\") pod \"kube-proxy-zs7vd\" (UID: \"cf1307d7-7580-4a17-adad-c317574068e7\") " pod="kube-system/kube-proxy-zs7vd" Sep 6 01:38:07.105379 kubelet[2627]: I0906 01:38:07.105382 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-etc-cni-netd\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.105885 kubelet[2627]: I0906 01:38:07.105412 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-hostproc\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.105885 kubelet[2627]: I0906 01:38:07.105437 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-config-path\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.105885 kubelet[2627]: I0906 01:38:07.105473 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdc5w\" (UniqueName: 
\"kubernetes.io/projected/61ce73e3-8db6-471d-9eb6-51405d8fb048-kube-api-access-xdc5w\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.105885 kubelet[2627]: I0906 01:38:07.105496 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-xtables-lock\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.105885 kubelet[2627]: I0906 01:38:07.105516 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-cgroup\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.105885 kubelet[2627]: I0906 01:38:07.105536 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/61ce73e3-8db6-471d-9eb6-51405d8fb048-clustermesh-secrets\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.106142 kubelet[2627]: I0906 01:38:07.105559 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf1307d7-7580-4a17-adad-c317574068e7-lib-modules\") pod \"kube-proxy-zs7vd\" (UID: \"cf1307d7-7580-4a17-adad-c317574068e7\") " pod="kube-system/kube-proxy-zs7vd" Sep 6 01:38:07.106142 kubelet[2627]: I0906 01:38:07.105580 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-run\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.106142 kubelet[2627]: I0906 01:38:07.105611 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cni-path\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.106142 kubelet[2627]: I0906 01:38:07.105633 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf1307d7-7580-4a17-adad-c317574068e7-xtables-lock\") pod \"kube-proxy-zs7vd\" (UID: \"cf1307d7-7580-4a17-adad-c317574068e7\") " pod="kube-system/kube-proxy-zs7vd" Sep 6 01:38:07.106142 kubelet[2627]: I0906 01:38:07.105653 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-bpf-maps\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.106142 kubelet[2627]: I0906 01:38:07.105674 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-lib-modules\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.106402 
kubelet[2627]: I0906 01:38:07.105694 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf1307d7-7580-4a17-adad-c317574068e7-kube-proxy\") pod \"kube-proxy-zs7vd\" (UID: \"cf1307d7-7580-4a17-adad-c317574068e7\") " pod="kube-system/kube-proxy-zs7vd" Sep 6 01:38:07.106402 kubelet[2627]: I0906 01:38:07.105713 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-host-proc-sys-net\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.106402 kubelet[2627]: I0906 01:38:07.105734 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-host-proc-sys-kernel\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.106402 kubelet[2627]: I0906 01:38:07.105753 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/61ce73e3-8db6-471d-9eb6-51405d8fb048-hubble-tls\") pod \"cilium-cfr46\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " pod="kube-system/cilium-cfr46" Sep 6 01:38:07.207448 kubelet[2627]: I0906 01:38:07.207311 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01cafc41-26cc-4a36-94de-7431d82234c4-cilium-config-path\") pod \"cilium-operator-5d85765b45-8dszp\" (UID: \"01cafc41-26cc-4a36-94de-7431d82234c4\") " pod="kube-system/cilium-operator-5d85765b45-8dszp" Sep 6 01:38:07.207910 kubelet[2627]: I0906 01:38:07.207804 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgcx4\" (UniqueName: \"kubernetes.io/projected/01cafc41-26cc-4a36-94de-7431d82234c4-kube-api-access-dgcx4\") pod \"cilium-operator-5d85765b45-8dszp\" (UID: \"01cafc41-26cc-4a36-94de-7431d82234c4\") " pod="kube-system/cilium-operator-5d85765b45-8dszp" Sep 6 01:38:07.208114 kubelet[2627]: I0906 01:38:07.207991 2627 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 01:38:07.324041 env[1669]: time="2025-09-06T01:38:07.323911033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zs7vd,Uid:cf1307d7-7580-4a17-adad-c317574068e7,Namespace:kube-system,Attempt:0,}" Sep 6 01:38:07.331156 env[1669]: time="2025-09-06T01:38:07.331060791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cfr46,Uid:61ce73e3-8db6-471d-9eb6-51405d8fb048,Namespace:kube-system,Attempt:0,}" Sep 6 01:38:07.348142 env[1669]: time="2025-09-06T01:38:07.347999354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:38:07.348142 env[1669]: time="2025-09-06T01:38:07.348100893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:38:07.348635 env[1669]: time="2025-09-06T01:38:07.348170814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:38:07.348878 env[1669]: time="2025-09-06T01:38:07.348648517Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ebf06b4eba2482530e08204d7597e14bda69f7ce2e9336361c3f3cb2447b68f pid=2776 runtime=io.containerd.runc.v2 Sep 6 01:38:07.353665 env[1669]: time="2025-09-06T01:38:07.353533986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:38:07.353665 env[1669]: time="2025-09-06T01:38:07.353626786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:38:07.353949 env[1669]: time="2025-09-06T01:38:07.353665076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:38:07.354048 env[1669]: time="2025-09-06T01:38:07.353967311Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b pid=2789 runtime=io.containerd.runc.v2 Sep 6 01:38:07.399501 env[1669]: time="2025-09-06T01:38:07.399346538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zs7vd,Uid:cf1307d7-7580-4a17-adad-c317574068e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ebf06b4eba2482530e08204d7597e14bda69f7ce2e9336361c3f3cb2447b68f\"" Sep 6 01:38:07.401988 env[1669]: time="2025-09-06T01:38:07.401942483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cfr46,Uid:61ce73e3-8db6-471d-9eb6-51405d8fb048,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\"" Sep 6 01:38:07.402659 env[1669]: time="2025-09-06T01:38:07.402615532Z" level=info msg="CreateContainer within sandbox \"1ebf06b4eba2482530e08204d7597e14bda69f7ce2e9336361c3f3cb2447b68f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 01:38:07.403534 env[1669]: time="2025-09-06T01:38:07.403496481Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 01:38:07.412619 env[1669]: time="2025-09-06T01:38:07.412540509Z" level=info msg="CreateContainer within sandbox \"1ebf06b4eba2482530e08204d7597e14bda69f7ce2e9336361c3f3cb2447b68f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81ac51b08231ee9c70b861a9a7c2203d42198b1ea5706a0802b60784161a497e\"" Sep 6 01:38:07.413139 env[1669]: time="2025-09-06T01:38:07.413068389Z" level=info msg="StartContainer for \"81ac51b08231ee9c70b861a9a7c2203d42198b1ea5706a0802b60784161a497e\"" Sep 6 01:38:07.435256 env[1669]: time="2025-09-06T01:38:07.435173666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8dszp,Uid:01cafc41-26cc-4a36-94de-7431d82234c4,Namespace:kube-system,Attempt:0,}" Sep 6 01:38:07.448554 env[1669]: time="2025-09-06T01:38:07.448476970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:38:07.448554 env[1669]: time="2025-09-06T01:38:07.448530102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:38:07.448554 env[1669]: time="2025-09-06T01:38:07.448550413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:38:07.448882 env[1669]: time="2025-09-06T01:38:07.448825096Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2c15edeb4245628ddf2550e5d12e778f6b1ed8df4673a9bf7f65cae984186cf pid=2880 runtime=io.containerd.runc.v2 Sep 6 01:38:07.471395 env[1669]: time="2025-09-06T01:38:07.471337703Z" level=info msg="StartContainer for \"81ac51b08231ee9c70b861a9a7c2203d42198b1ea5706a0802b60784161a497e\" returns successfully" Sep 6 01:38:07.508904 env[1669]: time="2025-09-06T01:38:07.508854598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8dszp,Uid:01cafc41-26cc-4a36-94de-7431d82234c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2c15edeb4245628ddf2550e5d12e778f6b1ed8df4673a9bf7f65cae984186cf\"" Sep 6 01:38:08.409617 kubelet[2627]: I0906 01:38:08.409481 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zs7vd" podStartSLOduration=1.409439084 podStartE2EDuration="1.409439084s" podCreationTimestamp="2025-09-06 01:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:38:08.409303745 +0000 UTC m=+7.118412433" watchObservedRunningTime="2025-09-06 01:38:08.409439084 +0000 UTC m=+7.118547771" Sep 6 01:38:09.968896 update_engine[1660]: I0906 01:38:09.968848 1660 update_attempter.cc:509] Updating boot flags... Sep 6 01:38:11.372709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603411102.mount: Deactivated successfully. 
Sep 6 01:38:13.114223 env[1669]: time="2025-09-06T01:38:13.114166531Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:38:13.114835 env[1669]: time="2025-09-06T01:38:13.114795194Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:38:13.115572 env[1669]: time="2025-09-06T01:38:13.115530979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:38:13.116199 env[1669]: time="2025-09-06T01:38:13.116156528Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 6 01:38:13.116810 env[1669]: time="2025-09-06T01:38:13.116780989Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 01:38:13.117480 env[1669]: time="2025-09-06T01:38:13.117467528Z" level=info msg="CreateContainer within sandbox \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:38:13.139965 env[1669]: time="2025-09-06T01:38:13.139948756Z" level=info msg="CreateContainer within sandbox \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3\"" Sep 6 01:38:13.140236 env[1669]: time="2025-09-06T01:38:13.140224976Z" level=info msg="StartContainer for \"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3\"" Sep 6 01:38:13.141917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2757898234.mount: Deactivated successfully. Sep 6 01:38:13.160850 env[1669]: time="2025-09-06T01:38:13.160824598Z" level=info msg="StartContainer for \"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3\" returns successfully" Sep 6 01:38:14.144030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3-rootfs.mount: Deactivated successfully. Sep 6 01:38:15.261286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2258668267.mount: Deactivated successfully. 
Sep 6 01:38:16.095662 env[1669]: time="2025-09-06T01:38:16.095519273Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:38:16.096911 env[1669]: time="2025-09-06T01:38:16.096238341Z" level=info msg="shim disconnected" id=40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3 Sep 6 01:38:16.096911 env[1669]: time="2025-09-06T01:38:16.096342622Z" level=warning msg="cleaning up after shim disconnected" id=40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3 namespace=k8s.io Sep 6 01:38:16.096911 env[1669]: time="2025-09-06T01:38:16.096404144Z" level=info msg="cleaning up dead shim" Sep 6 01:38:16.098093 env[1669]: time="2025-09-06T01:38:16.098005323Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:38:16.102199 env[1669]: time="2025-09-06T01:38:16.102085367Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 01:38:16.103933 env[1669]: time="2025-09-06T01:38:16.103821125Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 6 01:38:16.108662 env[1669]: time="2025-09-06T01:38:16.108642175Z" level=info msg="CreateContainer within sandbox \"f2c15edeb4245628ddf2550e5d12e778f6b1ed8df4673a9bf7f65cae984186cf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 01:38:16.109786 env[1669]: time="2025-09-06T01:38:16.109751103Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:38:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3133 runtime=io.containerd.runc.v2\n" Sep 6 01:38:16.112598 env[1669]: time="2025-09-06T01:38:16.112584114Z" level=info msg="CreateContainer within sandbox \"f2c15edeb4245628ddf2550e5d12e778f6b1ed8df4673a9bf7f65cae984186cf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\"" Sep 6 01:38:16.112866 env[1669]: time="2025-09-06T01:38:16.112851883Z" level=info msg="StartContainer for \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\"" Sep 6 01:38:16.133716 env[1669]: time="2025-09-06T01:38:16.133693020Z" level=info msg="StartContainer for \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\" returns successfully" Sep 6 01:38:16.416826 env[1669]: time="2025-09-06T01:38:16.416676275Z" level=info msg="CreateContainer within sandbox \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 01:38:16.423038 env[1669]: time="2025-09-06T01:38:16.422984383Z" level=info msg="CreateContainer within sandbox \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092\"" Sep 6 01:38:16.423369 kubelet[2627]: I0906 
01:38:16.423296 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8dszp" podStartSLOduration=0.826597139 podStartE2EDuration="9.423277758s" podCreationTimestamp="2025-09-06 01:38:07 +0000 UTC" firstStartedPulling="2025-09-06 01:38:07.509516704 +0000 UTC m=+6.218625318" lastFinishedPulling="2025-09-06 01:38:16.106197268 +0000 UTC m=+14.815305937" observedRunningTime="2025-09-06 01:38:16.422915164 +0000 UTC m=+15.132023796" watchObservedRunningTime="2025-09-06 01:38:16.423277758 +0000 UTC m=+15.132386392" Sep 6 01:38:16.423866 env[1669]: time="2025-09-06T01:38:16.423437431Z" level=info msg="StartContainer for \"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092\"" Sep 6 01:38:16.427543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount972534613.mount: Deactivated successfully. Sep 6 01:38:16.450569 env[1669]: time="2025-09-06T01:38:16.450540078Z" level=info msg="StartContainer for \"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092\" returns successfully" Sep 6 01:38:16.462761 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 01:38:16.463059 systemd[1]: Stopped systemd-sysctl.service. Sep 6 01:38:16.463190 systemd[1]: Stopping systemd-sysctl.service... Sep 6 01:38:16.464125 systemd[1]: Starting systemd-sysctl.service... Sep 6 01:38:16.469232 systemd[1]: Finished systemd-sysctl.service. Sep 6 01:38:16.477615 env[1669]: time="2025-09-06T01:38:16.477582021Z" level=info msg="shim disconnected" id=a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092 Sep 6 01:38:16.477713 env[1669]: time="2025-09-06T01:38:16.477617336Z" level=warning msg="cleaning up after shim disconnected" id=a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092 namespace=k8s.io Sep 6 01:38:16.477713 env[1669]: time="2025-09-06T01:38:16.477628046Z" level=info msg="cleaning up dead shim" Sep 6 01:38:16.481032 env[1669]: time="2025-09-06T01:38:16.481007118Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:38:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3245 runtime=io.containerd.runc.v2\n" Sep 6 01:38:17.261601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092-rootfs.mount: Deactivated successfully. 
Sep 6 01:38:17.427525 env[1669]: time="2025-09-06T01:38:17.427396703Z" level=info msg="CreateContainer within sandbox \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 01:38:17.447196 env[1669]: time="2025-09-06T01:38:17.447097876Z" level=info msg="CreateContainer within sandbox \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380\"" Sep 6 01:38:17.448157 env[1669]: time="2025-09-06T01:38:17.448064211Z" level=info msg="StartContainer for \"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380\"" Sep 6 01:38:17.507290 env[1669]: time="2025-09-06T01:38:17.507258213Z" level=info msg="StartContainer for \"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380\" returns successfully" Sep 6 01:38:17.523190 env[1669]: time="2025-09-06T01:38:17.523117399Z" level=info msg="shim disconnected" id=fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380 Sep 6 01:38:17.523190 env[1669]: time="2025-09-06T01:38:17.523154179Z" level=warning msg="cleaning up after shim disconnected" id=fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380 namespace=k8s.io Sep 6 01:38:17.523190 env[1669]: time="2025-09-06T01:38:17.523163842Z" level=info msg="cleaning up dead shim" Sep 6 01:38:17.528210 env[1669]: time="2025-09-06T01:38:17.528163214Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:38:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3301 runtime=io.containerd.runc.v2\n" Sep 6 01:38:18.261195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380-rootfs.mount: Deactivated successfully. 
Sep 6 01:38:18.433474 env[1669]: time="2025-09-06T01:38:18.433346753Z" level=info msg="CreateContainer within sandbox \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 01:38:18.448980 env[1669]: time="2025-09-06T01:38:18.448898228Z" level=info msg="CreateContainer within sandbox \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987\"" Sep 6 01:38:18.449961 env[1669]: time="2025-09-06T01:38:18.449896109Z" level=info msg="StartContainer for \"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987\"" Sep 6 01:38:18.494525 env[1669]: time="2025-09-06T01:38:18.494493325Z" level=info msg="StartContainer for \"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987\" returns successfully" Sep 6 01:38:18.507232 env[1669]: time="2025-09-06T01:38:18.507193316Z" level=info msg="shim disconnected" id=55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987 Sep 6 01:38:18.507232 env[1669]: time="2025-09-06T01:38:18.507232635Z" level=warning msg="cleaning up after shim disconnected" id=55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987 namespace=k8s.io Sep 6 01:38:18.507421 env[1669]: time="2025-09-06T01:38:18.507242045Z" level=info msg="cleaning up dead shim" Sep 6 01:38:18.512375 env[1669]: time="2025-09-06T01:38:18.512303285Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:38:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3355 runtime=io.containerd.runc.v2\n" Sep 6 01:38:19.259628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987-rootfs.mount: Deactivated successfully. Sep 6 01:38:19.444912 env[1669]: time="2025-09-06T01:38:19.444814205Z" level=info msg="CreateContainer within sandbox \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 01:38:19.463072 env[1669]: time="2025-09-06T01:38:19.462929401Z" level=info msg="CreateContainer within sandbox \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\"" Sep 6 01:38:19.464175 env[1669]: time="2025-09-06T01:38:19.464085906Z" level=info msg="StartContainer for \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\"" Sep 6 01:38:19.515056 env[1669]: time="2025-09-06T01:38:19.514956981Z" level=info msg="StartContainer for \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\" returns successfully" Sep 6 01:38:19.591415 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Sep 6 01:38:19.636821 kubelet[2627]: I0906 01:38:19.636803 2627 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 01:38:19.701175 kubelet[2627]: I0906 01:38:19.701156 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78pvc\" (UniqueName: \"kubernetes.io/projected/0ae30cef-ee94-4529-b0a6-d45cd92d4261-kube-api-access-78pvc\") pod \"coredns-7c65d6cfc9-chrfr\" (UID: \"0ae30cef-ee94-4529-b0a6-d45cd92d4261\") " pod="kube-system/coredns-7c65d6cfc9-chrfr" Sep 6 01:38:19.701259 kubelet[2627]: I0906 01:38:19.701185 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ae30cef-ee94-4529-b0a6-d45cd92d4261-config-volume\") pod \"coredns-7c65d6cfc9-chrfr\" (UID: \"0ae30cef-ee94-4529-b0a6-d45cd92d4261\") " pod="kube-system/coredns-7c65d6cfc9-chrfr" Sep 6 01:38:19.701259 kubelet[2627]: I0906 01:38:19.701203 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d96fd1de-c9f3-40cc-8cbb-646730e3a2fd-config-volume\") pod \"coredns-7c65d6cfc9-9nf98\" (UID: \"d96fd1de-c9f3-40cc-8cbb-646730e3a2fd\") " pod="kube-system/coredns-7c65d6cfc9-9nf98" Sep 6 01:38:19.701259 kubelet[2627]: I0906 01:38:19.701221 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r4xs\" (UniqueName: \"kubernetes.io/projected/d96fd1de-c9f3-40cc-8cbb-646730e3a2fd-kube-api-access-4r4xs\") pod \"coredns-7c65d6cfc9-9nf98\" (UID: \"d96fd1de-c9f3-40cc-8cbb-646730e3a2fd\") " pod="kube-system/coredns-7c65d6cfc9-9nf98" Sep 6 01:38:19.739371 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Sep 6 01:38:19.953025 env[1669]: time="2025-09-06T01:38:19.952901092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-chrfr,Uid:0ae30cef-ee94-4529-b0a6-d45cd92d4261,Namespace:kube-system,Attempt:0,}" Sep 6 01:38:19.953025 env[1669]: time="2025-09-06T01:38:19.952953169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9nf98,Uid:d96fd1de-c9f3-40cc-8cbb-646730e3a2fd,Namespace:kube-system,Attempt:0,}" Sep 6 01:38:20.484666 kubelet[2627]: I0906 01:38:20.484507 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cfr46" podStartSLOduration=7.770799316 podStartE2EDuration="13.484462452s" podCreationTimestamp="2025-09-06 01:38:07 +0000 UTC" firstStartedPulling="2025-09-06 01:38:07.402942208 +0000 UTC m=+6.112050840" lastFinishedPulling="2025-09-06 01:38:13.116605363 +0000 UTC m=+11.825713976" observedRunningTime="2025-09-06 01:38:20.483937078 +0000 UTC m=+19.193045762" watchObservedRunningTime="2025-09-06 01:38:20.484462452 +0000 UTC m=+19.193571124" Sep 6 01:38:21.337580 systemd-networkd[1390]: cilium_host: Link UP Sep 6 01:38:21.337687 systemd-networkd[1390]: cilium_net: Link UP Sep 6 01:38:21.344842 systemd-networkd[1390]: cilium_net: Gained carrier Sep 6 01:38:21.351981 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 01:38:21.352018 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 01:38:21.352126 systemd-networkd[1390]: cilium_host: Gained carrier Sep 6 01:38:21.397127 systemd-networkd[1390]: cilium_vxlan: Link UP Sep 6 01:38:21.397132 systemd-networkd[1390]: cilium_vxlan: Gained carrier Sep 6 01:38:21.530455 kernel: NET: Registered PF_ALG protocol family Sep 6 01:38:22.200401 systemd-networkd[1390]: lxc_health: Link UP Sep 6 01:38:22.224277 systemd-networkd[1390]: lxc_health: Gained carrier Sep 6 01:38:22.224409 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 01:38:22.229439 systemd-networkd[1390]: cilium_net: Gained IPv6LL Sep 6 01:38:22.229595 systemd-networkd[1390]: cilium_host: Gained IPv6LL Sep 6 01:38:22.501718 systemd-networkd[1390]: lxc92de402b7134: Link UP Sep 6 01:38:22.545433 kernel: eth0: renamed from tmpd6d2a Sep 6 01:38:22.559431 kernel: eth0: renamed from tmp89528 Sep 6 01:38:22.567833 systemd-networkd[1390]: lxc22f249ad692e: Link UP Sep 6 01:38:22.574365 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:38:22.574421 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc92de402b7134: link becomes ready Sep 6 01:38:22.582443 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 01:38:22.596167 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc22f249ad692e: link becomes ready Sep 6 01:38:22.596607 systemd-networkd[1390]: lxc92de402b7134: Gained carrier Sep 6 01:38:22.596764 systemd-networkd[1390]: lxc22f249ad692e: Gained carrier Sep 6 01:38:22.613470 systemd-networkd[1390]: cilium_vxlan: Gained IPv6LL Sep 6 01:38:23.764497 systemd-networkd[1390]: lxc_health: Gained IPv6LL Sep 6 01:38:24.468454 systemd-networkd[1390]: lxc22f249ad692e: Gained IPv6LL Sep 6 01:38:24.596533 systemd-networkd[1390]: lxc92de402b7134: Gained IPv6LL Sep 6 01:38:24.893771 env[1669]: time="2025-09-06T01:38:24.893738339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:38:24.893771 env[1669]: time="2025-09-06T01:38:24.893762666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:38:24.893771 env[1669]: time="2025-09-06T01:38:24.893771419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:38:24.894054 env[1669]: time="2025-09-06T01:38:24.893852719Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/89528c97c03a6ec7477e34dbe54c2c7fdb85d8661b0126d4181572bd673f53e9 pid=4044 runtime=io.containerd.runc.v2 Sep 6 01:38:24.894539 env[1669]: time="2025-09-06T01:38:24.894505044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:38:24.894539 env[1669]: time="2025-09-06T01:38:24.894525061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:38:24.894539 env[1669]: time="2025-09-06T01:38:24.894535719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:38:24.894636 env[1669]: time="2025-09-06T01:38:24.894605750Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6d2a9baab48d544e42aa19b67caf84f91370df56843a72a0e962d64c266ffc7 pid=4052 runtime=io.containerd.runc.v2 Sep 6 01:38:24.902307 systemd[1]: run-containerd-runc-k8s.io-d6d2a9baab48d544e42aa19b67caf84f91370df56843a72a0e962d64c266ffc7-runc.Nf02WU.mount: Deactivated successfully. Sep 6 01:38:24.925599 env[1669]: time="2025-09-06T01:38:24.925566024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9nf98,Uid:d96fd1de-c9f3-40cc-8cbb-646730e3a2fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6d2a9baab48d544e42aa19b67caf84f91370df56843a72a0e962d64c266ffc7\"" Sep 6 01:38:24.926041 env[1669]: time="2025-09-06T01:38:24.926025768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-chrfr,Uid:0ae30cef-ee94-4529-b0a6-d45cd92d4261,Namespace:kube-system,Attempt:0,} returns sandbox id \"89528c97c03a6ec7477e34dbe54c2c7fdb85d8661b0126d4181572bd673f53e9\"" Sep 6 01:38:24.926797 env[1669]: time="2025-09-06T01:38:24.926779727Z" level=info msg="CreateContainer within sandbox \"d6d2a9baab48d544e42aa19b67caf84f91370df56843a72a0e962d64c266ffc7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:38:24.927183 env[1669]: time="2025-09-06T01:38:24.927169682Z" level=info msg="CreateContainer within sandbox \"89528c97c03a6ec7477e34dbe54c2c7fdb85d8661b0126d4181572bd673f53e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 6 01:38:24.932003 env[1669]: time="2025-09-06T01:38:24.931949976Z" level=info msg="CreateContainer within sandbox \"d6d2a9baab48d544e42aa19b67caf84f91370df56843a72a0e962d64c266ffc7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfa3a99a61e27c6740fae4d68d67c0c49dde2d475cf9ef43fb58575eb975ea7a\"" Sep 6 01:38:24.932230 env[1669]: time="2025-09-06T01:38:24.932190525Z" level=info msg="StartContainer for \"cfa3a99a61e27c6740fae4d68d67c0c49dde2d475cf9ef43fb58575eb975ea7a\"" Sep 6 01:38:24.950092 env[1669]: time="2025-09-06T01:38:24.950061420Z" level=info msg="CreateContainer within sandbox \"89528c97c03a6ec7477e34dbe54c2c7fdb85d8661b0126d4181572bd673f53e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"951ee0002fe55b2b347222fe36c18c7dddff016d82ff5bcb455395d5aa2cf68d\"" Sep 6 01:38:24.950427 env[1669]: time="2025-09-06T01:38:24.950375877Z" level=info msg="StartContainer for \"951ee0002fe55b2b347222fe36c18c7dddff016d82ff5bcb455395d5aa2cf68d\"" Sep 6 01:38:24.954428 env[1669]: time="2025-09-06T01:38:24.954398345Z" level=info msg="StartContainer for \"cfa3a99a61e27c6740fae4d68d67c0c49dde2d475cf9ef43fb58575eb975ea7a\" returns successfully" Sep 6 01:38:24.973498 env[1669]: time="2025-09-06T01:38:24.973473531Z" level=info msg="StartContainer for \"951ee0002fe55b2b347222fe36c18c7dddff016d82ff5bcb455395d5aa2cf68d\" returns successfully" Sep 6 01:38:25.479722 kubelet[2627]: I0906 01:38:25.479615 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9nf98" podStartSLOduration=18.479578406999998 podStartE2EDuration="18.479578407s" podCreationTimestamp="2025-09-06 01:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:38:25.478656172 +0000 UTC m=+24.187764937" watchObservedRunningTime="2025-09-06 01:38:25.479578407 +0000 UTC m=+24.188687063" Sep 6 01:38:25.520707 kubelet[2627]: I0906 01:38:25.520567 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-chrfr" podStartSLOduration=18.520515538 podStartE2EDuration="18.520515538s" podCreationTimestamp="2025-09-06 01:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:38:25.497333632 +0000 UTC m=+24.206442362" watchObservedRunningTime="2025-09-06 01:38:25.520515538 +0000 UTC m=+24.229624203" Sep 6 01:38:31.163453 kubelet[2627]: I0906 01:38:31.163321 2627 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 6 01:43:57.167725 systemd[1]: Started sshd@7-139.178.94.47:22-139.178.68.195:56834.service. Sep 6 01:43:57.256871 sshd[4261]: Accepted publickey for core from 139.178.68.195 port 56834 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:43:57.258101 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:43:57.261751 systemd-logind[1711]: New session 10 of user core. Sep 6 01:43:57.262511 systemd[1]: Started session-10.scope. Sep 6 01:43:57.357905 sshd[4261]: pam_unix(sshd:session): session closed for user core Sep 6 01:43:57.359551 systemd[1]: sshd@7-139.178.94.47:22-139.178.68.195:56834.service: Deactivated successfully. Sep 6 01:43:57.360202 systemd-logind[1711]: Session 10 logged out. Waiting for processes to exit. Sep 6 01:43:57.360210 systemd[1]: session-10.scope: Deactivated successfully. Sep 6 01:43:57.360814 systemd-logind[1711]: Removed session 10. Sep 6 01:44:02.360214 systemd[1]: Started sshd@8-139.178.94.47:22-139.178.68.195:52754.service. Sep 6 01:44:02.392302 sshd[4293]: Accepted publickey for core from 139.178.68.195 port 52754 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:02.393126 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:02.395979 systemd-logind[1711]: New session 11 of user core. Sep 6 01:44:02.396469 systemd[1]: Started session-11.scope. Sep 6 01:44:02.486105 sshd[4293]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:02.487472 systemd[1]: sshd@8-139.178.94.47:22-139.178.68.195:52754.service: Deactivated successfully. 
Sep 6 01:44:02.488057 systemd-logind[1711]: Session 11 logged out. Waiting for processes to exit. Sep 6 01:44:02.488098 systemd[1]: session-11.scope: Deactivated successfully. Sep 6 01:44:02.488465 systemd-logind[1711]: Removed session 11. Sep 6 01:44:07.492592 systemd[1]: Started sshd@9-139.178.94.47:22-139.178.68.195:52762.service. Sep 6 01:44:07.535659 sshd[4321]: Accepted publickey for core from 139.178.68.195 port 52762 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:07.536632 sshd[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:07.540024 systemd-logind[1711]: New session 12 of user core. Sep 6 01:44:07.540660 systemd[1]: Started session-12.scope. Sep 6 01:44:07.669029 sshd[4321]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:07.670313 systemd[1]: sshd@9-139.178.94.47:22-139.178.68.195:52762.service: Deactivated successfully. Sep 6 01:44:07.670956 systemd[1]: session-12.scope: Deactivated successfully. Sep 6 01:44:07.670978 systemd-logind[1711]: Session 12 logged out. Waiting for processes to exit. Sep 6 01:44:07.671365 systemd-logind[1711]: Removed session 12. Sep 6 01:44:12.675957 systemd[1]: Started sshd@10-139.178.94.47:22-139.178.68.195:59170.service. Sep 6 01:44:12.719371 sshd[4352]: Accepted publickey for core from 139.178.68.195 port 59170 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:12.720263 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:12.723272 systemd-logind[1711]: New session 13 of user core. Sep 6 01:44:12.723891 systemd[1]: Started session-13.scope. Sep 6 01:44:12.807849 sshd[4352]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:12.809412 systemd[1]: Started sshd@11-139.178.94.47:22-139.178.68.195:59172.service. Sep 6 01:44:12.809737 systemd[1]: sshd@10-139.178.94.47:22-139.178.68.195:59170.service: Deactivated successfully. Sep 6 01:44:12.810242 systemd-logind[1711]: Session 13 logged out. Waiting for processes to exit. Sep 6 01:44:12.810280 systemd[1]: session-13.scope: Deactivated successfully. Sep 6 01:44:12.810760 systemd-logind[1711]: Removed session 13. Sep 6 01:44:12.905418 sshd[4378]: Accepted publickey for core from 139.178.68.195 port 59172 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:12.907990 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:12.916094 systemd-logind[1711]: New session 14 of user core. Sep 6 01:44:12.917791 systemd[1]: Started session-14.scope. Sep 6 01:44:13.067204 sshd[4378]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:13.068846 systemd[1]: Started sshd@12-139.178.94.47:22-139.178.68.195:59186.service. Sep 6 01:44:13.069347 systemd[1]: sshd@11-139.178.94.47:22-139.178.68.195:59172.service: Deactivated successfully. Sep 6 01:44:13.069943 systemd-logind[1711]: Session 14 logged out. Waiting for processes to exit. Sep 6 01:44:13.069998 systemd[1]: session-14.scope: Deactivated successfully. Sep 6 01:44:13.070481 systemd-logind[1711]: Removed session 14. Sep 6 01:44:13.100563 sshd[4404]: Accepted publickey for core from 139.178.68.195 port 59186 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:13.101364 sshd[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:13.103725 systemd-logind[1711]: New session 15 of user core. Sep 6 01:44:13.104400 systemd[1]: Started session-15.scope. 
Sep 6 01:44:13.231741 sshd[4404]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:13.233072 systemd[1]: sshd@12-139.178.94.47:22-139.178.68.195:59186.service: Deactivated successfully. Sep 6 01:44:13.233741 systemd-logind[1711]: Session 15 logged out. Waiting for processes to exit. Sep 6 01:44:13.233746 systemd[1]: session-15.scope: Deactivated successfully. Sep 6 01:44:13.234234 systemd-logind[1711]: Removed session 15. Sep 6 01:44:18.239405 systemd[1]: Started sshd@13-139.178.94.47:22-139.178.68.195:59194.service. Sep 6 01:44:18.271129 sshd[4432]: Accepted publickey for core from 139.178.68.195 port 59194 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:18.271825 sshd[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:18.274423 systemd-logind[1711]: New session 16 of user core. Sep 6 01:44:18.274994 systemd[1]: Started session-16.scope. Sep 6 01:44:18.418941 sshd[4432]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:18.421324 systemd[1]: sshd@13-139.178.94.47:22-139.178.68.195:59194.service: Deactivated successfully. Sep 6 01:44:18.422370 systemd-logind[1711]: Session 16 logged out. Waiting for processes to exit. Sep 6 01:44:18.422489 systemd[1]: session-16.scope: Deactivated successfully. Sep 6 01:44:18.423284 systemd-logind[1711]: Removed session 16. Sep 6 01:44:23.425144 systemd[1]: Started sshd@14-139.178.94.47:22-139.178.68.195:42442.service. Sep 6 01:44:23.463471 sshd[4460]: Accepted publickey for core from 139.178.68.195 port 42442 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:23.464306 sshd[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:23.467151 systemd-logind[1711]: New session 17 of user core. Sep 6 01:44:23.467996 systemd[1]: Started session-17.scope. Sep 6 01:44:23.555205 sshd[4460]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:23.557385 systemd[1]: Started sshd@15-139.178.94.47:22-139.178.68.195:42456.service. Sep 6 01:44:23.557875 systemd[1]: sshd@14-139.178.94.47:22-139.178.68.195:42442.service: Deactivated successfully. Sep 6 01:44:23.558531 systemd-logind[1711]: Session 17 logged out. Waiting for processes to exit. Sep 6 01:44:23.558579 systemd[1]: session-17.scope: Deactivated successfully. Sep 6 01:44:23.559104 systemd-logind[1711]: Removed session 17. Sep 6 01:44:23.588807 sshd[4485]: Accepted publickey for core from 139.178.68.195 port 42456 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:23.589561 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:23.592100 systemd-logind[1711]: New session 18 of user core. Sep 6 01:44:23.592598 systemd[1]: Started session-18.scope. Sep 6 01:44:23.862072 sshd[4485]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:23.871050 systemd[1]: Started sshd@16-139.178.94.47:22-139.178.68.195:42466.service. Sep 6 01:44:23.873451 systemd[1]: sshd@15-139.178.94.47:22-139.178.68.195:42456.service: Deactivated successfully. Sep 6 01:44:23.876447 systemd-logind[1711]: Session 18 logged out. Waiting for processes to exit. Sep 6 01:44:23.876751 systemd[1]: session-18.scope: Deactivated successfully. Sep 6 01:44:23.879794 systemd-logind[1711]: Removed session 18. 
Sep 6 01:44:23.927587 sshd[4508]: Accepted publickey for core from 139.178.68.195 port 42466 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:23.928295 sshd[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:23.930743 systemd-logind[1711]: New session 19 of user core. Sep 6 01:44:23.931436 systemd[1]: Started session-19.scope. Sep 6 01:44:24.919471 sshd[4508]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:24.922291 systemd[1]: Started sshd@17-139.178.94.47:22-139.178.68.195:42474.service. Sep 6 01:44:24.922935 systemd[1]: sshd@16-139.178.94.47:22-139.178.68.195:42466.service: Deactivated successfully. Sep 6 01:44:24.924001 systemd-logind[1711]: Session 19 logged out. Waiting for processes to exit. Sep 6 01:44:24.924059 systemd[1]: session-19.scope: Deactivated successfully. Sep 6 01:44:24.925337 systemd-logind[1711]: Removed session 19. Sep 6 01:44:24.973802 sshd[4537]: Accepted publickey for core from 139.178.68.195 port 42474 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:24.975184 sshd[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:24.979504 systemd-logind[1711]: New session 20 of user core. Sep 6 01:44:24.980442 systemd[1]: Started session-20.scope. Sep 6 01:44:25.188465 sshd[4537]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:25.190371 systemd[1]: Started sshd@18-139.178.94.47:22-139.178.68.195:42482.service. Sep 6 01:44:25.190835 systemd[1]: sshd@17-139.178.94.47:22-139.178.68.195:42474.service: Deactivated successfully. Sep 6 01:44:25.191533 systemd-logind[1711]: Session 20 logged out. Waiting for processes to exit. Sep 6 01:44:25.191579 systemd[1]: session-20.scope: Deactivated successfully. Sep 6 01:44:25.192177 systemd-logind[1711]: Removed session 20. Sep 6 01:44:25.236613 sshd[4565]: Accepted publickey for core from 139.178.68.195 port 42482 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:25.237553 sshd[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:25.240779 systemd-logind[1711]: New session 21 of user core. Sep 6 01:44:25.241384 systemd[1]: Started session-21.scope. Sep 6 01:44:25.370240 sshd[4565]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:25.371625 systemd[1]: sshd@18-139.178.94.47:22-139.178.68.195:42482.service: Deactivated successfully. Sep 6 01:44:25.372146 systemd-logind[1711]: Session 21 logged out. Waiting for processes to exit. Sep 6 01:44:25.372184 systemd[1]: session-21.scope: Deactivated successfully. Sep 6 01:44:25.372547 systemd-logind[1711]: Removed session 21. Sep 6 01:44:30.373001 systemd[1]: Started sshd@19-139.178.94.47:22-139.178.68.195:36978.service. Sep 6 01:44:30.470522 sshd[4597]: Accepted publickey for core from 139.178.68.195 port 36978 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:30.472415 sshd[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:30.477472 systemd-logind[1711]: New session 22 of user core. Sep 6 01:44:30.478576 systemd[1]: Started session-22.scope. Sep 6 01:44:30.568972 sshd[4597]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:30.570468 systemd[1]: sshd@19-139.178.94.47:22-139.178.68.195:36978.service: Deactivated successfully. Sep 6 01:44:30.571078 systemd-logind[1711]: Session 22 logged out. Waiting for processes to exit. 
Sep 6 01:44:30.571098 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 01:44:30.571603 systemd-logind[1711]: Removed session 22. Sep 6 01:44:35.576295 systemd[1]: Started sshd@20-139.178.94.47:22-139.178.68.195:36992.service. Sep 6 01:44:35.620446 sshd[4624]: Accepted publickey for core from 139.178.68.195 port 36992 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:35.623803 sshd[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:35.634432 systemd-logind[1711]: New session 23 of user core. Sep 6 01:44:35.638344 systemd[1]: Started session-23.scope. Sep 6 01:44:35.730540 sshd[4624]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:35.732088 systemd[1]: sshd@20-139.178.94.47:22-139.178.68.195:36992.service: Deactivated successfully. Sep 6 01:44:35.732730 systemd-logind[1711]: Session 23 logged out. Waiting for processes to exit. Sep 6 01:44:35.732768 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 01:44:35.733245 systemd-logind[1711]: Removed session 23. Sep 6 01:44:40.738309 systemd[1]: Started sshd@21-139.178.94.47:22-139.178.68.195:35194.service. Sep 6 01:44:40.775338 sshd[4651]: Accepted publickey for core from 139.178.68.195 port 35194 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:40.776056 sshd[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:40.778295 systemd-logind[1711]: New session 24 of user core. Sep 6 01:44:40.779033 systemd[1]: Started session-24.scope. Sep 6 01:44:40.861652 sshd[4651]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:40.863300 systemd[1]: Started sshd@22-139.178.94.47:22-139.178.68.195:35204.service. Sep 6 01:44:40.863662 systemd[1]: sshd@21-139.178.94.47:22-139.178.68.195:35194.service: Deactivated successfully. Sep 6 01:44:40.864225 systemd-logind[1711]: Session 24 logged out. Waiting for processes to exit. Sep 6 01:44:40.864284 systemd[1]: session-24.scope: Deactivated successfully. Sep 6 01:44:40.864931 systemd-logind[1711]: Removed session 24. Sep 6 01:44:40.895798 sshd[4676]: Accepted publickey for core from 139.178.68.195 port 35204 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:40.896569 sshd[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:40.899234 systemd-logind[1711]: New session 25 of user core. Sep 6 01:44:40.899810 systemd[1]: Started session-25.scope. Sep 6 01:44:42.265492 env[1669]: time="2025-09-06T01:44:42.265351382Z" level=info msg="StopContainer for \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\" with timeout 30 (s)" Sep 6 01:44:42.266462 env[1669]: time="2025-09-06T01:44:42.266151274Z" level=info msg="Stop container \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\" with signal terminated" Sep 6 01:44:42.305547 env[1669]: time="2025-09-06T01:44:42.305486143Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 01:44:42.306398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf-rootfs.mount: Deactivated successfully. 
Sep 6 01:44:42.307959 env[1669]: time="2025-09-06T01:44:42.307928735Z" level=info msg="shim disconnected" id=3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf Sep 6 01:44:42.308047 env[1669]: time="2025-09-06T01:44:42.307964790Z" level=warning msg="cleaning up after shim disconnected" id=3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf namespace=k8s.io Sep 6 01:44:42.308047 env[1669]: time="2025-09-06T01:44:42.307976160Z" level=info msg="cleaning up dead shim" Sep 6 01:44:42.310833 env[1669]: time="2025-09-06T01:44:42.310803609Z" level=info msg="StopContainer for \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\" with timeout 2 (s)" Sep 6 01:44:42.311021 env[1669]: time="2025-09-06T01:44:42.311001961Z" level=info msg="Stop container \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\" with signal terminated" Sep 6 01:44:42.313930 env[1669]: time="2025-09-06T01:44:42.313899302Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:44:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4737 runtime=io.containerd.runc.v2\n" Sep 6 01:44:42.314878 env[1669]: time="2025-09-06T01:44:42.314835506Z" level=info msg="StopContainer for \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\" returns successfully" Sep 6 01:44:42.315329 env[1669]: time="2025-09-06T01:44:42.315299459Z" level=info msg="StopPodSandbox for \"f2c15edeb4245628ddf2550e5d12e778f6b1ed8df4673a9bf7f65cae984186cf\"" Sep 6 01:44:42.315441 env[1669]: time="2025-09-06T01:44:42.315373742Z" level=info msg="Container to stop \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:44:42.315941 systemd-networkd[1390]: lxc_health: Link DOWN Sep 6 01:44:42.315948 systemd-networkd[1390]: lxc_health: Lost carrier Sep 6 01:44:42.318105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2c15edeb4245628ddf2550e5d12e778f6b1ed8df4673a9bf7f65cae984186cf-shm.mount: Deactivated successfully. Sep 6 01:44:42.336527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2c15edeb4245628ddf2550e5d12e778f6b1ed8df4673a9bf7f65cae984186cf-rootfs.mount: Deactivated successfully. 
Sep 6 01:44:42.353795 env[1669]: time="2025-09-06T01:44:42.353666637Z" level=info msg="shim disconnected" id=f2c15edeb4245628ddf2550e5d12e778f6b1ed8df4673a9bf7f65cae984186cf Sep 6 01:44:42.354162 env[1669]: time="2025-09-06T01:44:42.353807308Z" level=warning msg="cleaning up after shim disconnected" id=f2c15edeb4245628ddf2550e5d12e778f6b1ed8df4673a9bf7f65cae984186cf namespace=k8s.io Sep 6 01:44:42.354162 env[1669]: time="2025-09-06T01:44:42.353851535Z" level=info msg="cleaning up dead shim" Sep 6 01:44:42.372571 env[1669]: time="2025-09-06T01:44:42.372456314Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:44:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4778 runtime=io.containerd.runc.v2\n" Sep 6 01:44:42.373230 env[1669]: time="2025-09-06T01:44:42.373128509Z" level=info msg="TearDown network for sandbox \"f2c15edeb4245628ddf2550e5d12e778f6b1ed8df4673a9bf7f65cae984186cf\" successfully" Sep 6 01:44:42.373230 env[1669]: time="2025-09-06T01:44:42.373196488Z" level=info msg="StopPodSandbox for \"f2c15edeb4245628ddf2550e5d12e778f6b1ed8df4673a9bf7f65cae984186cf\" returns successfully" Sep 6 01:44:42.412681 env[1669]: time="2025-09-06T01:44:42.412553754Z" level=info msg="shim disconnected" id=1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a Sep 6 01:44:42.412681 env[1669]: time="2025-09-06T01:44:42.412659094Z" level=warning msg="cleaning up after shim disconnected" id=1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a namespace=k8s.io Sep 6 01:44:42.413047 env[1669]: time="2025-09-06T01:44:42.412692315Z" level=info msg="cleaning up dead shim" Sep 6 01:44:42.414557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a-rootfs.mount: Deactivated successfully. 
Sep 6 01:44:42.425199 env[1669]: time="2025-09-06T01:44:42.425106522Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:44:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4806 runtime=io.containerd.runc.v2\n" Sep 6 01:44:42.426713 env[1669]: time="2025-09-06T01:44:42.426625820Z" level=info msg="StopContainer for \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\" returns successfully" Sep 6 01:44:42.427433 env[1669]: time="2025-09-06T01:44:42.427353791Z" level=info msg="StopPodSandbox for \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\"" Sep 6 01:44:42.427592 env[1669]: time="2025-09-06T01:44:42.427488327Z" level=info msg="Container to stop \"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:44:42.427592 env[1669]: time="2025-09-06T01:44:42.427523317Z" level=info msg="Container to stop \"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:44:42.427592 env[1669]: time="2025-09-06T01:44:42.427545776Z" level=info msg="Container to stop \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:44:42.427592 env[1669]: time="2025-09-06T01:44:42.427568786Z" level=info msg="Container to stop \"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:44:42.427900 env[1669]: time="2025-09-06T01:44:42.427590101Z" level=info msg="Container to stop \"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 01:44:42.460183 env[1669]: time="2025-09-06T01:44:42.460086452Z" level=info msg="shim disconnected" id=fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b Sep 6 01:44:42.460512 env[1669]: time="2025-09-06T01:44:42.460189685Z" level=warning msg="cleaning up after shim disconnected" id=fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b namespace=k8s.io Sep 6 01:44:42.460512 env[1669]: time="2025-09-06T01:44:42.460224502Z" level=info msg="cleaning up dead shim" Sep 6 01:44:42.472590 env[1669]: time="2025-09-06T01:44:42.472495830Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:44:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4838 runtime=io.containerd.runc.v2\n" Sep 6 01:44:42.473092 env[1669]: time="2025-09-06T01:44:42.473009097Z" level=info msg="TearDown network for sandbox \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" successfully" Sep 6 01:44:42.473092 env[1669]: time="2025-09-06T01:44:42.473057286Z" level=info msg="StopPodSandbox for \"fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b\" returns successfully" Sep 6 01:44:42.519388 kubelet[2627]: I0906 01:44:42.519184 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgcx4\" (UniqueName: \"kubernetes.io/projected/01cafc41-26cc-4a36-94de-7431d82234c4-kube-api-access-dgcx4\") pod \"01cafc41-26cc-4a36-94de-7431d82234c4\" (UID: \"01cafc41-26cc-4a36-94de-7431d82234c4\") " Sep 6 01:44:42.519388 kubelet[2627]: I0906 01:44:42.519311 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/01cafc41-26cc-4a36-94de-7431d82234c4-cilium-config-path\") pod \"01cafc41-26cc-4a36-94de-7431d82234c4\" (UID: \"01cafc41-26cc-4a36-94de-7431d82234c4\") " Sep 6 01:44:42.524804 kubelet[2627]: I0906 01:44:42.524684 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01cafc41-26cc-4a36-94de-7431d82234c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "01cafc41-26cc-4a36-94de-7431d82234c4" (UID: "01cafc41-26cc-4a36-94de-7431d82234c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 01:44:42.526153 kubelet[2627]: I0906 01:44:42.526049 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cafc41-26cc-4a36-94de-7431d82234c4-kube-api-access-dgcx4" (OuterVolumeSpecName: "kube-api-access-dgcx4") pod "01cafc41-26cc-4a36-94de-7431d82234c4" (UID: "01cafc41-26cc-4a36-94de-7431d82234c4"). InnerVolumeSpecName "kube-api-access-dgcx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:44:42.583511 kubelet[2627]: I0906 01:44:42.583440 2627 scope.go:117] "RemoveContainer" containerID="1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a" Sep 6 01:44:42.586106 env[1669]: time="2025-09-06T01:44:42.586010729Z" level=info msg="RemoveContainer for \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\"" Sep 6 01:44:42.591647 env[1669]: time="2025-09-06T01:44:42.591569352Z" level=info msg="RemoveContainer for \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\" returns successfully" Sep 6 01:44:42.592097 kubelet[2627]: I0906 01:44:42.592042 2627 scope.go:117] "RemoveContainer" containerID="55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987" Sep 6 01:44:42.594699 env[1669]: time="2025-09-06T01:44:42.594621146Z" level=info msg="RemoveContainer for \"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987\"" Sep 6 01:44:42.599396 env[1669]: time="2025-09-06T01:44:42.599255810Z" level=info msg="RemoveContainer for \"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987\" returns successfully" Sep 6 01:44:42.599752 kubelet[2627]: I0906 01:44:42.599676 2627 scope.go:117] "RemoveContainer" containerID="fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380" Sep 6 01:44:42.602158 env[1669]: time="2025-09-06T01:44:42.602084832Z" level=info msg="RemoveContainer for \"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380\"" Sep 6 01:44:42.607061 env[1669]: time="2025-09-06T01:44:42.606974513Z" level=info msg="RemoveContainer for \"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380\" returns successfully" Sep 6 01:44:42.607549 kubelet[2627]: I0906 01:44:42.607499 2627 scope.go:117] "RemoveContainer" containerID="a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092" Sep 6 01:44:42.610425 env[1669]: time="2025-09-06T01:44:42.610324401Z" level=info msg="RemoveContainer for \"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092\"" Sep 6 01:44:42.614609 env[1669]: time="2025-09-06T01:44:42.614504461Z" level=info msg="RemoveContainer for \"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092\" returns successfully" Sep 6 01:44:42.614866 kubelet[2627]: I0906 01:44:42.614819 2627 scope.go:117] "RemoveContainer" containerID="40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3" Sep 6 01:44:42.617247 env[1669]: time="2025-09-06T01:44:42.617164656Z" level=info 
msg="RemoveContainer for \"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3\"" Sep 6 01:44:42.620573 kubelet[2627]: I0906 01:44:42.620509 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/61ce73e3-8db6-471d-9eb6-51405d8fb048-clustermesh-secrets\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.620573 kubelet[2627]: I0906 01:44:42.620570 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/61ce73e3-8db6-471d-9eb6-51405d8fb048-hubble-tls\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.620819 kubelet[2627]: I0906 01:44:42.620619 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cni-path\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.620819 kubelet[2627]: I0906 01:44:42.620658 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-bpf-maps\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.620819 kubelet[2627]: I0906 01:44:42.620702 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdc5w\" (UniqueName: \"kubernetes.io/projected/61ce73e3-8db6-471d-9eb6-51405d8fb048-kube-api-access-xdc5w\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.620819 kubelet[2627]: I0906 01:44:42.620747 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-xtables-lock\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.620819 kubelet[2627]: I0906 01:44:42.620741 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cni-path" (OuterVolumeSpecName: "cni-path") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:42.620819 kubelet[2627]: I0906 01:44:42.620786 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-hostproc\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.621309 kubelet[2627]: I0906 01:44:42.620796 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:42.621309 kubelet[2627]: I0906 01:44:42.620823 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:42.621309 kubelet[2627]: I0906 01:44:42.620874 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-run\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.621309 kubelet[2627]: I0906 01:44:42.620870 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-hostproc" (OuterVolumeSpecName: "hostproc") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:42.621309 kubelet[2627]: I0906 01:44:42.620906 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:42.621754 env[1669]: time="2025-09-06T01:44:42.620987404Z" level=info msg="RemoveContainer for \"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3\" returns successfully" Sep 6 01:44:42.621844 kubelet[2627]: I0906 01:44:42.620917 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-host-proc-sys-net\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.621844 kubelet[2627]: I0906 01:44:42.620964 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:42.621844 kubelet[2627]: I0906 01:44:42.621008 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-host-proc-sys-kernel\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.621844 kubelet[2627]: I0906 01:44:42.621076 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-config-path\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.621844 kubelet[2627]: I0906 01:44:42.621111 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:42.622202 env[1669]: time="2025-09-06T01:44:42.621630064Z" level=error msg="ContainerStatus for \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\": not found" Sep 6 01:44:42.622292 kubelet[2627]: I0906 01:44:42.621140 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-etc-cni-netd\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.622292 kubelet[2627]: I0906 01:44:42.621202 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-lib-modules\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.622292 kubelet[2627]: I0906 01:44:42.621226 2627 scope.go:117] "RemoveContainer" containerID="1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a" Sep 6 01:44:42.622292 kubelet[2627]: I0906 01:44:42.621232 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:42.622292 kubelet[2627]: I0906 01:44:42.621262 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-cgroup\") pod \"61ce73e3-8db6-471d-9eb6-51405d8fb048\" (UID: \"61ce73e3-8db6-471d-9eb6-51405d8fb048\") " Sep 6 01:44:42.622292 kubelet[2627]: I0906 01:44:42.621325 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:42.622755 kubelet[2627]: I0906 01:44:42.621395 2627 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgcx4\" (UniqueName: \"kubernetes.io/projected/01cafc41-26cc-4a36-94de-7431d82234c4-kube-api-access-dgcx4\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.622755 kubelet[2627]: I0906 01:44:42.621437 2627 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-hostproc\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.622755 kubelet[2627]: I0906 01:44:42.621460 2627 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-host-proc-sys-net\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.622755 kubelet[2627]: I0906 01:44:42.621488 2627 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.622755 kubelet[2627]: I0906 01:44:42.621525 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01cafc41-26cc-4a36-94de-7431d82234c4-cilium-config-path\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.622755 kubelet[2627]: I0906 01:44:42.621433 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:42.622755 kubelet[2627]: I0906 01:44:42.621560 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-run\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.623307 kubelet[2627]: I0906 01:44:42.621596 2627 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-etc-cni-netd\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.623307 kubelet[2627]: I0906 01:44:42.621624 2627 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-xtables-lock\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.623307 kubelet[2627]: I0906 01:44:42.621643 2627 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cni-path\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.623307 kubelet[2627]: I0906 01:44:42.621660 2627 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-bpf-maps\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.623307 kubelet[2627]: E0906 01:44:42.621997 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\": not found" containerID="1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a" Sep 6 01:44:42.623307 kubelet[2627]: I0906 01:44:42.622073 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a"} err="failed to get container status \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"1020c3758bb88498794b63a7254c489a2c2b9ee298c109da7704274b7c8c0a5a\": not found" Sep 6 01:44:42.623307 kubelet[2627]: I0906 01:44:42.622281 2627 scope.go:117] "RemoveContainer" containerID="55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987" Sep 6 01:44:42.623867 env[1669]: time="2025-09-06T01:44:42.622685847Z" level=error msg="ContainerStatus for \"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987\": not found" Sep 6 01:44:42.623867 env[1669]: time="2025-09-06T01:44:42.623333231Z" level=error msg="ContainerStatus for \"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380\": not found" Sep 6 01:44:42.624030 kubelet[2627]: E0906 01:44:42.622965 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987\": not found" containerID="55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987" Sep 6 
01:44:42.624030 kubelet[2627]: I0906 01:44:42.623009 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987"} err="failed to get container status \"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987\": rpc error: code = NotFound desc = an error occurred when try to find container \"55cd79060374140f32e2bb918317fed729a6577efb0d243904311f54aaee5987\": not found" Sep 6 01:44:42.624030 kubelet[2627]: I0906 01:44:42.623044 2627 scope.go:117] "RemoveContainer" containerID="fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380" Sep 6 01:44:42.624030 kubelet[2627]: E0906 01:44:42.623630 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380\": not found" containerID="fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380" Sep 6 01:44:42.624030 kubelet[2627]: I0906 01:44:42.623672 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380"} err="failed to get container status \"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe7fa54bc2ff37836f68acc5be99160568a4aa33cfad9c97aa91ce68a9853380\": not found" Sep 6 01:44:42.624030 kubelet[2627]: I0906 01:44:42.623701 2627 scope.go:117] "RemoveContainer" containerID="a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092" Sep 6 01:44:42.624534 kubelet[2627]: E0906 01:44:42.624232 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092\": not found" containerID="a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092" Sep 6 01:44:42.624534 kubelet[2627]: I0906 01:44:42.624269 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092"} err="failed to get container status \"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092\": rpc error: code = NotFound desc = an error occurred when try to find container \"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092\": not found" Sep 6 01:44:42.624534 kubelet[2627]: I0906 01:44:42.624298 2627 scope.go:117] "RemoveContainer" containerID="40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3" Sep 6 01:44:42.624801 env[1669]: time="2025-09-06T01:44:42.623997240Z" level=error msg="ContainerStatus for \"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a49e4ceedc91f5226190e755e4ee4f3a8786281334eae802d664bd14721b2092\": not found" Sep 6 01:44:42.624801 env[1669]: time="2025-09-06T01:44:42.624639887Z" level=error msg="ContainerStatus for \"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3\": not found" Sep 6 01:44:42.625009 kubelet[2627]: E0906 01:44:42.624925 2627 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3\": not found" containerID="40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3" Sep 6 01:44:42.625103 kubelet[2627]: I0906 01:44:42.624986 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3"} err="failed to get container status \"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"40fb592d3684ff59b28cfc31882792fb721ed5daab570d0b80816683c25d97b3\": not found" Sep 6 01:44:42.625103 kubelet[2627]: I0906 01:44:42.625036 2627 scope.go:117] "RemoveContainer" containerID="3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf" Sep 6 01:44:42.625832 kubelet[2627]: I0906 01:44:42.625785 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ce73e3-8db6-471d-9eb6-51405d8fb048-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:44:42.625967 kubelet[2627]: I0906 01:44:42.625882 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ce73e3-8db6-471d-9eb6-51405d8fb048-kube-api-access-xdc5w" (OuterVolumeSpecName: "kube-api-access-xdc5w") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "kube-api-access-xdc5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:44:42.626057 kubelet[2627]: I0906 01:44:42.625955 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61ce73e3-8db6-471d-9eb6-51405d8fb048-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 01:44:42.626057 kubelet[2627]: I0906 01:44:42.626014 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "61ce73e3-8db6-471d-9eb6-51405d8fb048" (UID: "61ce73e3-8db6-471d-9eb6-51405d8fb048"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 01:44:42.626994 env[1669]: time="2025-09-06T01:44:42.626927467Z" level=info msg="RemoveContainer for \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\"" Sep 6 01:44:42.630002 env[1669]: time="2025-09-06T01:44:42.629920443Z" level=info msg="RemoveContainer for \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\" returns successfully" Sep 6 01:44:42.630250 kubelet[2627]: I0906 01:44:42.630213 2627 scope.go:117] "RemoveContainer" containerID="3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf" Sep 6 01:44:42.630726 env[1669]: time="2025-09-06T01:44:42.630562560Z" level=error msg="ContainerStatus for \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\": not found" Sep 6 01:44:42.630985 kubelet[2627]: E0906 01:44:42.630899 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\": not found" containerID="3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf" Sep 6 01:44:42.630985 kubelet[2627]: I0906 01:44:42.630963 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf"} err="failed to get container status \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e3524d41183320147b10dd8aa2d58e86171c9d6fe46e061efab1500e61754bf\": not found" Sep 6 01:44:42.722202 kubelet[2627]: I0906 01:44:42.722090 2627 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdc5w\" (UniqueName: \"kubernetes.io/projected/61ce73e3-8db6-471d-9eb6-51405d8fb048-kube-api-access-xdc5w\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.722202 kubelet[2627]: I0906 01:44:42.722165 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-config-path\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.722202 kubelet[2627]: I0906 01:44:42.722200 2627 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-lib-modules\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.722759 kubelet[2627]: I0906 01:44:42.722229 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/61ce73e3-8db6-471d-9eb6-51405d8fb048-cilium-cgroup\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.722759 kubelet[2627]: I0906 01:44:42.722260 2627 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/61ce73e3-8db6-471d-9eb6-51405d8fb048-clustermesh-secrets\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:42.722759 kubelet[2627]: I0906 01:44:42.722290 2627 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/61ce73e3-8db6-471d-9eb6-51405d8fb048-hubble-tls\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 
01:44:43.288100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b-rootfs.mount: Deactivated successfully. Sep 6 01:44:43.288173 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbc6e6bc4873c6a38b1988273caf4aa52ea5cc885703f1e8c2fbb7333e71e14b-shm.mount: Deactivated successfully. Sep 6 01:44:43.288227 systemd[1]: var-lib-kubelet-pods-01cafc41\x2d26cc\x2d4a36\x2d94de\x2d7431d82234c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddgcx4.mount: Deactivated successfully. Sep 6 01:44:43.288274 systemd[1]: var-lib-kubelet-pods-61ce73e3\x2d8db6\x2d471d\x2d9eb6\x2d51405d8fb048-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxdc5w.mount: Deactivated successfully. Sep 6 01:44:43.288329 systemd[1]: var-lib-kubelet-pods-61ce73e3\x2d8db6\x2d471d\x2d9eb6\x2d51405d8fb048-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 01:44:43.288402 systemd[1]: var-lib-kubelet-pods-61ce73e3\x2d8db6\x2d471d\x2d9eb6\x2d51405d8fb048-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 01:44:43.375244 kubelet[2627]: I0906 01:44:43.375130 2627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01cafc41-26cc-4a36-94de-7431d82234c4" path="/var/lib/kubelet/pods/01cafc41-26cc-4a36-94de-7431d82234c4/volumes" Sep 6 01:44:43.376388 kubelet[2627]: I0906 01:44:43.376300 2627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ce73e3-8db6-471d-9eb6-51405d8fb048" path="/var/lib/kubelet/pods/61ce73e3-8db6-471d-9eb6-51405d8fb048/volumes" Sep 6 01:44:44.208467 sshd[4676]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:44.213684 systemd[1]: Started sshd@23-139.178.94.47:22-139.178.68.195:35208.service. Sep 6 01:44:44.213964 systemd[1]: sshd@22-139.178.94.47:22-139.178.68.195:35204.service: Deactivated successfully. Sep 6 01:44:44.214556 systemd-logind[1711]: Session 25 logged out. Waiting for processes to exit. Sep 6 01:44:44.214560 systemd[1]: session-25.scope: Deactivated successfully. Sep 6 01:44:44.214998 systemd-logind[1711]: Removed session 25. Sep 6 01:44:44.251028 sshd[4854]: Accepted publickey for core from 139.178.68.195 port 35208 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:44.251885 sshd[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:44.254612 systemd-logind[1711]: New session 26 of user core. Sep 6 01:44:44.255132 systemd[1]: Started session-26.scope. Sep 6 01:44:44.729035 sshd[4854]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:44.730343 systemd[1]: sshd@23-139.178.94.47:22-139.178.68.195:35208.service: Deactivated successfully. Sep 6 01:44:44.731057 systemd-logind[1711]: Session 26 logged out. Waiting for processes to exit. Sep 6 01:44:44.732130 systemd[1]: Started sshd@24-139.178.94.47:22-139.178.68.195:35220.service. Sep 6 01:44:44.732538 systemd[1]: session-26.scope: Deactivated successfully. Sep 6 01:44:44.732939 systemd-logind[1711]: Removed session 26. 
Sep 6 01:44:44.737347 kubelet[2627]: E0906 01:44:44.737328 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61ce73e3-8db6-471d-9eb6-51405d8fb048" containerName="mount-cgroup" Sep 6 01:44:44.737347 kubelet[2627]: E0906 01:44:44.737346 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61ce73e3-8db6-471d-9eb6-51405d8fb048" containerName="mount-bpf-fs" Sep 6 01:44:44.737731 kubelet[2627]: E0906 01:44:44.737353 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61ce73e3-8db6-471d-9eb6-51405d8fb048" containerName="cilium-agent" Sep 6 01:44:44.737731 kubelet[2627]: E0906 01:44:44.737366 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="01cafc41-26cc-4a36-94de-7431d82234c4" containerName="cilium-operator" Sep 6 01:44:44.737731 kubelet[2627]: E0906 01:44:44.737373 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61ce73e3-8db6-471d-9eb6-51405d8fb048" containerName="apply-sysctl-overwrites" Sep 6 01:44:44.737731 kubelet[2627]: E0906 01:44:44.737379 2627 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61ce73e3-8db6-471d-9eb6-51405d8fb048" containerName="clean-cilium-state" Sep 6 01:44:44.737731 kubelet[2627]: I0906 01:44:44.737402 2627 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ce73e3-8db6-471d-9eb6-51405d8fb048" containerName="cilium-agent" Sep 6 01:44:44.737731 kubelet[2627]: I0906 01:44:44.737409 2627 memory_manager.go:354] "RemoveStaleState removing state" podUID="01cafc41-26cc-4a36-94de-7431d82234c4" containerName="cilium-operator" Sep 6 01:44:44.785019 sshd[4880]: Accepted publickey for core from 139.178.68.195 port 35220 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:44.786419 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:44.790410 systemd-logind[1711]: New session 27 of user core. Sep 6 01:44:44.791308 systemd[1]: Started session-27.scope. 
Sep 6 01:44:44.836928 kubelet[2627]: I0906 01:44:44.836849 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6619e450-29bc-4959-9d13-ad7bc6ae3902-clustermesh-secrets\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.837212 kubelet[2627]: I0906 01:44:44.836949 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-host-proc-sys-net\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.837212 kubelet[2627]: I0906 01:44:44.837010 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6619e450-29bc-4959-9d13-ad7bc6ae3902-hubble-tls\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.837212 kubelet[2627]: I0906 01:44:44.837113 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-etc-cni-netd\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.837212 kubelet[2627]: I0906 01:44:44.837170 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-xtables-lock\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.837683 kubelet[2627]: I0906 01:44:44.837216 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-lib-modules\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.837683 kubelet[2627]: I0906 01:44:44.837264 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-cgroup\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.837683 kubelet[2627]: I0906 01:44:44.837311 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-ipsec-secrets\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.837683 kubelet[2627]: I0906 01:44:44.837457 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hrxm\" (UniqueName: \"kubernetes.io/projected/6619e450-29bc-4959-9d13-ad7bc6ae3902-kube-api-access-5hrxm\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.837683 kubelet[2627]: I0906 01:44:44.837562 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-config-path\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.837683 kubelet[2627]: I0906 01:44:44.837618 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cni-path\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.838272 kubelet[2627]: I0906 01:44:44.837674 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-run\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.838272 kubelet[2627]: I0906 01:44:44.837720 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-hostproc\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.838272 kubelet[2627]: I0906 01:44:44.837765 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-host-proc-sys-kernel\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.838272 kubelet[2627]: I0906 01:44:44.837811 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-bpf-maps\") pod \"cilium-kl2l6\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " pod="kube-system/cilium-kl2l6" Sep 6 01:44:44.939063 sshd[4880]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:44.947539 systemd[1]: Started sshd@25-139.178.94.47:22-139.178.68.195:35236.service. Sep 6 01:44:44.956372 kubelet[2627]: E0906 01:44:44.956296 2627 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-5hrxm], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-kl2l6" podUID="6619e450-29bc-4959-9d13-ad7bc6ae3902" Sep 6 01:44:44.967707 systemd[1]: sshd@24-139.178.94.47:22-139.178.68.195:35220.service: Deactivated successfully. Sep 6 01:44:44.968336 systemd[1]: session-27.scope: Deactivated successfully. Sep 6 01:44:44.968897 systemd-logind[1711]: Session 27 logged out. Waiting for processes to exit. Sep 6 01:44:44.969926 systemd-logind[1711]: Removed session 27. Sep 6 01:44:44.996466 sshd[4907]: Accepted publickey for core from 139.178.68.195 port 35236 ssh2: RSA SHA256:YKMZf0IgmLK+SGzuMVBitsBJlSZ/TMdY+tuptaSKkE0 Sep 6 01:44:44.997408 sshd[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 01:44:45.000014 systemd-logind[1711]: New session 28 of user core. Sep 6 01:44:45.000590 systemd[1]: Started session-28.scope. 
Sep 6 01:44:45.645127 kubelet[2627]: I0906 01:44:45.644990 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-lib-modules\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.645127 kubelet[2627]: I0906 01:44:45.645087 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-bpf-maps\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.645642 kubelet[2627]: I0906 01:44:45.645151 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6619e450-29bc-4959-9d13-ad7bc6ae3902-hubble-tls\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.645642 kubelet[2627]: I0906 01:44:45.645146 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:45.645642 kubelet[2627]: I0906 01:44:45.645209 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-ipsec-secrets\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.645642 kubelet[2627]: I0906 01:44:45.645271 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-config-path\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.645642 kubelet[2627]: I0906 01:44:45.645262 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:45.645642 kubelet[2627]: I0906 01:44:45.645317 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cni-path\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.646335 kubelet[2627]: I0906 01:44:45.645410 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-host-proc-sys-net\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.646335 kubelet[2627]: I0906 01:44:45.645478 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cni-path" (OuterVolumeSpecName: "cni-path") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:45.646335 kubelet[2627]: I0906 01:44:45.645502 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-etc-cni-netd\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.646335 kubelet[2627]: I0906 01:44:45.645580 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:45.646335 kubelet[2627]: I0906 01:44:45.645656 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-xtables-lock\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.646984 kubelet[2627]: I0906 01:44:45.645615 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:45.646984 kubelet[2627]: I0906 01:44:45.645753 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6619e450-29bc-4959-9d13-ad7bc6ae3902-clustermesh-secrets\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.646984 kubelet[2627]: I0906 01:44:45.645731 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:45.646984 kubelet[2627]: I0906 01:44:45.645913 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-run\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.646984 kubelet[2627]: I0906 01:44:45.646002 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:45.647556 kubelet[2627]: I0906 01:44:45.646021 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-host-proc-sys-kernel\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.647556 kubelet[2627]: I0906 01:44:45.646091 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:45.647556 kubelet[2627]: I0906 01:44:45.646156 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-cgroup\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.647556 kubelet[2627]: I0906 01:44:45.646241 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-hostproc\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.647556 kubelet[2627]: I0906 01:44:45.646282 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:45.648077 kubelet[2627]: I0906 01:44:45.646337 2627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hrxm\" (UniqueName: \"kubernetes.io/projected/6619e450-29bc-4959-9d13-ad7bc6ae3902-kube-api-access-5hrxm\") pod \"6619e450-29bc-4959-9d13-ad7bc6ae3902\" (UID: \"6619e450-29bc-4959-9d13-ad7bc6ae3902\") " Sep 6 01:44:45.648077 kubelet[2627]: I0906 01:44:45.646427 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-hostproc" (OuterVolumeSpecName: "hostproc") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 01:44:45.648077 kubelet[2627]: I0906 01:44:45.646513 2627 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-etc-cni-netd\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.648077 kubelet[2627]: I0906 01:44:45.646571 2627 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-xtables-lock\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.648077 kubelet[2627]: I0906 01:44:45.646641 2627 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.648077 kubelet[2627]: I0906 01:44:45.646685 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-run\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.648077 kubelet[2627]: I0906 01:44:45.646733 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-cgroup\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.648786 kubelet[2627]: I0906 01:44:45.646779 2627 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-bpf-maps\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.648786 kubelet[2627]: I0906 01:44:45.646819 2627 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-lib-modules\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.648786 kubelet[2627]: I0906 01:44:45.646854 2627 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-cni-path\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.648786 kubelet[2627]: I0906 01:44:45.646898 2627 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-host-proc-sys-net\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.650416 kubelet[2627]: I0906 01:44:45.650307 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 01:44:45.651137 kubelet[2627]: I0906 01:44:45.651071 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6619e450-29bc-4959-9d13-ad7bc6ae3902-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 01:44:45.651210 kubelet[2627]: I0906 01:44:45.651184 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 01:44:45.651236 kubelet[2627]: I0906 01:44:45.651215 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6619e450-29bc-4959-9d13-ad7bc6ae3902-kube-api-access-5hrxm" (OuterVolumeSpecName: "kube-api-access-5hrxm") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "kube-api-access-5hrxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:44:45.651262 kubelet[2627]: I0906 01:44:45.651236 2627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6619e450-29bc-4959-9d13-ad7bc6ae3902-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6619e450-29bc-4959-9d13-ad7bc6ae3902" (UID: "6619e450-29bc-4959-9d13-ad7bc6ae3902"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 01:44:45.652258 systemd[1]: var-lib-kubelet-pods-6619e450\x2d29bc\x2d4959\x2d9d13\x2dad7bc6ae3902-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5hrxm.mount: Deactivated successfully. Sep 6 01:44:45.652331 systemd[1]: var-lib-kubelet-pods-6619e450\x2d29bc\x2d4959\x2d9d13\x2dad7bc6ae3902-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 01:44:45.652413 systemd[1]: var-lib-kubelet-pods-6619e450\x2d29bc\x2d4959\x2d9d13\x2dad7bc6ae3902-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 01:44:45.652479 systemd[1]: var-lib-kubelet-pods-6619e450\x2d29bc\x2d4959\x2d9d13\x2dad7bc6ae3902-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 6 01:44:45.747288 kubelet[2627]: I0906 01:44:45.747178 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-config-path\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.747288 kubelet[2627]: I0906 01:44:45.747250 2627 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6619e450-29bc-4959-9d13-ad7bc6ae3902-clustermesh-secrets\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.747288 kubelet[2627]: I0906 01:44:45.747287 2627 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6619e450-29bc-4959-9d13-ad7bc6ae3902-hostproc\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.748497 kubelet[2627]: I0906 01:44:45.747317 2627 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hrxm\" (UniqueName: \"kubernetes.io/projected/6619e450-29bc-4959-9d13-ad7bc6ae3902-kube-api-access-5hrxm\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.748497 kubelet[2627]: I0906 01:44:45.747350 2627 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6619e450-29bc-4959-9d13-ad7bc6ae3902-hubble-tls\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:45.748497 kubelet[2627]: I0906 01:44:45.747412 2627 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6619e450-29bc-4959-9d13-ad7bc6ae3902-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-02071fe470\" DevicePath \"\"" Sep 6 01:44:46.505253 kubelet[2627]: E0906 01:44:46.505133 2627 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 01:44:46.756032 kubelet[2627]: I0906 01:44:46.755791 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/17084236-a676-4b96-b22c-4d9ee05b736e-cilium-run\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.756032 kubelet[2627]: I0906 01:44:46.755891 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/17084236-a676-4b96-b22c-4d9ee05b736e-hubble-tls\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.756032 kubelet[2627]: I0906 01:44:46.755975 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/17084236-a676-4b96-b22c-4d9ee05b736e-hostproc\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.756032 kubelet[2627]: I0906 01:44:46.756031 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8qf\" (UniqueName: \"kubernetes.io/projected/17084236-a676-4b96-b22c-4d9ee05b736e-kube-api-access-rb8qf\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.757294 kubelet[2627]: I0906 01:44:46.756089 2627 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/17084236-a676-4b96-b22c-4d9ee05b736e-cni-path\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.757294 kubelet[2627]: I0906 01:44:46.756136 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17084236-a676-4b96-b22c-4d9ee05b736e-lib-modules\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.757294 kubelet[2627]: I0906 01:44:46.756183 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/17084236-a676-4b96-b22c-4d9ee05b736e-cilium-ipsec-secrets\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.757294 kubelet[2627]: I0906 01:44:46.756231 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/17084236-a676-4b96-b22c-4d9ee05b736e-host-proc-sys-net\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.757294 kubelet[2627]: I0906 01:44:46.756288 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17084236-a676-4b96-b22c-4d9ee05b736e-xtables-lock\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.757294 kubelet[2627]: I0906 01:44:46.756340 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/17084236-a676-4b96-b22c-4d9ee05b736e-clustermesh-secrets\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.758001 kubelet[2627]: I0906 01:44:46.756491 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/17084236-a676-4b96-b22c-4d9ee05b736e-bpf-maps\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.758001 kubelet[2627]: I0906 01:44:46.756601 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/17084236-a676-4b96-b22c-4d9ee05b736e-etc-cni-netd\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.758001 kubelet[2627]: I0906 01:44:46.756659 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/17084236-a676-4b96-b22c-4d9ee05b736e-host-proc-sys-kernel\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.758001 kubelet[2627]: I0906 01:44:46.756717 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17084236-a676-4b96-b22c-4d9ee05b736e-cilium-config-path\") 
pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.758001 kubelet[2627]: I0906 01:44:46.756770 2627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/17084236-a676-4b96-b22c-4d9ee05b736e-cilium-cgroup\") pod \"cilium-kvkwq\" (UID: \"17084236-a676-4b96-b22c-4d9ee05b736e\") " pod="kube-system/cilium-kvkwq" Sep 6 01:44:46.927492 env[1669]: time="2025-09-06T01:44:46.927339573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvkwq,Uid:17084236-a676-4b96-b22c-4d9ee05b736e,Namespace:kube-system,Attempt:0,}" Sep 6 01:44:46.941593 env[1669]: time="2025-09-06T01:44:46.941563425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 01:44:46.941593 env[1669]: time="2025-09-06T01:44:46.941584369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 01:44:46.941593 env[1669]: time="2025-09-06T01:44:46.941591118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 01:44:46.941746 env[1669]: time="2025-09-06T01:44:46.941704938Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7 pid=4947 runtime=io.containerd.runc.v2 Sep 6 01:44:46.958441 env[1669]: time="2025-09-06T01:44:46.958412305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvkwq,Uid:17084236-a676-4b96-b22c-4d9ee05b736e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7\"" Sep 6 01:44:46.960318 env[1669]: time="2025-09-06T01:44:46.960299564Z" level=info msg="CreateContainer within sandbox \"8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 01:44:46.964633 env[1669]: time="2025-09-06T01:44:46.964590640Z" level=info msg="CreateContainer within sandbox \"8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7a45421b2e4fee378ea802a4a8bc645a3c5e822a43229f81ab079a6a8ecf0c78\"" Sep 6 01:44:46.964824 env[1669]: time="2025-09-06T01:44:46.964782640Z" level=info msg="StartContainer for \"7a45421b2e4fee378ea802a4a8bc645a3c5e822a43229f81ab079a6a8ecf0c78\"" Sep 6 01:44:46.985856 env[1669]: time="2025-09-06T01:44:46.985828228Z" level=info msg="StartContainer for \"7a45421b2e4fee378ea802a4a8bc645a3c5e822a43229f81ab079a6a8ecf0c78\" returns successfully" Sep 6 01:44:47.004089 env[1669]: time="2025-09-06T01:44:47.004051818Z" level=info msg="shim disconnected" id=7a45421b2e4fee378ea802a4a8bc645a3c5e822a43229f81ab079a6a8ecf0c78 Sep 6 01:44:47.004089 env[1669]: time="2025-09-06T01:44:47.004089177Z" level=warning msg="cleaning up after shim disconnected" id=7a45421b2e4fee378ea802a4a8bc645a3c5e822a43229f81ab079a6a8ecf0c78 namespace=k8s.io Sep 6 01:44:47.004221 env[1669]: time="2025-09-06T01:44:47.004098325Z" level=info msg="cleaning up dead shim" Sep 6 01:44:47.007981 env[1669]: time="2025-09-06T01:44:47.007930999Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:44:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5028 
runtime=io.containerd.runc.v2\n" Sep 6 01:44:47.375969 kubelet[2627]: I0906 01:44:47.375845 2627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6619e450-29bc-4959-9d13-ad7bc6ae3902" path="/var/lib/kubelet/pods/6619e450-29bc-4959-9d13-ad7bc6ae3902/volumes" Sep 6 01:44:47.605705 env[1669]: time="2025-09-06T01:44:47.605658547Z" level=info msg="CreateContainer within sandbox \"8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 01:44:47.612812 env[1669]: time="2025-09-06T01:44:47.612757689Z" level=info msg="CreateContainer within sandbox \"8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"445172f49396cc0820a19b1a3f2d3e97cc4682260f4d6c1cf74e3eb2cec92acd\"" Sep 6 01:44:47.613355 env[1669]: time="2025-09-06T01:44:47.613282896Z" level=info msg="StartContainer for \"445172f49396cc0820a19b1a3f2d3e97cc4682260f4d6c1cf74e3eb2cec92acd\"" Sep 6 01:44:47.676885 env[1669]: time="2025-09-06T01:44:47.676704292Z" level=info msg="StartContainer for \"445172f49396cc0820a19b1a3f2d3e97cc4682260f4d6c1cf74e3eb2cec92acd\" returns successfully" Sep 6 01:44:47.711150 env[1669]: time="2025-09-06T01:44:47.711075826Z" level=info msg="shim disconnected" id=445172f49396cc0820a19b1a3f2d3e97cc4682260f4d6c1cf74e3eb2cec92acd Sep 6 01:44:47.711457 env[1669]: time="2025-09-06T01:44:47.711155239Z" level=warning msg="cleaning up after shim disconnected" id=445172f49396cc0820a19b1a3f2d3e97cc4682260f4d6c1cf74e3eb2cec92acd namespace=k8s.io Sep 6 01:44:47.711457 env[1669]: time="2025-09-06T01:44:47.711181402Z" level=info msg="cleaning up dead shim" Sep 6 01:44:47.722215 env[1669]: time="2025-09-06T01:44:47.722095921Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:44:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5090 runtime=io.containerd.runc.v2\n" Sep 6 01:44:48.615504 env[1669]: time="2025-09-06T01:44:48.615342198Z" level=info msg="CreateContainer within sandbox \"8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 01:44:48.625517 env[1669]: time="2025-09-06T01:44:48.625456439Z" level=info msg="CreateContainer within sandbox \"8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3396ac70461fd356077f5e94f9c1605b925953c4c078b6430b205facbfe26048\"" Sep 6 01:44:48.625880 env[1669]: time="2025-09-06T01:44:48.625846416Z" level=info msg="StartContainer for \"3396ac70461fd356077f5e94f9c1605b925953c4c078b6430b205facbfe26048\"" Sep 6 01:44:48.649470 env[1669]: time="2025-09-06T01:44:48.649398076Z" level=info msg="StartContainer for \"3396ac70461fd356077f5e94f9c1605b925953c4c078b6430b205facbfe26048\" returns successfully" Sep 6 01:44:48.673966 env[1669]: time="2025-09-06T01:44:48.673917789Z" level=info msg="shim disconnected" id=3396ac70461fd356077f5e94f9c1605b925953c4c078b6430b205facbfe26048 Sep 6 01:44:48.673966 env[1669]: time="2025-09-06T01:44:48.673945977Z" level=warning msg="cleaning up after shim disconnected" id=3396ac70461fd356077f5e94f9c1605b925953c4c078b6430b205facbfe26048 namespace=k8s.io Sep 6 01:44:48.673966 env[1669]: time="2025-09-06T01:44:48.673953398Z" level=info msg="cleaning up dead shim" Sep 6 01:44:48.678005 env[1669]: time="2025-09-06T01:44:48.677958910Z" level=warning msg="cleanup warnings 
time=\"2025-09-06T01:44:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5148 runtime=io.containerd.runc.v2\n" Sep 6 01:44:48.870007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3396ac70461fd356077f5e94f9c1605b925953c4c078b6430b205facbfe26048-rootfs.mount: Deactivated successfully. Sep 6 01:44:49.314302 kubelet[2627]: I0906 01:44:49.314047 2627 setters.go:600] "Node became not ready" node="ci-3510.3.8-n-02071fe470" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T01:44:49Z","lastTransitionTime":"2025-09-06T01:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 01:44:49.623208 env[1669]: time="2025-09-06T01:44:49.623116209Z" level=info msg="CreateContainer within sandbox \"8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 01:44:49.634050 env[1669]: time="2025-09-06T01:44:49.634031727Z" level=info msg="CreateContainer within sandbox \"8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"84cebdc34dab6eff92c5a5324b4785ea6b1e48cf957d97088eb100e23f3a6f9a\"" Sep 6 01:44:49.634413 env[1669]: time="2025-09-06T01:44:49.634349578Z" level=info msg="StartContainer for \"84cebdc34dab6eff92c5a5324b4785ea6b1e48cf957d97088eb100e23f3a6f9a\"" Sep 6 01:44:49.661811 env[1669]: time="2025-09-06T01:44:49.661757369Z" level=info msg="StartContainer for \"84cebdc34dab6eff92c5a5324b4785ea6b1e48cf957d97088eb100e23f3a6f9a\" returns successfully" Sep 6 01:44:49.670473 env[1669]: time="2025-09-06T01:44:49.670445079Z" level=info msg="shim disconnected" id=84cebdc34dab6eff92c5a5324b4785ea6b1e48cf957d97088eb100e23f3a6f9a Sep 6 01:44:49.670473 env[1669]: time="2025-09-06T01:44:49.670473332Z" level=warning msg="cleaning up after shim disconnected" id=84cebdc34dab6eff92c5a5324b4785ea6b1e48cf957d97088eb100e23f3a6f9a namespace=k8s.io Sep 6 01:44:49.670609 env[1669]: time="2025-09-06T01:44:49.670479527Z" level=info msg="cleaning up dead shim" Sep 6 01:44:49.674597 env[1669]: time="2025-09-06T01:44:49.674545559Z" level=warning msg="cleanup warnings time=\"2025-09-06T01:44:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5202 runtime=io.containerd.runc.v2\n" Sep 6 01:44:49.869977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84cebdc34dab6eff92c5a5324b4785ea6b1e48cf957d97088eb100e23f3a6f9a-rootfs.mount: Deactivated successfully. 
Sep 6 01:44:50.632666 env[1669]: time="2025-09-06T01:44:50.632460738Z" level=info msg="CreateContainer within sandbox \"8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 01:44:50.661755 env[1669]: time="2025-09-06T01:44:50.661623166Z" level=info msg="CreateContainer within sandbox \"8150b20479feaf6fc4b03902e0999362ae0059e5dd94c6d82540e049c7528bc7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d2b20f3d0db5ddcff5f93613b7179ea48007b9dbc20cadb3a43e018009390d07\"" Sep 6 01:44:50.662614 env[1669]: time="2025-09-06T01:44:50.662556040Z" level=info msg="StartContainer for \"d2b20f3d0db5ddcff5f93613b7179ea48007b9dbc20cadb3a43e018009390d07\"" Sep 6 01:44:50.701008 env[1669]: time="2025-09-06T01:44:50.700948385Z" level=info msg="StartContainer for \"d2b20f3d0db5ddcff5f93613b7179ea48007b9dbc20cadb3a43e018009390d07\" returns successfully" Sep 6 01:44:50.872393 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 6 01:44:51.655446 kubelet[2627]: I0906 01:44:51.655414 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kvkwq" podStartSLOduration=5.655402857 podStartE2EDuration="5.655402857s" podCreationTimestamp="2025-09-06 01:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 01:44:51.655141321 +0000 UTC m=+410.364249934" watchObservedRunningTime="2025-09-06 01:44:51.655402857 +0000 UTC m=+410.364511469" Sep 6 01:44:54.016249 systemd-networkd[1390]: lxc_health: Link UP Sep 6 01:44:54.043377 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 01:44:54.043437 systemd-networkd[1390]: lxc_health: Gained carrier Sep 6 01:44:55.636523 systemd-networkd[1390]: lxc_health: Gained IPv6LL Sep 6 01:44:59.697680 sshd[4907]: pam_unix(sshd:session): session closed for user core Sep 6 01:44:59.702896 systemd[1]: sshd@25-139.178.94.47:22-139.178.68.195:35236.service: Deactivated successfully. Sep 6 01:44:59.705257 systemd-logind[1711]: Session 28 logged out. Waiting for processes to exit. Sep 6 01:44:59.705280 systemd[1]: session-28.scope: Deactivated successfully. Sep 6 01:44:59.707651 systemd-logind[1711]: Removed session 28.