Apr 30 13:49:06.467544 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:26:36 -00 2025 Apr 30 13:49:06.467559 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 13:49:06.467566 kernel: BIOS-provided physical RAM map: Apr 30 13:49:06.467572 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Apr 30 13:49:06.467576 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Apr 30 13:49:06.467580 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Apr 30 13:49:06.467584 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Apr 30 13:49:06.467589 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Apr 30 13:49:06.467593 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819c3fff] usable Apr 30 13:49:06.467597 kernel: BIOS-e820: [mem 0x00000000819c4000-0x00000000819c4fff] ACPI NVS Apr 30 13:49:06.467601 kernel: BIOS-e820: [mem 0x00000000819c5000-0x00000000819c5fff] reserved Apr 30 13:49:06.467605 kernel: BIOS-e820: [mem 0x00000000819c6000-0x000000008afcdfff] usable Apr 30 13:49:06.467611 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved Apr 30 13:49:06.467615 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable Apr 30 13:49:06.467621 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS Apr 30 13:49:06.467626 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved Apr 30 13:49:06.467631 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Apr 30 13:49:06.467636 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Apr 30 13:49:06.467641 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Apr 30 13:49:06.467646 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Apr 30 13:49:06.467650 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Apr 30 13:49:06.467655 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Apr 30 13:49:06.467660 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Apr 30 13:49:06.467665 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Apr 30 13:49:06.467669 kernel: NX (Execute Disable) protection: active Apr 30 13:49:06.467674 kernel: APIC: Static calls initialized Apr 30 13:49:06.467679 kernel: SMBIOS 3.2.1 present. 
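For reference, the kernel command line and the firmware (e820) memory map logged above can be read back from a running system. A minimal Python sketch, assuming the standard /proc/cmdline and /sys/firmware/memmap interfaces (the latter requires CONFIG_FIRMWARE_MEMMAP):

#!/usr/bin/env python3
"""Not part of the captured log: a sketch that re-reads the boot command line
and the firmware-provided memory map shown in the BIOS-e820 lines above."""
from pathlib import Path

def read_cmdline() -> str:
    # /proc/cmdline holds the same string the kernel logs as "Command line:".
    return Path("/proc/cmdline").read_text().strip()

def read_firmware_memmap():
    # Each /sys/firmware/memmap/<n>/ entry mirrors one BIOS-e820 line:
    # start and end are hex byte addresses, type is e.g. "System RAM".
    entries = []
    for entry in sorted(Path("/sys/firmware/memmap").iterdir(),
                        key=lambda p: int(p.name)):
        start = int((entry / "start").read_text(), 16)
        end = int((entry / "end").read_text(), 16)
        mem_type = (entry / "type").read_text().strip()
        entries.append((start, end, mem_type))
    return entries

if __name__ == "__main__":
    print("cmdline:", read_cmdline())
    for start, end, mem_type in read_firmware_memmap():
        print(f"[mem {start:#018x}-{end:#018x}] {mem_type}")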
Apr 30 13:49:06.467684 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 2.6 12/05/2024 Apr 30 13:49:06.467690 kernel: tsc: Detected 3400.000 MHz processor Apr 30 13:49:06.467695 kernel: tsc: Detected 3399.906 MHz TSC Apr 30 13:49:06.467700 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 13:49:06.467705 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 13:49:06.467710 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Apr 30 13:49:06.467715 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Apr 30 13:49:06.467720 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 13:49:06.467725 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Apr 30 13:49:06.467730 kernel: Using GB pages for direct mapping Apr 30 13:49:06.467735 kernel: ACPI: Early table checksum verification disabled Apr 30 13:49:06.467741 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Apr 30 13:49:06.467746 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Apr 30 13:49:06.467753 kernel: ACPI: FACP 0x000000008C58B5F0 000114 (v06 01072009 AMI 00010013) Apr 30 13:49:06.467758 kernel: ACPI: DSDT 0x000000008C54F268 03C386 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Apr 30 13:49:06.467763 kernel: ACPI: FACS 0x000000008C66DF80 000040 Apr 30 13:49:06.467769 kernel: ACPI: APIC 0x000000008C58B708 00012C (v04 01072009 AMI 00010013) Apr 30 13:49:06.467775 kernel: ACPI: FPDT 0x000000008C58B838 000044 (v01 01072009 AMI 00010013) Apr 30 13:49:06.467780 kernel: ACPI: FIDT 0x000000008C58B880 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Apr 30 13:49:06.467785 kernel: ACPI: MCFG 0x000000008C58B920 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Apr 30 13:49:06.467790 kernel: ACPI: SPMI 0x000000008C58B960 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Apr 30 13:49:06.467796 kernel: ACPI: SSDT 0x000000008C58B9A8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Apr 30 13:49:06.467801 kernel: ACPI: SSDT 0x000000008C58D4C8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Apr 30 13:49:06.467806 kernel: ACPI: SSDT 0x000000008C590690 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Apr 30 13:49:06.467812 kernel: ACPI: HPET 0x000000008C5929C0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 13:49:06.467817 kernel: ACPI: SSDT 0x000000008C5929F8 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Apr 30 13:49:06.467822 kernel: ACPI: SSDT 0x000000008C5939A8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Apr 30 13:49:06.467827 kernel: ACPI: UEFI 0x000000008C5942A0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 13:49:06.467833 kernel: ACPI: LPIT 0x000000008C5942E8 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 13:49:06.467838 kernel: ACPI: SSDT 0x000000008C594380 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Apr 30 13:49:06.467843 kernel: ACPI: SSDT 0x000000008C596B60 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Apr 30 13:49:06.467848 kernel: ACPI: DBGP 0x000000008C598048 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 13:49:06.467853 kernel: ACPI: DBG2 0x000000008C598080 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Apr 30 13:49:06.467859 kernel: ACPI: SSDT 0x000000008C5980D8 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Apr 30 13:49:06.467865 kernel: ACPI: DMAR 0x000000008C599C40 000070 (v01 INTEL EDK2 00000002 01000013) Apr 30 13:49:06.467870 kernel: ACPI: SSDT 0x000000008C599CB0 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Apr 30 13:49:06.467875 kernel: ACPI: TPM2 0x000000008C599DF8 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Apr 30 13:49:06.467880 kernel: ACPI: SSDT 0x000000008C599E30 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Apr 30 13:49:06.467885 kernel: ACPI: WSMT 0x000000008C59ABC0 000028 (v01 SUPERM 01072009 AMI 00010013) Apr 30 13:49:06.467891 kernel: ACPI: EINJ 0x000000008C59ABE8 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Apr 30 13:49:06.467896 kernel: ACPI: ERST 0x000000008C59AD18 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Apr 30 13:49:06.467902 kernel: ACPI: BERT 0x000000008C59AF48 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Apr 30 13:49:06.467907 kernel: ACPI: HEST 0x000000008C59AF78 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Apr 30 13:49:06.467912 kernel: ACPI: SSDT 0x000000008C59B1F8 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Apr 30 13:49:06.467917 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b5f0-0x8c58b703] Apr 30 13:49:06.467923 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b5ed] Apr 30 13:49:06.467928 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf] Apr 30 13:49:06.467933 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b708-0x8c58b833] Apr 30 13:49:06.467938 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b838-0x8c58b87b] Apr 30 13:49:06.467943 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b880-0x8c58b91b] Apr 30 13:49:06.467949 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b920-0x8c58b95b] Apr 30 13:49:06.467954 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b960-0x8c58b9a0] Apr 30 13:49:06.467960 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b9a8-0x8c58d4c3] Apr 30 13:49:06.467965 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d4c8-0x8c59068d] Apr 30 13:49:06.467970 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590690-0x8c5929ba] Apr 30 13:49:06.467975 kernel: ACPI: Reserving HPET table memory at [mem 0x8c5929c0-0x8c5929f7] Apr 30 13:49:06.467980 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5929f8-0x8c5939a5] Apr 30 13:49:06.467985 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5939a8-0x8c59429e] Apr 30 13:49:06.467990 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c5942a0-0x8c5942e1] Apr 30 13:49:06.467996 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c5942e8-0x8c59437b] Apr 30 13:49:06.468001 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594380-0x8c596b5d] Apr 30 13:49:06.468006 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596b60-0x8c598041] Apr 30 13:49:06.468012 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c598048-0x8c59807b] Apr 30 13:49:06.468017 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598080-0x8c5980d3] Apr 30 13:49:06.468022 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5980d8-0x8c599c3e] Apr 30 13:49:06.468027 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599c40-0x8c599caf] Apr 30 13:49:06.468032 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599cb0-0x8c599df3] Apr 30 13:49:06.468037 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599df8-0x8c599e2b] Apr 30 13:49:06.468042 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599e30-0x8c59abbe] Apr 30 13:49:06.468048 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59abc0-0x8c59abe7] Apr 30 13:49:06.468053 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59abe8-0x8c59ad17] Apr 30 13:49:06.468058 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad18-0x8c59af47] Apr 30 13:49:06.468063 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59af48-0x8c59af77] Apr 30 13:49:06.468069 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59af78-0x8c59b1f3] Apr 30 13:49:06.468097 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b1f8-0x8c59b359] Apr 30 13:49:06.468127 kernel: No NUMA configuration found Apr 30 13:49:06.468132 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Apr 30 13:49:06.468137 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Apr 30 13:49:06.468144 kernel: Zone ranges: Apr 30 13:49:06.468149 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 13:49:06.468155 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 30 
13:49:06.468160 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Apr 30 13:49:06.468165 kernel: Movable zone start for each node Apr 30 13:49:06.468170 kernel: Early memory node ranges Apr 30 13:49:06.468175 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Apr 30 13:49:06.468180 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Apr 30 13:49:06.468185 kernel: node 0: [mem 0x0000000040400000-0x00000000819c3fff] Apr 30 13:49:06.468191 kernel: node 0: [mem 0x00000000819c6000-0x000000008afcdfff] Apr 30 13:49:06.468196 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff] Apr 30 13:49:06.468201 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Apr 30 13:49:06.468206 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Apr 30 13:49:06.468214 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Apr 30 13:49:06.468220 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 13:49:06.468226 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Apr 30 13:49:06.468231 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 30 13:49:06.468237 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Apr 30 13:49:06.468243 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Apr 30 13:49:06.468248 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges Apr 30 13:49:06.468254 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Apr 30 13:49:06.468259 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Apr 30 13:49:06.468265 kernel: ACPI: PM-Timer IO Port: 0x1808 Apr 30 13:49:06.468270 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Apr 30 13:49:06.468275 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Apr 30 13:49:06.468281 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Apr 30 13:49:06.468287 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Apr 30 13:49:06.468292 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Apr 30 13:49:06.468298 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Apr 30 13:49:06.468303 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Apr 30 13:49:06.468308 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Apr 30 13:49:06.468314 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Apr 30 13:49:06.468319 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Apr 30 13:49:06.468324 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Apr 30 13:49:06.468329 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Apr 30 13:49:06.468336 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Apr 30 13:49:06.468341 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Apr 30 13:49:06.468346 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Apr 30 13:49:06.468352 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Apr 30 13:49:06.468357 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Apr 30 13:49:06.468362 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 13:49:06.468367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 13:49:06.468373 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 13:49:06.468378 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 13:49:06.468383 kernel: TSC deadline timer available Apr 30 13:49:06.468390 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Apr 30 13:49:06.468395 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Apr 30 13:49:06.468401 kernel: Booting paravirtualized kernel on bare hardware Apr 30 13:49:06.468406 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 13:49:06.468412 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Apr 30 13:49:06.468417 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Apr 30 13:49:06.468422 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Apr 30 13:49:06.468428 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Apr 30 13:49:06.468434 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 13:49:06.468441 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 13:49:06.468446 kernel: random: crng init done Apr 30 13:49:06.468451 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Apr 30 13:49:06.468457 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Apr 30 13:49:06.468462 kernel: Fallback order for Node 0: 0 Apr 30 13:49:06.468467 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416 Apr 30 13:49:06.468473 kernel: Policy zone: Normal Apr 30 13:49:06.468478 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 13:49:06.468484 kernel: software IO TLB: area num 16. Apr 30 13:49:06.468490 kernel: Memory: 32718264K/33452984K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 734460K reserved, 0K cma-reserved) Apr 30 13:49:06.468496 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Apr 30 13:49:06.468501 kernel: ftrace: allocating 37918 entries in 149 pages Apr 30 13:49:06.468506 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 13:49:06.468512 kernel: Dynamic Preempt: voluntary Apr 30 13:49:06.468518 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 13:49:06.468524 kernel: rcu: RCU event tracing is enabled. Apr 30 13:49:06.468529 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Apr 30 13:49:06.468536 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 13:49:06.468541 kernel: Rude variant of Tasks RCU enabled. Apr 30 13:49:06.468546 kernel: Tracing variant of Tasks RCU enabled. Apr 30 13:49:06.468552 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 13:49:06.468557 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Apr 30 13:49:06.468562 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Apr 30 13:49:06.468568 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 30 13:49:06.468573 kernel: Console: colour VGA+ 80x25 Apr 30 13:49:06.468578 kernel: printk: console [tty0] enabled Apr 30 13:49:06.468585 kernel: printk: console [ttyS1] enabled Apr 30 13:49:06.468590 kernel: ACPI: Core revision 20230628 Apr 30 13:49:06.468596 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Apr 30 13:49:06.468601 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 13:49:06.468606 kernel: DMAR: Host address width 39 Apr 30 13:49:06.468612 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Apr 30 13:49:06.468617 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Apr 30 13:49:06.468623 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff Apr 30 13:49:06.468628 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Apr 30 13:49:06.468634 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Apr 30 13:49:06.468640 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Apr 30 13:49:06.468645 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Apr 30 13:49:06.468651 kernel: x2apic enabled Apr 30 13:49:06.468656 kernel: APIC: Switched APIC routing to: cluster x2apic Apr 30 13:49:06.468661 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 30 13:49:06.468667 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Apr 30 13:49:06.468672 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Apr 30 13:49:06.468678 kernel: CPU0: Thermal monitoring enabled (TM1) Apr 30 13:49:06.468684 kernel: process: using mwait in idle threads Apr 30 13:49:06.468689 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 30 13:49:06.468695 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 30 13:49:06.468700 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 13:49:06.468706 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 30 13:49:06.468711 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 30 13:49:06.468716 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Apr 30 13:49:06.468722 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 13:49:06.468727 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Apr 30 13:49:06.468733 kernel: RETBleed: Mitigation: Enhanced IBRS Apr 30 13:49:06.468739 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 13:49:06.468744 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 13:49:06.468750 kernel: TAA: Mitigation: TSX disabled Apr 30 13:49:06.468755 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Apr 30 13:49:06.468760 kernel: SRBDS: Mitigation: Microcode Apr 30 13:49:06.468766 kernel: GDS: Mitigation: Microcode Apr 30 13:49:06.468771 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 13:49:06.468777 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 13:49:06.468783 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 13:49:06.468788 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Apr 30 13:49:06.468794 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Apr 30 13:49:06.468799 kernel: x86/fpu: 
xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 13:49:06.468804 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Apr 30 13:49:06.468810 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Apr 30 13:49:06.468815 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Apr 30 13:49:06.468820 kernel: Freeing SMP alternatives memory: 32K Apr 30 13:49:06.468826 kernel: pid_max: default: 32768 minimum: 301 Apr 30 13:49:06.468832 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 13:49:06.468837 kernel: landlock: Up and running. Apr 30 13:49:06.468843 kernel: SELinux: Initializing. Apr 30 13:49:06.468848 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 13:49:06.468854 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 13:49:06.468859 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Apr 30 13:49:06.468865 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 13:49:06.468870 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 13:49:06.468877 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 13:49:06.468882 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Apr 30 13:49:06.468887 kernel: ... version: 4 Apr 30 13:49:06.468893 kernel: ... bit width: 48 Apr 30 13:49:06.468898 kernel: ... generic registers: 4 Apr 30 13:49:06.468903 kernel: ... value mask: 0000ffffffffffff Apr 30 13:49:06.468909 kernel: ... max period: 00007fffffffffff Apr 30 13:49:06.468914 kernel: ... fixed-purpose events: 3 Apr 30 13:49:06.468920 kernel: ... event mask: 000000070000000f Apr 30 13:49:06.468925 kernel: signal: max sigframe size: 2032 Apr 30 13:49:06.468931 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Apr 30 13:49:06.468937 kernel: rcu: Hierarchical SRCU implementation. Apr 30 13:49:06.468942 kernel: rcu: Max phase no-delay instances is 400. Apr 30 13:49:06.468948 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Apr 30 13:49:06.468953 kernel: smp: Bringing up secondary CPUs ... Apr 30 13:49:06.468958 kernel: smpboot: x86: Booting SMP configuration: Apr 30 13:49:06.468964 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Apr 30 13:49:06.468970 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
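The Spectre/TAA/MMIO Stale Data mitigation states printed above are also exported at runtime. A small Python sketch, assuming the usual /sys/devices/system/cpu/vulnerabilities directory:

#!/usr/bin/env python3
"""Not part of the captured log: reads back the per-issue mitigation text
the kernel prints at boot (e.g. "Mitigation: Enhanced / Automatic IBRS")."""
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def report_mitigations() -> dict:
    # One file per known CPU issue; the file content matches the boot log.
    return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, state in report_mitigations().items():
        print(f"{name:24s} {state}")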
Apr 30 13:49:06.468976 kernel: smp: Brought up 1 node, 16 CPUs Apr 30 13:49:06.468981 kernel: smpboot: Max logical packages: 1 Apr 30 13:49:06.468987 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Apr 30 13:49:06.468992 kernel: devtmpfs: initialized Apr 30 13:49:06.468998 kernel: x86/mm: Memory block size: 128MB Apr 30 13:49:06.469003 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819c4000-0x819c4fff] (4096 bytes) Apr 30 13:49:06.469009 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes) Apr 30 13:49:06.469014 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 13:49:06.469019 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Apr 30 13:49:06.469026 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 13:49:06.469031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 13:49:06.469037 kernel: audit: initializing netlink subsys (disabled) Apr 30 13:49:06.469042 kernel: audit: type=2000 audit(1746020941.122:1): state=initialized audit_enabled=0 res=1 Apr 30 13:49:06.469047 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 13:49:06.469052 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 13:49:06.469058 kernel: cpuidle: using governor menu Apr 30 13:49:06.469063 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 13:49:06.469068 kernel: dca service started, version 1.12.1 Apr 30 13:49:06.469078 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Apr 30 13:49:06.469084 kernel: PCI: Using configuration type 1 for base access Apr 30 13:49:06.469090 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Apr 30 13:49:06.469116 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 13:49:06.469122 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 13:49:06.469144 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 13:49:06.469164 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 13:49:06.469169 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 13:49:06.469174 kernel: ACPI: Added _OSI(Module Device) Apr 30 13:49:06.469181 kernel: ACPI: Added _OSI(Processor Device) Apr 30 13:49:06.469187 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 13:49:06.469192 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 13:49:06.469197 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Apr 30 13:49:06.469203 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:49:06.469208 kernel: ACPI: SSDT 0xFFFF8D3141545400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Apr 30 13:49:06.469213 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:49:06.469219 kernel: ACPI: SSDT 0xFFFF8D3141550000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Apr 30 13:49:06.469224 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:49:06.469230 kernel: ACPI: SSDT 0xFFFF8D3141568C00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Apr 30 13:49:06.469236 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:49:06.469241 kernel: ACPI: SSDT 0xFFFF8D3141554000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Apr 30 13:49:06.469246 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:49:06.469252 kernel: ACPI: SSDT 0xFFFF8D314154A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Apr 30 13:49:06.469257 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:49:06.469262 kernel: ACPI: SSDT 0xFFFF8D3140E39400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Apr 30 13:49:06.469268 kernel: ACPI: _OSC evaluated successfully for all CPUs Apr 30 13:49:06.469273 kernel: ACPI: Interpreter enabled Apr 30 13:49:06.469278 kernel: ACPI: PM: (supports S0 S5) Apr 30 13:49:06.469285 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 13:49:06.469290 kernel: HEST: Enabling Firmware First mode for corrected errors. Apr 30 13:49:06.469296 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Apr 30 13:49:06.469301 kernel: HEST: Table parsing has been initialized. Apr 30 13:49:06.469306 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
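The ACPI tables enumerated and dynamically loaded above (FACP, DSDT, the SSDTs, DMAR, TPM2, ...) can be listed from a running system. A minimal Python sketch, assuming /sys/firmware/acpi/tables is present and readable (typically root only):

#!/usr/bin/env python3
"""Not part of the captured log: lists the raw ACPI table images exposed by
the firmware; signature and length come from the standard table header."""
from pathlib import Path

TABLES = Path("/sys/firmware/acpi/tables")

def list_acpi_tables():
    # Bytes 0..3 of each table are its signature, bytes 4..7 the
    # little-endian length, matching the ACPI lines in the boot log.
    for table in sorted(TABLES.iterdir()):
        if table.is_file():
            header = table.read_bytes()[:8]
            sig = header[:4].decode("ascii", errors="replace")
            length = int.from_bytes(header[4:8], "little")
            print(f"{table.name:12s} signature={sig} length={length:#x}")

if __name__ == "__main__":
    list_acpi_tables()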
Apr 30 13:49:06.469312 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 13:49:06.469317 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 30 13:49:06.469323 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Apr 30 13:49:06.469328 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Apr 30 13:49:06.469335 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Apr 30 13:49:06.469340 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Apr 30 13:49:06.469345 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Apr 30 13:49:06.469351 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Apr 30 13:49:06.469356 kernel: ACPI: \_TZ_.FN00: New power resource Apr 30 13:49:06.469362 kernel: ACPI: \_TZ_.FN01: New power resource Apr 30 13:49:06.469367 kernel: ACPI: \_TZ_.FN02: New power resource Apr 30 13:49:06.469372 kernel: ACPI: \_TZ_.FN03: New power resource Apr 30 13:49:06.469378 kernel: ACPI: \_TZ_.FN04: New power resource Apr 30 13:49:06.469384 kernel: ACPI: \PIN_: New power resource Apr 30 13:49:06.469390 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Apr 30 13:49:06.469475 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 13:49:06.469567 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Apr 30 13:49:06.469615 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Apr 30 13:49:06.469624 kernel: PCI host bridge to bus 0000:00 Apr 30 13:49:06.469673 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 13:49:06.469720 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 13:49:06.469764 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 13:49:06.469805 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Apr 30 13:49:06.469848 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Apr 30 13:49:06.469889 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Apr 30 13:49:06.469951 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Apr 30 13:49:06.470012 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Apr 30 13:49:06.470063 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Apr 30 13:49:06.470175 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Apr 30 13:49:06.470225 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Apr 30 13:49:06.470278 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Apr 30 13:49:06.470327 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Apr 30 13:49:06.470383 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Apr 30 13:49:06.470432 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Apr 30 13:49:06.470484 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Apr 30 13:49:06.470533 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Apr 30 13:49:06.470581 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Apr 30 13:49:06.470635 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Apr 30 13:49:06.470683 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Apr 30 13:49:06.470735 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Apr 30 13:49:06.470786 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Apr 30 13:49:06.470835 kernel: pci 0000:00:15.0: reg 
0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 13:49:06.470888 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Apr 30 13:49:06.470937 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 13:49:06.470991 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Apr 30 13:49:06.471046 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Apr 30 13:49:06.471126 kernel: pci 0000:00:16.0: PME# supported from D3hot Apr 30 13:49:06.471194 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Apr 30 13:49:06.471242 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Apr 30 13:49:06.471290 kernel: pci 0000:00:16.1: PME# supported from D3hot Apr 30 13:49:06.471343 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Apr 30 13:49:06.471394 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Apr 30 13:49:06.471442 kernel: pci 0000:00:16.4: PME# supported from D3hot Apr 30 13:49:06.471497 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Apr 30 13:49:06.471545 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Apr 30 13:49:06.471593 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Apr 30 13:49:06.471640 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Apr 30 13:49:06.471688 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Apr 30 13:49:06.471739 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Apr 30 13:49:06.471787 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Apr 30 13:49:06.471835 kernel: pci 0000:00:17.0: PME# supported from D3hot Apr 30 13:49:06.471888 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Apr 30 13:49:06.471938 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Apr 30 13:49:06.471992 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Apr 30 13:49:06.472042 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Apr 30 13:49:06.472121 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Apr 30 13:49:06.472187 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Apr 30 13:49:06.472240 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Apr 30 13:49:06.472292 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Apr 30 13:49:06.472345 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Apr 30 13:49:06.472393 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Apr 30 13:49:06.472446 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Apr 30 13:49:06.472494 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 13:49:06.472549 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Apr 30 13:49:06.472605 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Apr 30 13:49:06.472653 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Apr 30 13:49:06.472701 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Apr 30 13:49:06.472752 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Apr 30 13:49:06.472801 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Apr 30 13:49:06.472850 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 13:49:06.472908 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Apr 30 13:49:06.472962 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Apr 30 13:49:06.473011 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Apr 30 13:49:06.473061 kernel: pci 0000:02:00.0: 
PME# supported from D3cold Apr 30 13:49:06.473139 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 30 13:49:06.473207 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Apr 30 13:49:06.473262 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Apr 30 13:49:06.473312 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Apr 30 13:49:06.473365 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Apr 30 13:49:06.473414 kernel: pci 0000:02:00.1: PME# supported from D3cold Apr 30 13:49:06.473464 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 30 13:49:06.473513 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Apr 30 13:49:06.473563 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Apr 30 13:49:06.473612 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Apr 30 13:49:06.473660 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 13:49:06.473712 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Apr 30 13:49:06.473766 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Apr 30 13:49:06.473817 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Apr 30 13:49:06.473867 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Apr 30 13:49:06.473916 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Apr 30 13:49:06.473966 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Apr 30 13:49:06.474015 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Apr 30 13:49:06.474065 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Apr 30 13:49:06.474174 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 13:49:06.474223 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 13:49:06.474279 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Apr 30 13:49:06.474332 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Apr 30 13:49:06.474382 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Apr 30 13:49:06.474431 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Apr 30 13:49:06.474481 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Apr 30 13:49:06.474533 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Apr 30 13:49:06.474581 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Apr 30 13:49:06.474629 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 13:49:06.474677 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 13:49:06.474727 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Apr 30 13:49:06.474810 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Apr 30 13:49:06.474861 kernel: pci 0000:07:00.0: enabling Extended Tags Apr 30 13:49:06.474930 kernel: pci 0000:07:00.0: supports D1 D2 Apr 30 13:49:06.474993 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 13:49:06.475043 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Apr 30 13:49:06.475129 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Apr 30 13:49:06.475193 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:49:06.475245 kernel: pci_bus 0000:08: extended config space not accessible Apr 30 13:49:06.475305 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Apr 30 13:49:06.475359 kernel: pci 0000:08:00.0: reg 0x10: [mem 
0x94000000-0x94ffffff] Apr 30 13:49:06.475414 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Apr 30 13:49:06.475466 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Apr 30 13:49:06.475517 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 13:49:06.475568 kernel: pci 0000:08:00.0: supports D1 D2 Apr 30 13:49:06.475619 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 13:49:06.475670 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Apr 30 13:49:06.475720 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Apr 30 13:49:06.475773 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:49:06.475782 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Apr 30 13:49:06.475788 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Apr 30 13:49:06.475794 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Apr 30 13:49:06.475799 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Apr 30 13:49:06.475805 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Apr 30 13:49:06.475811 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Apr 30 13:49:06.475816 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Apr 30 13:49:06.475822 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Apr 30 13:49:06.475829 kernel: iommu: Default domain type: Translated Apr 30 13:49:06.475835 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 13:49:06.475841 kernel: PCI: Using ACPI for IRQ routing Apr 30 13:49:06.475847 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 13:49:06.475852 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Apr 30 13:49:06.475858 kernel: e820: reserve RAM buffer [mem 0x819c4000-0x83ffffff] Apr 30 13:49:06.475863 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Apr 30 13:49:06.475868 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Apr 30 13:49:06.475874 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Apr 30 13:49:06.475881 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Apr 30 13:49:06.475930 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Apr 30 13:49:06.475982 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Apr 30 13:49:06.476032 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 13:49:06.476056 kernel: vgaarb: loaded Apr 30 13:49:06.476062 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Apr 30 13:49:06.476084 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Apr 30 13:49:06.476109 kernel: clocksource: Switched to clocksource tsc-early Apr 30 13:49:06.476114 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 13:49:06.476122 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 13:49:06.476141 kernel: pnp: PnP ACPI init Apr 30 13:49:06.476195 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Apr 30 13:49:06.476247 kernel: pnp 00:02: [dma 0 disabled] Apr 30 13:49:06.476296 kernel: pnp 00:03: [dma 0 disabled] Apr 30 13:49:06.476344 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Apr 30 13:49:06.476392 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Apr 30 13:49:06.476439 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Apr 30 13:49:06.476487 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Apr 30 13:49:06.476532 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has 
been reserved Apr 30 13:49:06.476576 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Apr 30 13:49:06.476620 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Apr 30 13:49:06.476665 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Apr 30 13:49:06.476711 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Apr 30 13:49:06.476756 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Apr 30 13:49:06.476799 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Apr 30 13:49:06.476847 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Apr 30 13:49:06.476891 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Apr 30 13:49:06.476935 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Apr 30 13:49:06.476979 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Apr 30 13:49:06.477026 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Apr 30 13:49:06.477070 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Apr 30 13:49:06.477190 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Apr 30 13:49:06.477238 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Apr 30 13:49:06.477247 kernel: pnp: PnP ACPI: found 10 devices Apr 30 13:49:06.477253 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 13:49:06.477259 kernel: NET: Registered PF_INET protocol family Apr 30 13:49:06.477267 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 13:49:06.477273 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Apr 30 13:49:06.477278 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 13:49:06.477284 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 13:49:06.477290 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 13:49:06.477296 kernel: TCP: Hash tables configured (established 262144 bind 65536) Apr 30 13:49:06.477302 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 13:49:06.477307 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 13:49:06.477313 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 13:49:06.477320 kernel: NET: Registered PF_XDP protocol family Apr 30 13:49:06.477369 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Apr 30 13:49:06.477419 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Apr 30 13:49:06.477496 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Apr 30 13:49:06.477545 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 13:49:06.477596 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 13:49:06.477647 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 13:49:06.477698 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 13:49:06.477751 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 13:49:06.477800 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Apr 30 13:49:06.477849 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Apr 30 13:49:06.477898 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 
30 13:49:06.477947 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Apr 30 13:49:06.477998 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Apr 30 13:49:06.478046 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 13:49:06.478121 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 13:49:06.478184 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Apr 30 13:49:06.478233 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 13:49:06.478281 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 13:49:06.478330 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Apr 30 13:49:06.478379 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Apr 30 13:49:06.478432 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Apr 30 13:49:06.478483 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:49:06.478531 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Apr 30 13:49:06.478580 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Apr 30 13:49:06.478628 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:49:06.478673 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Apr 30 13:49:06.478717 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 13:49:06.478760 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 13:49:06.478803 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 13:49:06.478877 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Apr 30 13:49:06.478920 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Apr 30 13:49:06.478971 kernel: pci_bus 0000:02: resource 1 [mem 0x95100000-0x952fffff] Apr 30 13:49:06.479017 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 13:49:06.479066 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Apr 30 13:49:06.479151 kernel: pci_bus 0000:04: resource 1 [mem 0x95400000-0x954fffff] Apr 30 13:49:06.479216 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Apr 30 13:49:06.479265 kernel: pci_bus 0000:05: resource 1 [mem 0x95300000-0x953fffff] Apr 30 13:49:06.479314 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Apr 30 13:49:06.479358 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Apr 30 13:49:06.479406 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Apr 30 13:49:06.479452 kernel: pci_bus 0000:08: resource 1 [mem 0x94000000-0x950fffff] Apr 30 13:49:06.479460 kernel: PCI: CLS 64 bytes, default 64 Apr 30 13:49:06.479468 kernel: DMAR: No ATSR found Apr 30 13:49:06.479474 kernel: DMAR: No SATC found Apr 30 13:49:06.479479 kernel: DMAR: dmar0: Using Queued invalidation Apr 30 13:49:06.479529 kernel: pci 0000:00:00.0: Adding to iommu group 0 Apr 30 13:49:06.479580 kernel: pci 0000:00:01.0: Adding to iommu group 1 Apr 30 13:49:06.479630 kernel: pci 0000:00:01.1: Adding to iommu group 1 Apr 30 13:49:06.479680 kernel: pci 0000:00:08.0: Adding to iommu group 2 Apr 30 13:49:06.479730 kernel: pci 0000:00:12.0: Adding to iommu group 3 Apr 30 13:49:06.479780 kernel: pci 0000:00:14.0: Adding to iommu group 4 Apr 30 13:49:06.479831 kernel: pci 0000:00:14.2: Adding to iommu group 4 Apr 30 13:49:06.479881 kernel: pci 0000:00:15.0: Adding to iommu group 5 Apr 30 13:49:06.479929 kernel: pci 0000:00:15.1: Adding to iommu group 5 Apr 30 13:49:06.479979 kernel: pci 0000:00:16.0: Adding to iommu group 6 Apr 30 13:49:06.480028 kernel: pci 0000:00:16.1: 
Adding to iommu group 6 Apr 30 13:49:06.480078 kernel: pci 0000:00:16.4: Adding to iommu group 6 Apr 30 13:49:06.480185 kernel: pci 0000:00:17.0: Adding to iommu group 7 Apr 30 13:49:06.480233 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Apr 30 13:49:06.480314 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Apr 30 13:49:06.480363 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Apr 30 13:49:06.480413 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Apr 30 13:49:06.480461 kernel: pci 0000:00:1c.1: Adding to iommu group 12 Apr 30 13:49:06.480509 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Apr 30 13:49:06.480559 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Apr 30 13:49:06.480607 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Apr 30 13:49:06.480656 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Apr 30 13:49:06.480708 kernel: pci 0000:02:00.0: Adding to iommu group 1 Apr 30 13:49:06.480758 kernel: pci 0000:02:00.1: Adding to iommu group 1 Apr 30 13:49:06.480808 kernel: pci 0000:04:00.0: Adding to iommu group 15 Apr 30 13:49:06.480858 kernel: pci 0000:05:00.0: Adding to iommu group 16 Apr 30 13:49:06.480908 kernel: pci 0000:07:00.0: Adding to iommu group 17 Apr 30 13:49:06.480960 kernel: pci 0000:08:00.0: Adding to iommu group 17 Apr 30 13:49:06.480968 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Apr 30 13:49:06.480974 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 13:49:06.480982 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Apr 30 13:49:06.480988 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Apr 30 13:49:06.480994 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Apr 30 13:49:06.481000 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Apr 30 13:49:06.481005 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Apr 30 13:49:06.481059 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Apr 30 13:49:06.481069 kernel: Initialise system trusted keyrings Apr 30 13:49:06.481078 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Apr 30 13:49:06.481086 kernel: Key type asymmetric registered Apr 30 13:49:06.481092 kernel: Asymmetric key parser 'x509' registered Apr 30 13:49:06.481119 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 13:49:06.481124 kernel: io scheduler mq-deadline registered Apr 30 13:49:06.481130 kernel: io scheduler kyber registered Apr 30 13:49:06.481149 kernel: io scheduler bfq registered Apr 30 13:49:06.481200 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Apr 30 13:49:06.481250 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 122 Apr 30 13:49:06.481300 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 123 Apr 30 13:49:06.481351 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 124 Apr 30 13:49:06.481401 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 125 Apr 30 13:49:06.481449 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 126 Apr 30 13:49:06.481497 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 127 Apr 30 13:49:06.481552 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Apr 30 13:49:06.481580 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Apr 30 13:49:06.481586 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
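The IOMMU group assignments logged above ("Adding to iommu group N") can be re-derived from sysfs. A minimal Python sketch, assuming the standard /sys/kernel/iommu_groups layout:

#!/usr/bin/env python3
"""Not part of the captured log: rebuilds the group-to-device mapping from
the symlinks each iommu group directory holds for its PCI devices."""
from pathlib import Path

GROUPS = Path("/sys/kernel/iommu_groups")

def iommu_groups() -> dict:
    # Each group directory contains devices/<pci-address> symlinks,
    # e.g. 0000:02:00.0, for the devices that share that group.
    groups = {}
    for group in sorted(GROUPS.iterdir(), key=lambda p: int(p.name)):
        groups[int(group.name)] = sorted(d.name for d in (group / "devices").iterdir())
    return groups

if __name__ == "__main__":
    for gid, devices in iommu_groups().items():
        print(f"group {gid:3d}: {', '.join(devices)}")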
Apr 30 13:49:06.481607 kernel: pstore: Using crash dump compression: deflate Apr 30 13:49:06.481613 kernel: pstore: Registered erst as persistent store backend Apr 30 13:49:06.481619 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 13:49:06.481625 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 13:49:06.481631 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 13:49:06.481636 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 13:49:06.481686 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Apr 30 13:49:06.481695 kernel: i8042: PNP: No PS/2 controller found. Apr 30 13:49:06.481742 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Apr 30 13:49:06.481787 kernel: rtc_cmos rtc_cmos: registered as rtc0 Apr 30 13:49:06.481833 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-04-30T13:49:05 UTC (1746020945) Apr 30 13:49:06.481877 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Apr 30 13:49:06.481885 kernel: intel_pstate: Intel P-state driver initializing Apr 30 13:49:06.481892 kernel: intel_pstate: Disabling energy efficiency optimization Apr 30 13:49:06.481897 kernel: intel_pstate: HWP enabled Apr 30 13:49:06.481903 kernel: NET: Registered PF_INET6 protocol family Apr 30 13:49:06.481909 kernel: Segment Routing with IPv6 Apr 30 13:49:06.481917 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 13:49:06.481922 kernel: NET: Registered PF_PACKET protocol family Apr 30 13:49:06.481928 kernel: Key type dns_resolver registered Apr 30 13:49:06.481934 kernel: microcode: Current revision: 0x00000102 Apr 30 13:49:06.481940 kernel: microcode: Microcode Update Driver: v2.2. Apr 30 13:49:06.481945 kernel: IPI shorthand broadcast: enabled Apr 30 13:49:06.481951 kernel: sched_clock: Marking stable (2655000677, 1435036709)->(4562850453, -472813067) Apr 30 13:49:06.481957 kernel: registered taskstats version 1 Apr 30 13:49:06.481963 kernel: Loading compiled-in X.509 certificates Apr 30 13:49:06.481970 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 10d2d341d26c1df942e743344427c053ef3a2a5f' Apr 30 13:49:06.481975 kernel: Key type .fscrypt registered Apr 30 13:49:06.481981 kernel: Key type fscrypt-provisioning registered Apr 30 13:49:06.481987 kernel: ima: Allocated hash algorithm: sha1 Apr 30 13:49:06.481992 kernel: ima: No architecture policies found Apr 30 13:49:06.481998 kernel: clk: Disabling unused clocks Apr 30 13:49:06.482004 kernel: Freeing unused kernel image (initmem) memory: 43484K Apr 30 13:49:06.482010 kernel: Write protecting the kernel read-only data: 38912k Apr 30 13:49:06.482016 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K Apr 30 13:49:06.482022 kernel: Run /init as init process Apr 30 13:49:06.482028 kernel: with arguments: Apr 30 13:49:06.482034 kernel: /init Apr 30 13:49:06.482039 kernel: with environment: Apr 30 13:49:06.482045 kernel: HOME=/ Apr 30 13:49:06.482050 kernel: TERM=linux Apr 30 13:49:06.482056 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 13:49:06.482062 systemd[1]: Successfully made /usr/ read-only. 
Apr 30 13:49:06.482070 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 13:49:06.482081 systemd[1]: Detected architecture x86-64. Apr 30 13:49:06.482087 systemd[1]: Running in initrd. Apr 30 13:49:06.482093 systemd[1]: No hostname configured, using default hostname. Apr 30 13:49:06.482120 systemd[1]: Hostname set to . Apr 30 13:49:06.482126 systemd[1]: Initializing machine ID from random generator. Apr 30 13:49:06.482145 systemd[1]: Queued start job for default target initrd.target. Apr 30 13:49:06.482152 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 13:49:06.482159 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 13:49:06.482165 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 13:49:06.482171 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 13:49:06.482177 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 13:49:06.482183 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 13:49:06.482190 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 13:49:06.482197 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 13:49:06.482203 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 13:49:06.482209 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 13:49:06.482215 systemd[1]: Reached target paths.target - Path Units. Apr 30 13:49:06.482221 systemd[1]: Reached target slices.target - Slice Units. Apr 30 13:49:06.482227 systemd[1]: Reached target swap.target - Swaps. Apr 30 13:49:06.482233 systemd[1]: Reached target timers.target - Timer Units. Apr 30 13:49:06.482239 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 13:49:06.482245 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 13:49:06.482252 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 13:49:06.482258 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 30 13:49:06.482264 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 13:49:06.482270 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 13:49:06.482276 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 13:49:06.482282 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 13:49:06.482288 kernel: tsc: Refined TSC clocksource calibration: 3407.996 MHz Apr 30 13:49:06.482294 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd044566, max_idle_ns: 440795343519 ns Apr 30 13:49:06.482301 kernel: clocksource: Switched to clocksource tsc Apr 30 13:49:06.482307 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
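The clocksource hand-off seen above (tsc-early, then the refined tsc) can be checked at runtime. A short Python sketch, assuming the usual /sys/devices/system/clocksource/clocksource0 node:

#!/usr/bin/env python3
"""Not part of the captured log: prints the active and available
clocksources after the kernel has switched to the refined TSC."""
from pathlib import Path

CS = Path("/sys/devices/system/clocksource/clocksource0")

if __name__ == "__main__":
    current = (CS / "current_clocksource").read_text().strip()
    available = (CS / "available_clocksource").read_text().split()
    print("current  :", current)            # expected "tsc" once refined
    print("available:", " ".join(available))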
Apr 30 13:49:06.482313 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 13:49:06.482319 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 13:49:06.482325 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 13:49:06.482331 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 13:49:06.482348 systemd-journald[267]: Collecting audit messages is disabled. Apr 30 13:49:06.482363 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 13:49:06.482370 systemd-journald[267]: Journal started Apr 30 13:49:06.482386 systemd-journald[267]: Runtime Journal (/run/log/journal/e914a8cd3ff247c2a26803553a33de13) is 8M, max 639.9M, 631.9M free. Apr 30 13:49:06.483948 systemd-modules-load[270]: Inserted module 'overlay' Apr 30 13:49:06.501216 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:49:06.524136 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 13:49:06.524154 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 13:49:06.531181 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 13:49:06.536807 systemd-modules-load[270]: Inserted module 'br_netfilter' Apr 30 13:49:06.536848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 13:49:06.536938 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 13:49:06.537080 kernel: Bridge firewalling registered Apr 30 13:49:06.537210 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 13:49:06.538020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:49:06.538410 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 13:49:06.538812 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 13:49:06.568349 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:49:06.679711 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:49:06.690593 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 13:49:06.720441 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 13:49:06.761344 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 13:49:06.773059 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 13:49:06.773562 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 13:49:06.779523 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 13:49:06.789342 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:49:06.799939 systemd-resolved[297]: Positive Trust Anchors: Apr 30 13:49:06.799947 systemd-resolved[297]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 13:49:06.799985 systemd-resolved[297]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 13:49:06.802353 systemd-resolved[297]: Defaulting to hostname 'linux'. Apr 30 13:49:06.810349 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 13:49:06.821298 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 13:49:06.828339 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 13:49:06.951476 dracut-cmdline[311]: dracut-dracut-053 Apr 30 13:49:06.951476 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 13:49:07.120150 kernel: SCSI subsystem initialized Apr 30 13:49:07.131126 kernel: Loading iSCSI transport class v2.0-870. Apr 30 13:49:07.144114 kernel: iscsi: registered transport (tcp) Apr 30 13:49:07.165355 kernel: iscsi: registered transport (qla4xxx) Apr 30 13:49:07.165371 kernel: QLogic iSCSI HBA Driver Apr 30 13:49:07.188553 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 13:49:07.210355 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 13:49:07.244682 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 13:49:07.244696 kernel: device-mapper: uevent: version 1.0.3 Apr 30 13:49:07.253461 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 13:49:07.289108 kernel: raid6: avx2x4 gen() 47286 MB/s Apr 30 13:49:07.310151 kernel: raid6: avx2x2 gen() 53831 MB/s Apr 30 13:49:07.336240 kernel: raid6: avx2x1 gen() 45149 MB/s Apr 30 13:49:07.336257 kernel: raid6: using algorithm avx2x2 gen() 53831 MB/s Apr 30 13:49:07.363309 kernel: raid6: .... xor() 32250 MB/s, rmw enabled Apr 30 13:49:07.363326 kernel: raid6: using avx2x2 recovery algorithm Apr 30 13:49:07.383133 kernel: xor: automatically using best checksumming function avx Apr 30 13:49:07.482121 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 13:49:07.487243 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 13:49:07.508324 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 13:49:07.516928 systemd-udevd[498]: Using default interface naming scheme 'v255'. Apr 30 13:49:07.519767 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 13:49:07.548260 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 30 13:49:07.593385 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Apr 30 13:49:07.610211 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 13:49:07.630311 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 13:49:07.692954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 13:49:07.738130 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 13:49:07.738147 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 13:49:07.738156 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 13:49:07.738163 kernel: ACPI: bus type USB registered Apr 30 13:49:07.738174 kernel: usbcore: registered new interface driver usbfs Apr 30 13:49:07.705252 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 13:49:07.749572 kernel: usbcore: registered new interface driver hub Apr 30 13:49:07.749590 kernel: usbcore: registered new device driver usb Apr 30 13:49:07.757082 kernel: PTP clock support registered Apr 30 13:49:07.757104 kernel: libata version 3.00 loaded. Apr 30 13:49:07.775326 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 13:49:07.775364 kernel: ahci 0000:00:17.0: version 3.0 Apr 30 13:49:07.980021 kernel: AES CTR mode by8 optimization enabled Apr 30 13:49:07.980033 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Apr 30 13:49:07.980119 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Apr 30 13:49:07.980187 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Apr 30 13:49:07.980196 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 13:49:07.980260 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Apr 30 13:49:07.980268 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Apr 30 13:49:07.980331 kernel: scsi host0: ahci Apr 30 13:49:07.980398 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Apr 30 13:49:07.980462 kernel: scsi host1: ahci Apr 30 13:49:07.980522 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 13:49:07.980584 kernel: igb 0000:04:00.0: added PHC on eth0 Apr 30 13:49:07.980651 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 13:49:07.980714 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:52 Apr 30 13:49:07.980776 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Apr 30 13:49:07.980841 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Apr 30 13:49:07.980903 kernel: scsi host2: ahci Apr 30 13:49:07.980963 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Apr 30 13:49:07.981032 kernel: scsi host3: ahci Apr 30 13:49:07.981101 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Apr 30 13:49:07.981165 kernel: scsi host4: ahci Apr 30 13:49:07.981225 kernel: hub 1-0:1.0: USB hub found Apr 30 13:49:07.981299 kernel: igb 0000:05:00.0: added PHC on eth1 Apr 30 13:49:07.981365 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 13:49:07.981428 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:53 Apr 30 13:49:07.981490 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Apr 30 13:49:07.981552 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Apr 30 13:49:07.981614 kernel: scsi host5: ahci Apr 30 13:49:07.981674 kernel: hub 1-0:1.0: 16 ports detected Apr 30 13:49:07.981741 kernel: scsi host6: ahci Apr 30 13:49:07.981801 kernel: hub 2-0:1.0: USB hub found Apr 30 13:49:07.981873 kernel: scsi host7: ahci Apr 30 13:49:07.981935 kernel: hub 2-0:1.0: 10 ports detected Apr 30 13:49:07.982000 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128 Apr 30 13:49:07.982009 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Apr 30 13:49:07.982071 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128 Apr 30 13:49:07.982085 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128 Apr 30 13:49:07.982095 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128 Apr 30 13:49:07.982102 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128 Apr 30 13:49:07.982110 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128 Apr 30 13:49:07.982117 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128 Apr 30 13:49:07.982124 kernel: ata8: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516480 irq 128 Apr 30 13:49:07.806445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 13:49:07.806587 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:49:08.028415 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 13:49:08.028434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 13:49:08.028539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:49:08.119181 kernel: mlx5_core 0000:02:00.0: firmware version: 14.31.1014 Apr 30 13:49:08.571422 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 13:49:08.571507 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Apr 30 13:49:08.571580 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Apr 30 13:49:08.661260 kernel: hub 1-14:1.0: USB hub found Apr 30 13:49:08.661351 kernel: hub 1-14:1.0: 4 ports detected Apr 30 13:49:08.661424 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 13:49:08.661432 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 13:49:08.661440 kernel: ata8: SATA link down (SStatus 0 SControl 300) Apr 30 13:49:08.661447 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 30 13:49:08.661454 kernel: ata7: SATA link down (SStatus 0 SControl 300) Apr 30 13:49:08.661462 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 13:49:08.661471 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Apr 30 13:49:08.661544 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 13:49:08.661552 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Apr 30 13:49:08.661618 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 13:49:08.661627 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 30 13:49:08.661634 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 30 13:49:08.661641 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 13:49:08.661649 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 13:49:08.661656 kernel: ata2.00: Features: NCQ-prio Apr 30 13:49:08.661665 
kernel: ata1.00: Features: NCQ-prio Apr 30 13:49:08.661673 kernel: ata2.00: configured for UDMA/133 Apr 30 13:49:08.661680 kernel: ata1.00: configured for UDMA/133 Apr 30 13:49:08.661687 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 30 13:49:08.661755 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 30 13:49:08.661818 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 13:49:08.661827 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 13:49:08.661834 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 13:49:08.661895 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 13:49:08.661956 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Apr 30 13:49:08.662018 kernel: sd 1:0:0:0: [sdb] Write Protect is off Apr 30 13:49:08.662084 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 13:49:08.662189 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Apr 30 13:49:08.662251 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 13:49:08.662308 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 13:49:08.662368 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Apr 30 13:49:08.662427 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 13:49:08.662484 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Apr 30 13:49:08.662542 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Apr 30 13:49:08.662599 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 13:49:08.662608 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 13:49:08.662615 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 13:49:08.662672 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 13:49:08.662682 kernel: GPT:9289727 != 937703087 Apr 30 13:49:08.662690 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 13:49:08.662697 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 13:49:08.662763 kernel: GPT:9289727 != 937703087 Apr 30 13:49:08.662771 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 30 13:49:08.662778 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 13:49:08.662785 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Apr 30 13:49:08.662891 kernel: mlx5_core 0000:02:00.1: firmware version: 14.31.1014 Apr 30 13:49:09.200415 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Apr 30 13:49:09.200855 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 13:49:09.201265 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by (udev-worker) (551) Apr 30 13:49:09.201310 kernel: BTRFS: device fsid 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 devid 1 transid 40 /dev/sdb3 scanned by (udev-worker) (575) Apr 30 13:49:09.201345 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 13:49:09.201379 kernel: usbcore: registered new interface driver usbhid Apr 30 13:49:09.201413 kernel: usbhid: USB HID core driver Apr 30 13:49:09.201447 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Apr 30 13:49:09.201497 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 13:49:09.201533 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Apr 30 13:49:09.202046 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 13:49:09.202117 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Apr 30 13:49:09.202158 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Apr 30 13:49:09.202614 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Apr 30 13:49:09.203133 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Apr 30 13:49:09.203666 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 13:49:08.028628 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:49:09.220305 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Apr 30 13:49:08.153273 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:49:08.163558 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 13:49:09.245347 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Apr 30 13:49:08.167093 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 13:49:08.184294 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 13:49:08.184313 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 13:49:08.198192 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 13:49:09.287242 disk-uuid[706]: Primary Header is updated. Apr 30 13:49:09.287242 disk-uuid[706]: Secondary Entries is updated. Apr 30 13:49:09.287242 disk-uuid[706]: Secondary Header is updated. Apr 30 13:49:08.222242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:49:08.233295 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 13:49:08.264283 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 13:49:08.278307 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 13:49:08.623863 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Apr 30 13:49:08.660366 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Apr 30 13:49:08.688145 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Apr 30 13:49:08.716879 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Apr 30 13:49:08.728155 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Apr 30 13:49:08.749181 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 13:49:09.782451 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 13:49:09.790128 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Apr 30 13:49:09.790596 disk-uuid[707]: The operation has completed successfully. Apr 30 13:49:09.829045 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 13:49:09.829157 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 13:49:09.879348 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 13:49:09.905209 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 13:49:09.905266 sh[737]: Success Apr 30 13:49:09.939919 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 13:49:09.959017 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 13:49:09.973457 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 13:49:10.034173 kernel: BTRFS info (device dm-0): first mount of filesystem 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 Apr 30 13:49:10.034189 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:49:10.034197 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 13:49:10.034204 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 13:49:10.034211 kernel: BTRFS info (device dm-0): using free space tree Apr 30 13:49:10.043120 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 13:49:10.046162 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 13:49:10.046402 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 13:49:10.116531 kernel: BTRFS info (device sdb6): first mount of filesystem e4f69af6-ab85-4338-a66c-b8762fac213b Apr 30 13:49:10.116547 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:49:10.116555 kernel: BTRFS info (device sdb6): using free space tree Apr 30 13:49:10.116562 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 13:49:10.116573 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 13:49:10.058426 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 13:49:10.137275 kernel: BTRFS info (device sdb6): last unmount of filesystem e4f69af6-ab85-4338-a66c-b8762fac213b Apr 30 13:49:10.060859 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 13:49:10.137383 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 13:49:10.175275 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 30 13:49:10.210564 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 13:49:10.231575 ignition[811]: Ignition 2.20.0 Apr 30 13:49:10.231581 ignition[811]: Stage: fetch-offline Apr 30 13:49:10.234042 unknown[811]: fetched base config from "system" Apr 30 13:49:10.231599 ignition[811]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:49:10.234046 unknown[811]: fetched user config from "system" Apr 30 13:49:10.231603 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:49:10.238416 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 13:49:10.231656 ignition[811]: parsed url from cmdline: "" Apr 30 13:49:10.249601 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 13:49:10.231658 ignition[811]: no config URL provided Apr 30 13:49:10.251346 systemd-networkd[923]: lo: Link UP Apr 30 13:49:10.231660 ignition[811]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 13:49:10.251348 systemd-networkd[923]: lo: Gained carrier Apr 30 13:49:10.231681 ignition[811]: parsing config with SHA512: 985617a13e5cfeb3685a731e8f2e292c51f51d9851df906d1be9bd9f128e26ced4b41bf7b67ab55e29520327a0e22348a90ecbe33ffb9e08cfd3fa3ccd6acece Apr 30 13:49:10.253859 systemd-networkd[923]: Enumeration completed Apr 30 13:49:10.234260 ignition[811]: fetch-offline: fetch-offline passed Apr 30 13:49:10.254611 systemd-networkd[923]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 13:49:10.234265 ignition[811]: POST message to Packet Timeline Apr 30 13:49:10.255311 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 13:49:10.234268 ignition[811]: POST Status error: resource requires networking Apr 30 13:49:10.271417 systemd[1]: Reached target network.target - Network. Apr 30 13:49:10.234308 ignition[811]: Ignition finished successfully Apr 30 13:49:10.283629 systemd-networkd[923]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 13:49:10.309540 ignition[932]: Ignition 2.20.0 Apr 30 13:49:10.286431 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 13:49:10.309545 ignition[932]: Stage: kargs Apr 30 13:49:10.293358 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 13:49:10.309660 ignition[932]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:49:10.311970 systemd-networkd[923]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 13:49:10.309667 ignition[932]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:49:10.537271 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Apr 30 13:49:10.532198 systemd-networkd[923]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 13:49:10.310325 ignition[932]: kargs: kargs passed Apr 30 13:49:10.310328 ignition[932]: POST message to Packet Timeline Apr 30 13:49:10.310343 ignition[932]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:49:10.310883 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54443->[::1]:53: read: connection refused Apr 30 13:49:10.511507 ignition[932]: GET https://metadata.packet.net/metadata: attempt #2 Apr 30 13:49:10.511819 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33251->[::1]:53: read: connection refused Apr 30 13:49:10.805112 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Apr 30 13:49:10.806321 systemd-networkd[923]: eno1: Link UP Apr 30 13:49:10.806440 systemd-networkd[923]: eno2: Link UP Apr 30 13:49:10.806549 systemd-networkd[923]: enp2s0f0np0: Link UP Apr 30 13:49:10.806680 systemd-networkd[923]: enp2s0f0np0: Gained carrier Apr 30 13:49:10.817283 systemd-networkd[923]: enp2s0f1np1: Link UP Apr 30 13:49:10.855461 systemd-networkd[923]: enp2s0f0np0: DHCPv4 address 147.75.202.179/31, gateway 147.75.202.178 acquired from 145.40.83.140 Apr 30 13:49:10.912106 ignition[932]: GET https://metadata.packet.net/metadata: attempt #3 Apr 30 13:49:10.913147 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:48515->[::1]:53: read: connection refused Apr 30 13:49:11.578765 systemd-networkd[923]: enp2s0f1np1: Gained carrier Apr 30 13:49:11.713271 ignition[932]: GET https://metadata.packet.net/metadata: attempt #4 Apr 30 13:49:11.714420 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54855->[::1]:53: read: connection refused Apr 30 13:49:12.154579 systemd-networkd[923]: enp2s0f0np0: Gained IPv6LL Apr 30 13:49:12.666590 systemd-networkd[923]: enp2s0f1np1: Gained IPv6LL Apr 30 13:49:13.315269 ignition[932]: GET https://metadata.packet.net/metadata: attempt #5 Apr 30 13:49:13.316365 ignition[932]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50736->[::1]:53: read: connection refused Apr 30 13:49:16.518812 ignition[932]: GET https://metadata.packet.net/metadata: attempt #6 Apr 30 13:49:17.379858 ignition[932]: GET result: OK Apr 30 13:49:17.729923 ignition[932]: Ignition finished successfully Apr 30 13:49:17.731992 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 13:49:17.760364 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 13:49:17.766405 ignition[951]: Ignition 2.20.0 Apr 30 13:49:17.766409 ignition[951]: Stage: disks Apr 30 13:49:17.766508 ignition[951]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:49:17.766514 ignition[951]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:49:17.767035 ignition[951]: disks: disks passed Apr 30 13:49:17.767038 ignition[951]: POST message to Packet Timeline Apr 30 13:49:17.767052 ignition[951]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:49:18.857034 ignition[951]: GET result: OK Apr 30 13:49:19.207788 ignition[951]: Ignition finished successfully Apr 30 13:49:19.210488 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 13:49:19.226422 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Apr 30 13:49:19.246346 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 13:49:19.267363 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 13:49:19.289389 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 13:49:19.309518 systemd[1]: Reached target basic.target - Basic System. Apr 30 13:49:19.343348 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 13:49:19.378717 systemd-fsck[966]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 13:49:19.388494 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 13:49:19.423509 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 13:49:19.494024 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 13:49:19.509328 kernel: EXT4-fs (sdb9): mounted filesystem 59d16236-967d-47d1-a9bd-4b055a17ab77 r/w with ordered data mode. Quota mode: none. Apr 30 13:49:19.502599 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 13:49:19.533269 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 13:49:19.580146 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sdb6 scanned by mount (977) Apr 30 13:49:19.580163 kernel: BTRFS info (device sdb6): first mount of filesystem e4f69af6-ab85-4338-a66c-b8762fac213b Apr 30 13:49:19.580172 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:49:19.580179 kernel: BTRFS info (device sdb6): using free space tree Apr 30 13:49:19.541938 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 13:49:19.610290 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 13:49:19.610302 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 13:49:19.613583 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 13:49:19.625859 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Apr 30 13:49:19.636338 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 13:49:19.636362 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 13:49:19.685175 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 13:49:19.711213 coreos-metadata[995]: Apr 30 13:49:19.707 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 13:49:19.732272 coreos-metadata[994]: Apr 30 13:49:19.706 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 13:49:19.703347 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 13:49:19.734357 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 13:49:19.772254 initrd-setup-root[1009]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 13:49:19.782190 initrd-setup-root[1016]: cut: /sysroot/etc/group: No such file or directory Apr 30 13:49:19.792205 initrd-setup-root[1023]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 13:49:19.802188 initrd-setup-root[1030]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 13:49:19.818359 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 13:49:19.842266 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Apr 30 13:49:19.869280 kernel: BTRFS info (device sdb6): last unmount of filesystem e4f69af6-ab85-4338-a66c-b8762fac213b Apr 30 13:49:19.859711 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 13:49:19.878713 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 13:49:19.900072 ignition[1097]: INFO : Ignition 2.20.0 Apr 30 13:49:19.900072 ignition[1097]: INFO : Stage: mount Apr 30 13:49:19.908230 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 13:49:19.908230 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:49:19.908230 ignition[1097]: INFO : mount: mount passed Apr 30 13:49:19.908230 ignition[1097]: INFO : POST message to Packet Timeline Apr 30 13:49:19.908230 ignition[1097]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:49:19.905298 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 13:49:20.695598 coreos-metadata[995]: Apr 30 13:49:20.695 INFO Fetch successful Apr 30 13:49:20.779627 systemd[1]: flatcar-static-network.service: Deactivated successfully. Apr 30 13:49:20.779682 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Apr 30 13:49:20.812146 ignition[1097]: INFO : GET result: OK Apr 30 13:49:20.819158 coreos-metadata[994]: Apr 30 13:49:20.818 INFO Fetch successful Apr 30 13:49:20.847494 coreos-metadata[994]: Apr 30 13:49:20.847 INFO wrote hostname ci-4230.1.1-a-70e1417a44 to /sysroot/etc/hostname Apr 30 13:49:20.848662 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 13:49:21.169394 ignition[1097]: INFO : Ignition finished successfully Apr 30 13:49:21.172576 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 13:49:21.213609 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 13:49:21.230391 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 13:49:21.272953 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sdb6 scanned by mount (1120) Apr 30 13:49:21.272972 kernel: BTRFS info (device sdb6): first mount of filesystem e4f69af6-ab85-4338-a66c-b8762fac213b Apr 30 13:49:21.281037 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:49:21.286921 kernel: BTRFS info (device sdb6): using free space tree Apr 30 13:49:21.301973 kernel: BTRFS info (device sdb6): enabling ssd optimizations Apr 30 13:49:21.301994 kernel: BTRFS info (device sdb6): auto enabling async discard Apr 30 13:49:21.303889 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 13:49:21.330133 ignition[1137]: INFO : Ignition 2.20.0 Apr 30 13:49:21.330133 ignition[1137]: INFO : Stage: files Apr 30 13:49:21.344325 ignition[1137]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 13:49:21.344325 ignition[1137]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:49:21.344325 ignition[1137]: DEBUG : files: compiled without relabeling support, skipping Apr 30 13:49:21.344325 ignition[1137]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 13:49:21.344325 ignition[1137]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 13:49:21.344325 ignition[1137]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 13:49:21.344325 ignition[1137]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 13:49:21.344325 ignition[1137]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 13:49:21.344325 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 13:49:21.344325 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 30 13:49:21.333956 unknown[1137]: wrote ssh authorized keys file for user: core Apr 30 13:49:21.477147 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 13:49:21.497787 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 30 13:49:21.514319 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 13:49:21.514319 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 30 13:49:21.974610 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 13:49:22.170238 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 13:49:22.170238 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 
13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 13:49:22.202379 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Apr 30 13:49:22.633774 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 30 13:49:23.514098 ignition[1137]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Apr 30 13:49:23.514098 ignition[1137]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 30 13:49:23.543343 ignition[1137]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 13:49:23.543343 ignition[1137]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 13:49:23.543343 ignition[1137]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 30 13:49:23.543343 ignition[1137]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Apr 30 13:49:23.543343 ignition[1137]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 13:49:23.543343 ignition[1137]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 13:49:23.543343 ignition[1137]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 13:49:23.543343 ignition[1137]: INFO : files: files passed Apr 30 13:49:23.543343 ignition[1137]: INFO : POST message to Packet Timeline Apr 30 13:49:23.543343 ignition[1137]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:49:24.422269 ignition[1137]: INFO : GET result: OK Apr 30 13:49:24.824990 ignition[1137]: INFO : Ignition finished successfully Apr 30 13:49:24.828127 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 13:49:24.856341 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 13:49:24.866698 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 13:49:24.896502 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 13:49:24.896581 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 13:49:24.923317 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Apr 30 13:49:24.939638 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 13:49:24.970345 initrd-setup-root-after-ignition[1177]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 13:49:24.970345 initrd-setup-root-after-ignition[1177]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 13:49:24.984303 initrd-setup-root-after-ignition[1181]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 13:49:24.979225 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 13:49:25.039463 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 13:49:25.039516 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 13:49:25.058478 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 13:49:25.080280 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 13:49:25.100451 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 13:49:25.115513 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 13:49:25.193428 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 13:49:25.227883 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 13:49:25.260035 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 13:49:25.272676 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 13:49:25.293893 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 13:49:25.312836 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 13:49:25.313291 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 13:49:25.340957 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 13:49:25.362832 systemd[1]: Stopped target basic.target - Basic System. Apr 30 13:49:25.381840 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 13:49:25.400699 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 13:49:25.421699 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 13:49:25.443693 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 13:49:25.463711 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 13:49:25.484742 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 13:49:25.506866 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 13:49:25.526696 systemd[1]: Stopped target swap.target - Swaps. Apr 30 13:49:25.545750 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 13:49:25.546183 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 13:49:25.570948 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 13:49:25.590724 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 13:49:25.611578 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 13:49:25.612049 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 13:49:25.633590 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Apr 30 13:49:25.634007 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 13:49:25.665696 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 13:49:25.666182 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 13:49:25.685917 systemd[1]: Stopped target paths.target - Path Units. Apr 30 13:49:25.704703 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 13:49:25.705167 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 13:49:25.725710 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 13:49:25.744704 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 13:49:25.763682 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 13:49:25.763988 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 13:49:25.783721 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 13:49:25.784010 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 13:49:25.806832 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 13:49:25.807269 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 13:49:25.826789 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 13:49:25.946369 ignition[1201]: INFO : Ignition 2.20.0 Apr 30 13:49:25.946369 ignition[1201]: INFO : Stage: umount Apr 30 13:49:25.946369 ignition[1201]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 13:49:25.946369 ignition[1201]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:49:25.946369 ignition[1201]: INFO : umount: umount passed Apr 30 13:49:25.946369 ignition[1201]: INFO : POST message to Packet Timeline Apr 30 13:49:25.946369 ignition[1201]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:49:25.827205 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 13:49:25.845797 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 13:49:25.846219 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 13:49:25.877259 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 13:49:25.897929 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 13:49:25.916257 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 13:49:25.916386 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 13:49:25.928777 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 13:49:25.929137 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 13:49:25.980732 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 13:49:25.983363 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 13:49:25.983581 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 13:49:26.020630 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 13:49:26.020919 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 13:49:26.826249 ignition[1201]: INFO : GET result: OK Apr 30 13:49:27.232219 ignition[1201]: INFO : Ignition finished successfully Apr 30 13:49:27.235536 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 13:49:27.235922 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Apr 30 13:49:27.252575 systemd[1]: Stopped target network.target - Network. Apr 30 13:49:27.267433 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 13:49:27.267627 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 13:49:27.285533 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 13:49:27.285681 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 13:49:27.303580 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 13:49:27.303752 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 13:49:27.311883 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 13:49:27.312052 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 13:49:27.328736 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 13:49:27.328910 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 13:49:27.346138 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 13:49:27.372700 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 13:49:27.391201 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 13:49:27.391474 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 13:49:27.414158 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 30 13:49:27.414277 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 13:49:27.414327 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 13:49:27.432226 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 30 13:49:27.432882 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 13:49:27.432924 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 13:49:27.465255 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 13:49:27.482267 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 13:49:27.482511 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 13:49:27.504592 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 13:49:27.504762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:49:27.524789 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 13:49:27.524950 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 13:49:27.542540 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 13:49:27.542710 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 13:49:27.563784 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 13:49:27.588940 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 30 13:49:27.589169 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 30 13:49:27.590343 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 13:49:27.590696 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 13:49:27.617483 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 13:49:27.617517 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Apr 30 13:49:27.643197 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 13:49:27.643227 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 13:49:27.663299 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 13:49:27.663385 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 13:49:27.704289 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 13:49:27.704454 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 13:49:27.744281 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 13:49:27.744433 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:49:27.794340 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 13:49:27.827158 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 13:49:27.827209 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 13:49:27.846365 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 13:49:27.846449 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 13:49:27.868356 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 13:49:27.868484 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 13:49:28.111283 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Apr 30 13:49:27.889375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 13:49:27.889508 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:49:27.913830 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 30 13:49:27.913998 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 30 13:49:27.915025 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 13:49:27.915290 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 13:49:27.931967 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 13:49:27.932201 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 13:49:27.954204 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 13:49:27.989525 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 13:49:28.045561 systemd[1]: Switching root. Apr 30 13:49:28.219143 systemd-journald[267]: Journal stopped Apr 30 13:49:29.909190 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 13:49:29.909206 kernel: SELinux: policy capability open_perms=1 Apr 30 13:49:29.909213 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 13:49:29.909219 kernel: SELinux: policy capability always_check_network=0 Apr 30 13:49:29.909225 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 13:49:29.909231 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 13:49:29.909237 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 13:49:29.909243 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 13:49:29.909248 kernel: audit: type=1403 audit(1746020968.322:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 13:49:29.909255 systemd[1]: Successfully loaded SELinux policy in 72.913ms. 
Apr 30 13:49:29.909264 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.959ms. Apr 30 13:49:29.909271 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 13:49:29.909277 systemd[1]: Detected architecture x86-64. Apr 30 13:49:29.909283 systemd[1]: Detected first boot. Apr 30 13:49:29.909290 systemd[1]: Hostname set to . Apr 30 13:49:29.909298 systemd[1]: Initializing machine ID from random generator. Apr 30 13:49:29.909305 zram_generator::config[1257]: No configuration found. Apr 30 13:49:29.909312 systemd[1]: Populated /etc with preset unit settings. Apr 30 13:49:29.909319 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 30 13:49:29.909326 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 13:49:29.909332 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 13:49:29.909338 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 13:49:29.909346 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 13:49:29.909353 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 13:49:29.909359 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 13:49:29.909366 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 13:49:29.909372 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 13:49:29.909379 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 13:49:29.909386 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 13:49:29.909393 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 13:49:29.909400 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 13:49:29.909407 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 13:49:29.909414 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 13:49:29.909420 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 13:49:29.909427 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 13:49:29.909434 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 13:49:29.909441 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Apr 30 13:49:29.909448 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 13:49:29.909455 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 13:49:29.909462 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 13:49:29.909470 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 13:49:29.909477 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 13:49:29.909484 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 30 13:49:29.909491 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 13:49:29.909498 systemd[1]: Reached target slices.target - Slice Units. Apr 30 13:49:29.909506 systemd[1]: Reached target swap.target - Swaps. Apr 30 13:49:29.909513 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 13:49:29.909519 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 13:49:29.909526 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 30 13:49:29.909533 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 13:49:29.909543 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 13:49:29.909550 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 13:49:29.909557 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 13:49:29.909564 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 13:49:29.909571 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 13:49:29.909578 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 13:49:29.909585 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:49:29.909592 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 13:49:29.909600 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 13:49:29.909607 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 13:49:29.909614 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 13:49:29.909621 systemd[1]: Reached target machines.target - Containers. Apr 30 13:49:29.909628 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 13:49:29.909635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 13:49:29.909642 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 13:49:29.909649 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 13:49:29.909657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 13:49:29.909664 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 13:49:29.909671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 13:49:29.909677 kernel: ACPI: bus type drm_connector registered Apr 30 13:49:29.909684 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 13:49:29.909691 kernel: fuse: init (API version 7.39) Apr 30 13:49:29.909697 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 13:49:29.909704 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 13:49:29.909712 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 13:49:29.909719 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 13:49:29.909726 kernel: loop: module loaded Apr 30 13:49:29.909732 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
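The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services being started above are all instances of one template unit that loads whatever module the instance name carries. A quick way to see the template, as a sketch:

    # Show the template behind the modprobe@*.service instances above.
    systemctl cat modprobe@loop.service
    # Its ExecStart is roughly "modprobe -abq %I", i.e. the instance name
    # ("loop" here) is passed straight to modprobe.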
Apr 30 13:49:29.909739 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 13:49:29.909746 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 13:49:29.909753 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 13:49:29.909769 systemd-journald[1361]: Collecting audit messages is disabled. Apr 30 13:49:29.909786 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 13:49:29.909794 systemd-journald[1361]: Journal started Apr 30 13:49:29.909809 systemd-journald[1361]: Runtime Journal (/run/log/journal/61d8d6840d294cbd8a99c939a774e907) is 8M, max 639.9M, 631.9M free. Apr 30 13:49:28.754346 systemd[1]: Queued start job for default target multi-user.target. Apr 30 13:49:28.766927 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Apr 30 13:49:28.767163 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 13:49:29.937119 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 13:49:29.948110 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 13:49:29.980130 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 30 13:49:30.001173 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 13:49:30.022267 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 13:49:30.022295 systemd[1]: Stopped verity-setup.service. Apr 30 13:49:30.047115 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:49:30.055115 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 13:49:30.064524 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 13:49:30.075375 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 13:49:30.086348 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 13:49:30.096348 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 13:49:30.106333 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 13:49:30.116309 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 13:49:30.126417 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 13:49:30.137439 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 13:49:30.148504 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 13:49:30.148669 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 13:49:30.159575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 13:49:30.159787 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 13:49:30.171930 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 13:49:30.172386 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 13:49:30.182947 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 13:49:30.183445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 13:49:30.194936 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
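The journald line above reports the runtime journal under /run/log/journal with a cap of 639.9M, a value computed from available RAM. A sketch for checking usage and pinning an explicit cap instead; the 64M figure is illustrative:

    # Current journal disk usage (runtime and persistent).
    journalctl --disk-usage

    # Pin the runtime journal size rather than relying on the computed default.
    mkdir -p /etc/systemd/journald.conf.d
    cat > /etc/systemd/journald.conf.d/10-runtime-size.conf <<'EOF'
    [Journal]
    RuntimeMaxUse=64M
    EOF
    systemctl restart systemd-journald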
Apr 30 13:49:30.195371 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 13:49:30.206112 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 13:49:30.206538 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 13:49:30.217016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 13:49:30.228024 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 13:49:30.240010 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 13:49:30.252015 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 30 13:49:30.264009 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 13:49:30.297603 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 13:49:30.326519 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 13:49:30.339005 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 13:49:30.349294 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 13:49:30.349314 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 13:49:30.349971 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 30 13:49:30.373017 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 13:49:30.385163 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 13:49:30.396413 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 13:49:30.398387 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 13:49:30.408685 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 13:49:30.419189 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 13:49:30.419830 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 13:49:30.423251 systemd-journald[1361]: Time spent on flushing to /var/log/journal/61d8d6840d294cbd8a99c939a774e907 is 13.704ms for 1384 entries. Apr 30 13:49:30.423251 systemd-journald[1361]: System Journal (/var/log/journal/61d8d6840d294cbd8a99c939a774e907) is 8M, max 195.6M, 187.6M free. Apr 30 13:49:30.452952 systemd-journald[1361]: Received client request to flush runtime journal. Apr 30 13:49:30.437200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 13:49:30.437992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:49:30.447879 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 13:49:30.459859 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 13:49:30.470080 kernel: loop0: detected capacity change from 0 to 8 Apr 30 13:49:30.480920 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
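Among the jobs queued above is systemd-hwdb-update.service, which rebuilds the binary hardware database that udev consults. Re-running it by hand looks like this; the subsystem filter in the trigger is only an example:

    # Rebuild the binary hwdb from the hwdb.d fragments.
    systemd-hwdb update
    # Ask udev to re-evaluate matching devices so updated properties apply.
    udevadm trigger --subsystem-match=input --action=add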
Apr 30 13:49:30.481080 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 13:49:30.492832 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Apr 30 13:49:30.492859 systemd-tmpfiles[1402]: ACLs are not supported, ignoring. Apr 30 13:49:30.493467 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 13:49:30.504294 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 13:49:30.515312 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 13:49:30.526331 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 13:49:30.537080 kernel: loop1: detected capacity change from 0 to 210664 Apr 30 13:49:30.544272 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 13:49:30.556330 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:49:30.567427 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 13:49:30.582198 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 13:49:30.602083 kernel: loop2: detected capacity change from 0 to 147912 Apr 30 13:49:30.605231 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 30 13:49:30.616811 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 13:49:30.627752 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 13:49:30.628312 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 30 13:49:30.640270 udevadm[1406]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 13:49:30.644463 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 13:49:30.665277 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 13:49:30.672417 systemd-tmpfiles[1422]: ACLs are not supported, ignoring. Apr 30 13:49:30.672428 systemd-tmpfiles[1422]: ACLs are not supported, ignoring. Apr 30 13:49:30.679134 kernel: loop3: detected capacity change from 0 to 138176 Apr 30 13:49:30.683884 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 13:49:30.754127 kernel: loop4: detected capacity change from 0 to 8 Apr 30 13:49:30.761125 kernel: loop5: detected capacity change from 0 to 210664 Apr 30 13:49:30.770353 ldconfig[1393]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 13:49:30.771701 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 13:49:30.781142 kernel: loop6: detected capacity change from 0 to 147912 Apr 30 13:49:30.800121 kernel: loop7: detected capacity change from 0 to 138176 Apr 30 13:49:30.845287 (sd-merge)[1427]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Apr 30 13:49:30.845545 (sd-merge)[1427]: Merged extensions into '/usr'. Apr 30 13:49:30.848062 systemd[1]: Reload requested from client PID 1399 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 13:49:30.848071 systemd[1]: Reloading... Apr 30 13:49:30.871147 zram_generator::config[1452]: No configuration found. 
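The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-packet extension images onto /usr, which is why the reload that follows parses units such as docker.socket. A sketch of the commands for inspecting and refreshing that merge:

    # List the extensions currently merged into /usr and /opt.
    systemd-sysext list

    # Typical search locations for extension images or directories.
    ls /etc/extensions /run/extensions /var/lib/extensions 2>/dev/null

    # Re-run the merge after adding or removing an extension image.
    systemd-sysext refresh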
Apr 30 13:49:30.945420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 13:49:30.999230 systemd[1]: Reloading finished in 150 ms. Apr 30 13:49:31.018289 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 13:49:31.030464 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 13:49:31.053035 systemd[1]: Starting ensure-sysext.service... Apr 30 13:49:31.062037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 13:49:31.075296 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 13:49:31.088925 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 13:49:31.089139 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 13:49:31.089778 systemd-tmpfiles[1512]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 13:49:31.089997 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Apr 30 13:49:31.090047 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Apr 30 13:49:31.092329 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 13:49:31.092333 systemd-tmpfiles[1512]: Skipping /boot Apr 30 13:49:31.092611 systemd[1]: Reload requested from client PID 1511 ('systemctl') (unit ensure-sysext.service)... Apr 30 13:49:31.092634 systemd[1]: Reloading... Apr 30 13:49:31.098371 systemd-tmpfiles[1512]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 13:49:31.098375 systemd-tmpfiles[1512]: Skipping /boot Apr 30 13:49:31.104867 systemd-udevd[1513]: Using default interface naming scheme 'v255'. Apr 30 13:49:31.127092 zram_generator::config[1542]: No configuration found. Apr 30 13:49:31.160895 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Apr 30 13:49:31.160946 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 40 scanned by (udev-worker) (1631) Apr 30 13:49:31.160963 kernel: ACPI: button: Sleep Button [SLPB] Apr 30 13:49:31.174333 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 13:49:31.181085 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 13:49:31.193082 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Apr 30 13:49:31.210749 kernel: ACPI: button: Power Button [PWRF] Apr 30 13:49:31.210775 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Apr 30 13:49:31.210919 kernel: IPMI message handler: version 39.2 Apr 30 13:49:31.210939 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Apr 30 13:49:31.221197 kernel: ipmi device interface Apr 30 13:49:31.229083 kernel: iTCO_vendor_support: vendor-support=0 Apr 30 13:49:31.229128 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Apr 30 13:49:31.233521 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Apr 30 13:49:31.238306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
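The docker.socket warning above is harmless; systemd rewrites the legacy /var/run path to /run at load time. If the shipped unit cannot be fixed upstream, a drop-in silences the warning; a minimal sketch, with an illustrative file name:

    mkdir -p /etc/systemd/system/docker.socket.d
    cat > /etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
    [Socket]
    # Clear the inherited list, then set the non-legacy path.
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload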
Apr 30 13:49:31.301325 kernel: ipmi_si: IPMI System Interface driver Apr 30 13:49:31.301381 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Apr 30 13:49:31.301516 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Apr 30 13:49:31.322226 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Apr 30 13:49:31.322238 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Apr 30 13:49:31.322252 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Apr 30 13:49:31.354634 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Apr 30 13:49:31.354766 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Apr 30 13:49:31.354837 kernel: ipmi_si: Adding ACPI-specified kcs state machine Apr 30 13:49:31.354851 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Apr 30 13:49:31.333726 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Apr 30 13:49:31.373392 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Apr 30 13:49:31.373561 systemd[1]: Reloading finished in 280 ms. Apr 30 13:49:31.388830 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 13:49:31.402189 kernel: intel_rapl_common: Found RAPL domain package Apr 30 13:49:31.402214 kernel: intel_rapl_common: Found RAPL domain core Apr 30 13:49:31.408675 kernel: intel_rapl_common: Found RAPL domain dram Apr 30 13:49:31.429770 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 13:49:31.451752 systemd[1]: Finished ensure-sysext.service. Apr 30 13:49:31.454080 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Apr 30 13:49:31.481375 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Apr 30 13:49:31.487123 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Apr 30 13:49:31.496181 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:49:31.508216 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 13:49:31.517140 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 13:49:31.523080 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Apr 30 13:49:31.527652 augenrules[1716]: No rules Apr 30 13:49:31.531082 kernel: ipmi_ssif: IPMI SSIF Interface driver Apr 30 13:49:31.538312 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 13:49:31.538955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 13:49:31.560465 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 13:49:31.570659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 13:49:31.581678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 13:49:31.591239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 13:49:31.591758 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
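The \x2d sequences in unit names such as dev-disk-by\x2dlabel-OEM.device above are systemd's path escaping, not corruption. systemd-escape converts between the two forms, for example:

    # Escape a device path into the unit name used in the log above.
    systemd-escape --path --suffix=device /dev/disk/by-label/OEM
    #   -> dev-disk-by\x2dlabel-OEM.device

    # Query the resulting device unit (quotes keep the backslash intact).
    systemctl status 'dev-disk-by\x2dlabel-OEM.device'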
Apr 30 13:49:31.602179 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 13:49:31.602730 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 13:49:31.614041 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 13:49:31.615029 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 13:49:31.615952 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 13:49:31.640744 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 13:49:31.661478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:49:31.671171 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:49:31.671770 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 13:49:31.684237 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 13:49:31.684339 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 13:49:31.684597 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 13:49:31.684731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 13:49:31.684812 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 13:49:31.684950 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 13:49:31.685030 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 13:49:31.685186 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 13:49:31.685266 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 13:49:31.685398 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 13:49:31.685479 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 13:49:31.685624 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 13:49:31.685860 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 13:49:31.690785 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 13:49:31.691794 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 13:49:31.691824 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 13:49:31.691857 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 13:49:31.692459 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 13:49:31.693379 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 13:49:31.693404 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 13:49:31.698521 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Apr 30 13:49:31.699994 lvm[1745]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 13:49:31.715878 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 13:49:31.748623 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 13:49:31.752240 systemd-resolved[1729]: Positive Trust Anchors: Apr 30 13:49:31.752247 systemd-resolved[1729]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 13:49:31.752284 systemd-resolved[1729]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 13:49:31.757206 systemd-resolved[1729]: Using system hostname 'ci-4230.1.1-a-70e1417a44'. Apr 30 13:49:31.762366 systemd-networkd[1728]: lo: Link UP Apr 30 13:49:31.762369 systemd-networkd[1728]: lo: Gained carrier Apr 30 13:49:31.765109 systemd-networkd[1728]: bond0: netdev ready Apr 30 13:49:31.766249 systemd-networkd[1728]: Enumeration completed Apr 30 13:49:31.775881 systemd-networkd[1728]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d9:a2:ec.network. Apr 30 13:49:31.815468 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 13:49:31.825139 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 13:49:31.835278 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:49:31.847258 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 13:49:31.860326 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 13:49:31.870164 systemd[1]: Reached target network.target - Network. Apr 30 13:49:31.879115 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 13:49:31.890159 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 13:49:31.900203 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 13:49:31.911132 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 13:49:31.922150 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 13:49:31.933113 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 13:49:31.933130 systemd[1]: Reached target paths.target - Path Units. Apr 30 13:49:31.941146 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 13:49:31.951227 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 13:49:31.961208 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 13:49:31.972144 systemd[1]: Reached target timers.target - Timer Units. Apr 30 13:49:31.980843 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 13:49:31.990795 systemd[1]: Starting docker.socket - Docker Socket for the API... 
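systemd-networkd builds bond0 from the two Mellanox ports using the generated files named in the log (05-bond0.network and the per-MAC 10-*.network units). Their exact contents are not part of this log, so the sketch below only illustrates the shape such units take; the LACP mode matches the 802.3ad messages the kernel prints later.

    # Illustrative only: the real files are generated by the image's
    # network configuration, not written by hand.
    cat > /etc/systemd/network/05-bond0.netdev <<'EOF'
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad
    EOF

    cat > /etc/systemd/network/10-04:3f:72:d9:a2:ec.network <<'EOF'
    [Match]
    MACAddress=04:3f:72:d9:a2:ec

    [Network]
    Bond=bond0
    EOF

    # The second port (04:3f:72:d9:a2:ed) gets an equivalent file; inspect
    # the assembled bond with:
    networkctl status bond0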
Apr 30 13:49:32.000322 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 30 13:49:32.019443 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 13:49:32.029330 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 30 13:49:32.052220 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 13:49:32.054230 lvm[1766]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 13:49:32.063940 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 30 13:49:32.075968 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 13:49:32.094275 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 13:49:32.104273 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 13:49:32.115564 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 13:49:32.126115 systemd[1]: Reached target basic.target - Basic System. Apr 30 13:49:32.135188 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 13:49:32.135207 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 13:49:32.135833 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 13:49:32.146820 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 13:49:32.157651 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 13:49:32.167678 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 13:49:32.170964 coreos-metadata[1771]: Apr 30 13:49:32.170 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 13:49:32.171821 coreos-metadata[1771]: Apr 30 13:49:32.171 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Apr 30 13:49:32.178724 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 13:49:32.178979 dbus-daemon[1772]: [system] SELinux support is enabled Apr 30 13:49:32.180429 jq[1775]: false Apr 30 13:49:32.189140 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 13:49:32.189761 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
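coreos-metadata's first fetch of https://metadata.packet.net/metadata fails because bond0 is not up yet; it succeeds on a later attempt once networking is online. The same endpoint can be queried by hand, as a quick sketch:

    # Retry what the metadata agent does once the bond has carrier.
    curl -sS https://metadata.packet.net/metadata | head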
Apr 30 13:49:32.197477 extend-filesystems[1777]: Found loop4 Apr 30 13:49:32.197477 extend-filesystems[1777]: Found loop5 Apr 30 13:49:32.233287 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Apr 30 13:49:32.233304 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 40 scanned by (udev-worker) (1559) Apr 30 13:49:32.233314 extend-filesystems[1777]: Found loop6 Apr 30 13:49:32.233314 extend-filesystems[1777]: Found loop7 Apr 30 13:49:32.233314 extend-filesystems[1777]: Found sda Apr 30 13:49:32.233314 extend-filesystems[1777]: Found sdb Apr 30 13:49:32.233314 extend-filesystems[1777]: Found sdb1 Apr 30 13:49:32.233314 extend-filesystems[1777]: Found sdb2 Apr 30 13:49:32.233314 extend-filesystems[1777]: Found sdb3 Apr 30 13:49:32.233314 extend-filesystems[1777]: Found usr Apr 30 13:49:32.233314 extend-filesystems[1777]: Found sdb4 Apr 30 13:49:32.233314 extend-filesystems[1777]: Found sdb6 Apr 30 13:49:32.233314 extend-filesystems[1777]: Found sdb7 Apr 30 13:49:32.233314 extend-filesystems[1777]: Found sdb9 Apr 30 13:49:32.233314 extend-filesystems[1777]: Checking size of /dev/sdb9 Apr 30 13:49:32.233314 extend-filesystems[1777]: Resized partition /dev/sdb9 Apr 30 13:49:32.200674 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 13:49:32.418190 extend-filesystems[1785]: resize2fs 1.47.1 (20-May-2024) Apr 30 13:49:32.223467 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 13:49:32.249765 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 13:49:32.272489 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 13:49:32.293290 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Apr 30 13:49:32.438435 sshd_keygen[1800]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 13:49:32.307496 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 13:49:32.438556 update_engine[1802]: I20250430 13:49:32.337599 1802 main.cc:92] Flatcar Update Engine starting Apr 30 13:49:32.438556 update_engine[1802]: I20250430 13:49:32.338383 1802 update_check_scheduler.cc:74] Next update check in 3m56s Apr 30 13:49:32.307880 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 13:49:32.438732 jq[1803]: true Apr 30 13:49:32.322026 systemd-logind[1797]: Watching system buttons on /dev/input/event3 (Power Button) Apr 30 13:49:32.322035 systemd-logind[1797]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 30 13:49:32.322045 systemd-logind[1797]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Apr 30 13:49:32.322208 systemd-logind[1797]: New seat seat0. Apr 30 13:49:32.323752 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 13:49:32.345576 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 13:49:32.372148 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 13:49:32.393228 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 13:49:32.393339 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 13:49:32.393490 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 13:49:32.393586 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
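sshd-keygen above creates the RSA, ECDSA and ED25519 host keys on first boot. Doing the same manually, plus printing the fingerprints, is a one-liner each:

    # Generate any missing host keys of the default types.
    ssh-keygen -A
    # Print fingerprints of the resulting public host keys.
    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done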
Apr 30 13:49:32.410551 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 13:49:32.410654 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 13:49:32.418345 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 13:49:32.455991 jq[1815]: true Apr 30 13:49:32.456648 (ntainerd)[1816]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 13:49:32.461037 tar[1812]: linux-amd64/helm Apr 30 13:49:32.461178 dbus-daemon[1772]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 13:49:32.467322 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Apr 30 13:49:32.467436 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Apr 30 13:49:32.476000 systemd[1]: Started update-engine.service - Update Engine. Apr 30 13:49:32.494240 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 13:49:32.502189 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 13:49:32.502298 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 13:49:32.514187 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 13:49:32.514267 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 13:49:32.517352 bash[1843]: Updated "/home/core/.ssh/authorized_keys" Apr 30 13:49:32.542268 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 13:49:32.554705 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 13:49:32.565400 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 13:49:32.565511 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 13:49:32.574462 locksmithd[1851]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 13:49:32.576870 systemd[1]: Starting sshkeys.service... Apr 30 13:49:32.584081 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Apr 30 13:49:32.596325 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 13:49:32.598083 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Apr 30 13:49:32.598820 systemd-networkd[1728]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d9:a2:ed.network. Apr 30 13:49:32.613127 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 13:49:32.625160 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 13:49:32.637614 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Apr 30 13:49:32.648572 coreos-metadata[1865]: Apr 30 13:49:32.648 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 13:49:32.649300 containerd[1816]: time="2025-04-30T13:49:32.649259887Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Apr 30 13:49:32.649403 coreos-metadata[1865]: Apr 30 13:49:32.649 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Apr 30 13:49:32.661350 containerd[1816]: time="2025-04-30T13:49:32.661323999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662173 containerd[1816]: time="2025-04-30T13:49:32.662106737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662209 containerd[1816]: time="2025-04-30T13:49:32.662173568Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 13:49:32.662209 containerd[1816]: time="2025-04-30T13:49:32.662189362Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 13:49:32.662355 containerd[1816]: time="2025-04-30T13:49:32.662340642Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 13:49:32.662396 containerd[1816]: time="2025-04-30T13:49:32.662380837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662461 containerd[1816]: time="2025-04-30T13:49:32.662448718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662482 containerd[1816]: time="2025-04-30T13:49:32.662462990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662634 containerd[1816]: time="2025-04-30T13:49:32.662623077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662654 containerd[1816]: time="2025-04-30T13:49:32.662633297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662654 containerd[1816]: time="2025-04-30T13:49:32.662641884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662654 containerd[1816]: time="2025-04-30T13:49:32.662647186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662699 containerd[1816]: time="2025-04-30T13:49:32.662692049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662814 containerd[1816]: time="2025-04-30T13:49:32.662805372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662883 containerd[1816]: time="2025-04-30T13:49:32.662873788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:49:32.662902 containerd[1816]: time="2025-04-30T13:49:32.662882467Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 13:49:32.662931 containerd[1816]: time="2025-04-30T13:49:32.662924652Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 13:49:32.662959 containerd[1816]: time="2025-04-30T13:49:32.662952844Z" level=info msg="metadata content store policy set" policy=shared Apr 30 13:49:32.666460 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 13:49:32.675168 containerd[1816]: time="2025-04-30T13:49:32.675131202Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 13:49:32.675168 containerd[1816]: time="2025-04-30T13:49:32.675158378Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 13:49:32.675168 containerd[1816]: time="2025-04-30T13:49:32.675167999Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 13:49:32.675260 containerd[1816]: time="2025-04-30T13:49:32.675177423Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 13:49:32.675260 containerd[1816]: time="2025-04-30T13:49:32.675185209Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 13:49:32.675224 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Apr 30 13:49:32.675353 containerd[1816]: time="2025-04-30T13:49:32.675257933Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 13:49:32.675414 containerd[1816]: time="2025-04-30T13:49:32.675403425Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 13:49:32.675474 containerd[1816]: time="2025-04-30T13:49:32.675464924Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 13:49:32.675504 containerd[1816]: time="2025-04-30T13:49:32.675475193Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 13:49:32.675504 containerd[1816]: time="2025-04-30T13:49:32.675483597Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 13:49:32.675504 containerd[1816]: time="2025-04-30T13:49:32.675490727Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 13:49:32.675504 containerd[1816]: time="2025-04-30T13:49:32.675498281Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 13:49:32.675504 containerd[1816]: time="2025-04-30T13:49:32.675504676Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675511581Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675518938Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675525716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675532448Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675538353Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675550403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675558202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675564797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675571451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675578160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675587366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675594285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675601101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675623 containerd[1816]: time="2025-04-30T13:49:32.675608180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675947 containerd[1816]: time="2025-04-30T13:49:32.675616264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675947 containerd[1816]: time="2025-04-30T13:49:32.675622745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675947 containerd[1816]: time="2025-04-30T13:49:32.675629261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675947 containerd[1816]: time="2025-04-30T13:49:32.675636657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675947 containerd[1816]: time="2025-04-30T13:49:32.675644763Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Apr 30 13:49:32.675947 containerd[1816]: time="2025-04-30T13:49:32.675657406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675947 containerd[1816]: time="2025-04-30T13:49:32.675665535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.675947 containerd[1816]: time="2025-04-30T13:49:32.675671107Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 13:49:32.676144 containerd[1816]: time="2025-04-30T13:49:32.676024952Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 13:49:32.676144 containerd[1816]: time="2025-04-30T13:49:32.676038161Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 13:49:32.676144 containerd[1816]: time="2025-04-30T13:49:32.676044890Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 13:49:32.676144 containerd[1816]: time="2025-04-30T13:49:32.676051698Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 13:49:32.676144 containerd[1816]: time="2025-04-30T13:49:32.676056664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 13:49:32.676144 containerd[1816]: time="2025-04-30T13:49:32.676063800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 13:49:32.676144 containerd[1816]: time="2025-04-30T13:49:32.676070294Z" level=info msg="NRI interface is disabled by configuration." Apr 30 13:49:32.676144 containerd[1816]: time="2025-04-30T13:49:32.676088247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 30 13:49:32.676337 containerd[1816]: time="2025-04-30T13:49:32.676251126Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 13:49:32.676337 containerd[1816]: time="2025-04-30T13:49:32.676278575Z" level=info msg="Connect containerd service" Apr 30 13:49:32.676337 containerd[1816]: time="2025-04-30T13:49:32.676295613Z" level=info msg="using legacy CRI server" Apr 30 13:49:32.676337 containerd[1816]: time="2025-04-30T13:49:32.676300039Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 13:49:32.676516 containerd[1816]: time="2025-04-30T13:49:32.676358704Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 13:49:32.676666 containerd[1816]: time="2025-04-30T13:49:32.676653670Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 13:49:32.676800 
containerd[1816]: time="2025-04-30T13:49:32.676778794Z" level=info msg="Start subscribing containerd event" Apr 30 13:49:32.676829 containerd[1816]: time="2025-04-30T13:49:32.676802728Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 13:49:32.676829 containerd[1816]: time="2025-04-30T13:49:32.676809930Z" level=info msg="Start recovering state" Apr 30 13:49:32.676829 containerd[1816]: time="2025-04-30T13:49:32.676827324Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 13:49:32.676901 containerd[1816]: time="2025-04-30T13:49:32.676845133Z" level=info msg="Start event monitor" Apr 30 13:49:32.676901 containerd[1816]: time="2025-04-30T13:49:32.676854339Z" level=info msg="Start snapshots syncer" Apr 30 13:49:32.676901 containerd[1816]: time="2025-04-30T13:49:32.676859993Z" level=info msg="Start cni network conf syncer for default" Apr 30 13:49:32.676901 containerd[1816]: time="2025-04-30T13:49:32.676865210Z" level=info msg="Start streaming server" Apr 30 13:49:32.676901 containerd[1816]: time="2025-04-30T13:49:32.676897303Z" level=info msg="containerd successfully booted in 0.028374s" Apr 30 13:49:32.685410 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 13:49:32.693617 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 13:49:32.738607 tar[1812]: linux-amd64/LICENSE Apr 30 13:49:32.738660 tar[1812]: linux-amd64/README.md Apr 30 13:49:32.747080 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Apr 30 13:49:32.771112 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Apr 30 13:49:32.771236 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Apr 30 13:49:32.771762 extend-filesystems[1785]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Apr 30 13:49:32.771762 extend-filesystems[1785]: old_desc_blocks = 1, new_desc_blocks = 56 Apr 30 13:49:32.771762 extend-filesystems[1785]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Apr 30 13:49:32.828230 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Apr 30 13:49:32.772271 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 13:49:32.828316 extend-filesystems[1777]: Resized filesystem in /dev/sdb9 Apr 30 13:49:32.772382 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 13:49:32.778191 systemd-networkd[1728]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Apr 30 13:49:32.779968 systemd-networkd[1728]: enp2s0f0np0: Link UP Apr 30 13:49:32.780352 systemd-networkd[1728]: enp2s0f0np0: Gained carrier Apr 30 13:49:32.803254 systemd-networkd[1728]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d9:a2:ec.network. Apr 30 13:49:32.803408 systemd-networkd[1728]: enp2s0f1np1: Link UP Apr 30 13:49:32.803533 systemd-networkd[1728]: enp2s0f1np1: Gained carrier Apr 30 13:49:32.819230 systemd-networkd[1728]: bond0: Link UP Apr 30 13:49:32.819393 systemd-networkd[1728]: bond0: Gained carrier Apr 30 13:49:32.819554 systemd-timesyncd[1730]: Network configuration changed, trying to establish connection. Apr 30 13:49:32.819970 systemd-timesyncd[1730]: Network configuration changed, trying to establish connection. Apr 30 13:49:32.820051 systemd-timesyncd[1730]: Network configuration changed, trying to establish connection. Apr 30 13:49:32.820219 systemd-timesyncd[1730]: Network configuration changed, trying to establish connection. 
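The long CRI configuration dump and the "containerd successfully booted" message above correspond to a config.toml of roughly the following shape. This is a sketch reconstructed from values visible in the dump (overlayfs snapshotter, runc with SystemdCgroup=true, pause:3.8 sandbox image), not the literal file shipped on the host; it is written to an .example path to keep it clearly illustrative.

    cat > /etc/containerd/config.toml.example <<'EOF'
    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    EOF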
Apr 30 13:49:32.828715 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 30 13:49:32.847509 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 13:49:32.897206 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Apr 30 13:49:32.897226 kernel: bond0: active interface up! Apr 30 13:49:33.012080 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Apr 30 13:49:33.171927 coreos-metadata[1771]: Apr 30 13:49:33.171 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Apr 30 13:49:33.649525 coreos-metadata[1865]: Apr 30 13:49:33.649 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Apr 30 13:49:34.043230 systemd-timesyncd[1730]: Network configuration changed, trying to establish connection. Apr 30 13:49:34.298188 systemd-networkd[1728]: bond0: Gained IPv6LL Apr 30 13:49:34.298430 systemd-timesyncd[1730]: Network configuration changed, trying to establish connection. Apr 30 13:49:34.299439 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 13:49:34.311904 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 13:49:34.333280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:49:34.347319 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 13:49:34.366770 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 13:49:35.056509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:49:35.067594 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 13:49:35.580167 kubelet[1911]: E0430 13:49:35.580141 1911 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 13:49:35.581398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 13:49:35.581478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 13:49:35.581676 systemd[1]: kubelet.service: Consumed 570ms CPU time, 256.8M memory peak. Apr 30 13:49:36.449278 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 13:49:36.473424 systemd[1]: Started sshd@0-147.75.202.179:22-147.75.109.163:54030.service - OpenSSH per-connection server daemon (147.75.109.163:54030). Apr 30 13:49:36.495472 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Apr 30 13:49:36.495628 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Apr 30 13:49:36.518980 sshd[1933]: Accepted publickey for core from 147.75.109.163 port 54030 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:49:36.519907 sshd-session[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:49:36.527731 systemd-logind[1797]: New session 1 of user core. Apr 30 13:49:36.528918 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 13:49:36.559504 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
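The kubelet failure above is expected at this point: /var/lib/kubelet/config.yaml does not exist until the node is bootstrapped (on kubeadm-managed nodes, kubeadm writes it during init/join), so systemd keeps restarting the unit until then. Purely as an illustration of the kind of file it is looking for:

    # Illustrative only; on a real node this file comes from the bootstrap
    # tooling, not from hand edits.
    mkdir -p /var/lib/kubelet
    cat > /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF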
Apr 30 13:49:36.572293 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 13:49:36.595465 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 13:49:36.605625 (systemd)[1939]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 13:49:36.607972 systemd-logind[1797]: New session c1 of user core. Apr 30 13:49:36.722586 systemd[1939]: Queued start job for default target default.target. Apr 30 13:49:36.734605 systemd[1939]: Created slice app.slice - User Application Slice. Apr 30 13:49:36.734651 systemd[1939]: Reached target paths.target - Paths. Apr 30 13:49:36.734683 systemd[1939]: Reached target timers.target - Timers. Apr 30 13:49:36.735361 systemd[1939]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 13:49:36.741105 systemd[1939]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 13:49:36.741158 systemd[1939]: Reached target sockets.target - Sockets. Apr 30 13:49:36.741188 systemd[1939]: Reached target basic.target - Basic System. Apr 30 13:49:36.741221 systemd[1939]: Reached target default.target - Main User Target. Apr 30 13:49:36.741242 systemd[1939]: Startup finished in 126ms. Apr 30 13:49:36.741270 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 13:49:36.752203 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 13:49:36.820738 systemd[1]: Started sshd@1-147.75.202.179:22-147.75.109.163:54034.service - OpenSSH per-connection server daemon (147.75.109.163:54034). Apr 30 13:49:36.870754 sshd[1950]: Accepted publickey for core from 147.75.109.163 port 54034 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:49:36.871375 sshd-session[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:49:36.873884 systemd-logind[1797]: New session 2 of user core. Apr 30 13:49:36.892321 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 13:49:36.949070 sshd[1952]: Connection closed by 147.75.109.163 port 54034 Apr 30 13:49:36.949275 sshd-session[1950]: pam_unix(sshd:session): session closed for user core Apr 30 13:49:36.966046 systemd[1]: sshd@1-147.75.202.179:22-147.75.109.163:54034.service: Deactivated successfully. Apr 30 13:49:36.966808 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 13:49:36.967485 systemd-logind[1797]: Session 2 logged out. Waiting for processes to exit. Apr 30 13:49:36.968162 systemd[1]: Started sshd@2-147.75.202.179:22-147.75.109.163:54038.service - OpenSSH per-connection server daemon (147.75.109.163:54038). Apr 30 13:49:36.980717 systemd-logind[1797]: Removed session 2. Apr 30 13:49:37.009256 sshd[1957]: Accepted publickey for core from 147.75.109.163 port 54038 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:49:37.009833 sshd-session[1957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:49:37.012369 systemd-logind[1797]: New session 3 of user core. Apr 30 13:49:37.027244 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 13:49:37.083211 sshd[1960]: Connection closed by 147.75.109.163 port 54038 Apr 30 13:49:37.083340 sshd-session[1957]: pam_unix(sshd:session): session closed for user core Apr 30 13:49:37.084675 systemd[1]: sshd@2-147.75.202.179:22-147.75.109.163:54038.service: Deactivated successfully. Apr 30 13:49:37.085556 systemd[1]: session-3.scope: Deactivated successfully. 
Apr 30 13:49:37.086229 systemd-logind[1797]: Session 3 logged out. Waiting for processes to exit. Apr 30 13:49:37.086774 systemd-logind[1797]: Removed session 3. Apr 30 13:49:37.238282 coreos-metadata[1865]: Apr 30 13:49:37.238 INFO Fetch successful Apr 30 13:49:37.253248 coreos-metadata[1771]: Apr 30 13:49:37.253 INFO Fetch successful Apr 30 13:49:37.274214 unknown[1865]: wrote ssh authorized keys file for user: core Apr 30 13:49:37.292885 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 13:49:37.295523 update-ssh-keys[1966]: Updated "/home/core/.ssh/authorized_keys" Apr 30 13:49:37.302680 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 13:49:37.313975 systemd[1]: Finished sshkeys.service. Apr 30 13:49:37.333333 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Apr 30 13:49:37.689522 login[1882]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 13:49:37.692592 systemd-logind[1797]: New session 4 of user core. Apr 30 13:49:37.693573 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 13:49:37.694866 login[1877]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 13:49:37.697413 systemd-logind[1797]: New session 5 of user core. Apr 30 13:49:37.698353 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 13:49:37.714043 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Apr 30 13:49:37.714655 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 13:49:37.714835 systemd[1]: Startup finished in 2.845s (kernel) + 22.479s (initrd) + 9.464s (userspace) = 34.789s. Apr 30 13:49:39.648703 systemd-timesyncd[1730]: Network configuration changed, trying to establish connection. Apr 30 13:49:45.618149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 13:49:45.634341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:49:45.833948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:49:45.838137 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 13:49:45.898696 kubelet[2010]: E0430 13:49:45.898588 2010 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 13:49:45.901096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 13:49:45.901187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 13:49:45.901378 systemd[1]: kubelet.service: Consumed 164ms CPU time, 106.5M memory peak. Apr 30 13:49:47.121374 systemd[1]: Started sshd@3-147.75.202.179:22-147.75.109.163:46658.service - OpenSSH per-connection server daemon (147.75.109.163:46658). Apr 30 13:49:47.151195 sshd[2028]: Accepted publickey for core from 147.75.109.163 port 46658 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:49:47.151924 sshd-session[2028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:49:47.155064 systemd-logind[1797]: New session 6 of user core. 
Apr 30 13:49:47.176557 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 13:49:47.230898 sshd[2030]: Connection closed by 147.75.109.163 port 46658 Apr 30 13:49:47.231052 sshd-session[2028]: pam_unix(sshd:session): session closed for user core Apr 30 13:49:47.244465 systemd[1]: sshd@3-147.75.202.179:22-147.75.109.163:46658.service: Deactivated successfully. Apr 30 13:49:47.245326 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 13:49:47.246082 systemd-logind[1797]: Session 6 logged out. Waiting for processes to exit. Apr 30 13:49:47.246855 systemd[1]: Started sshd@4-147.75.202.179:22-147.75.109.163:46670.service - OpenSSH per-connection server daemon (147.75.109.163:46670). Apr 30 13:49:47.247552 systemd-logind[1797]: Removed session 6. Apr 30 13:49:47.281238 sshd[2035]: Accepted publickey for core from 147.75.109.163 port 46670 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:49:47.282044 sshd-session[2035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:49:47.285542 systemd-logind[1797]: New session 7 of user core. Apr 30 13:49:47.302448 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 13:49:47.354960 sshd[2038]: Connection closed by 147.75.109.163 port 46670 Apr 30 13:49:47.355118 sshd-session[2035]: pam_unix(sshd:session): session closed for user core Apr 30 13:49:47.371794 systemd[1]: sshd@4-147.75.202.179:22-147.75.109.163:46670.service: Deactivated successfully. Apr 30 13:49:47.372778 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 13:49:47.373396 systemd-logind[1797]: Session 7 logged out. Waiting for processes to exit. Apr 30 13:49:47.374452 systemd[1]: Started sshd@5-147.75.202.179:22-147.75.109.163:46676.service - OpenSSH per-connection server daemon (147.75.109.163:46676). Apr 30 13:49:47.375009 systemd-logind[1797]: Removed session 7. Apr 30 13:49:47.407524 sshd[2043]: Accepted publickey for core from 147.75.109.163 port 46676 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:49:47.408366 sshd-session[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:49:47.411870 systemd-logind[1797]: New session 8 of user core. Apr 30 13:49:47.423349 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 13:49:47.485821 sshd[2046]: Connection closed by 147.75.109.163 port 46676 Apr 30 13:49:47.486608 sshd-session[2043]: pam_unix(sshd:session): session closed for user core Apr 30 13:49:47.510538 systemd[1]: sshd@5-147.75.202.179:22-147.75.109.163:46676.service: Deactivated successfully. Apr 30 13:49:47.514597 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 13:49:47.516935 systemd-logind[1797]: Session 8 logged out. Waiting for processes to exit. Apr 30 13:49:47.531034 systemd[1]: Started sshd@6-147.75.202.179:22-147.75.109.163:46690.service - OpenSSH per-connection server daemon (147.75.109.163:46690). Apr 30 13:49:47.534929 systemd-logind[1797]: Removed session 8. Apr 30 13:49:47.584940 sshd[2051]: Accepted publickey for core from 147.75.109.163 port 46690 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:49:47.585785 sshd-session[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:49:47.589209 systemd-logind[1797]: New session 9 of user core. Apr 30 13:49:47.603334 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 30 13:49:47.667640 sudo[2055]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 13:49:47.667791 sudo[2055]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:49:47.687508 sudo[2055]: pam_unix(sudo:session): session closed for user root Apr 30 13:49:47.690200 sshd[2054]: Connection closed by 147.75.109.163 port 46690 Apr 30 13:49:47.691102 sshd-session[2051]: pam_unix(sshd:session): session closed for user core Apr 30 13:49:47.712578 systemd[1]: sshd@6-147.75.202.179:22-147.75.109.163:46690.service: Deactivated successfully. Apr 30 13:49:47.716343 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 13:49:47.718602 systemd-logind[1797]: Session 9 logged out. Waiting for processes to exit. Apr 30 13:49:47.734889 systemd[1]: Started sshd@7-147.75.202.179:22-147.75.109.163:46702.service - OpenSSH per-connection server daemon (147.75.109.163:46702). Apr 30 13:49:47.737726 systemd-logind[1797]: Removed session 9. Apr 30 13:49:47.796567 sshd[2060]: Accepted publickey for core from 147.75.109.163 port 46702 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:49:47.797512 sshd-session[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:49:47.801404 systemd-logind[1797]: New session 10 of user core. Apr 30 13:49:47.815314 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 13:49:47.872930 sudo[2065]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 13:49:47.873082 sudo[2065]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:49:47.875192 sudo[2065]: pam_unix(sudo:session): session closed for user root Apr 30 13:49:47.877825 sudo[2064]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 13:49:47.877965 sudo[2064]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:49:47.898576 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 13:49:47.926952 augenrules[2087]: No rules Apr 30 13:49:47.927863 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 13:49:47.928235 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 13:49:47.929980 sudo[2064]: pam_unix(sudo:session): session closed for user root Apr 30 13:49:47.932213 sshd[2063]: Connection closed by 147.75.109.163 port 46702 Apr 30 13:49:47.932823 sshd-session[2060]: pam_unix(sshd:session): session closed for user core Apr 30 13:49:47.950797 systemd[1]: sshd@7-147.75.202.179:22-147.75.109.163:46702.service: Deactivated successfully. Apr 30 13:49:47.951539 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 13:49:47.952192 systemd-logind[1797]: Session 10 logged out. Waiting for processes to exit. Apr 30 13:49:47.952784 systemd[1]: Started sshd@8-147.75.202.179:22-147.75.109.163:46716.service - OpenSSH per-connection server daemon (147.75.109.163:46716). Apr 30 13:49:47.953236 systemd-logind[1797]: Removed session 10. Apr 30 13:49:47.985020 sshd[2095]: Accepted publickey for core from 147.75.109.163 port 46716 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:49:47.985787 sshd-session[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:49:47.989098 systemd-logind[1797]: New session 11 of user core. Apr 30 13:49:47.997319 systemd[1]: Started session-11.scope - Session 11 of User core. 
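The two sudo commands above remove the shipped SELinux and default audit rule files and restart audit-rules, which is why augenrules then reports "No rules": it assembles the active ruleset from whatever remains under /etc/audit/rules.d/. Any rule file dropped back into that directory is picked up on the next reload; as a purely hypothetical example (this path and key are not from the log):

    # /etc/audit/rules.d/90-kube.rules (hypothetical)
    # watch the static pod manifest directory for writes and attribute changes
    -w /etc/kubernetes/manifests -p wa -k kube-manifests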
Apr 30 13:49:48.059379 sudo[2099]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 13:49:48.060238 sudo[2099]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:49:48.350418 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 13:49:48.350477 (dockerd)[2127]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 13:49:48.620708 dockerd[2127]: time="2025-04-30T13:49:48.620655700Z" level=info msg="Starting up" Apr 30 13:49:48.745668 dockerd[2127]: time="2025-04-30T13:49:48.745613315Z" level=info msg="Loading containers: start." Apr 30 13:49:48.867120 kernel: Initializing XFRM netlink socket Apr 30 13:49:48.882396 systemd-timesyncd[1730]: Network configuration changed, trying to establish connection. Apr 30 13:49:48.951738 systemd-networkd[1728]: docker0: Link UP Apr 30 13:49:48.994366 dockerd[2127]: time="2025-04-30T13:49:48.994242797Z" level=info msg="Loading containers: done." Apr 30 13:49:49.012942 dockerd[2127]: time="2025-04-30T13:49:49.012904241Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 13:49:49.013014 dockerd[2127]: time="2025-04-30T13:49:49.012968676Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Apr 30 13:49:49.013033 dockerd[2127]: time="2025-04-30T13:49:49.013023992Z" level=info msg="Daemon has completed initialization" Apr 30 13:49:49.027743 dockerd[2127]: time="2025-04-30T13:49:49.027718811Z" level=info msg="API listen on /run/docker.sock" Apr 30 13:49:49.027840 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 13:49:49.729461 systemd-resolved[1729]: Clock change detected. Flushing caches. Apr 30 13:49:49.729540 systemd-timesyncd[1730]: Contacted time server [2605:6400:40:fec3:5a00:b1ba:9a51:c93b]:123 (2.flatcar.pool.ntp.org). Apr 30 13:49:49.729574 systemd-timesyncd[1730]: Initial clock synchronization to Wed 2025-04-30 13:49:49.729361 UTC. Apr 30 13:49:50.208894 containerd[1816]: time="2025-04-30T13:49:50.208842312Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 13:49:50.912156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3772629902.mount: Deactivated successfully. 
Apr 30 13:49:52.621060 containerd[1816]: time="2025-04-30T13:49:52.621035718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:52.621282 containerd[1816]: time="2025-04-30T13:49:52.621146781Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" Apr 30 13:49:52.621635 containerd[1816]: time="2025-04-30T13:49:52.621622718Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:52.623494 containerd[1816]: time="2025-04-30T13:49:52.623464266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:52.624019 containerd[1816]: time="2025-04-30T13:49:52.624007866Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.415144052s" Apr 30 13:49:52.624052 containerd[1816]: time="2025-04-30T13:49:52.624022319Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" Apr 30 13:49:52.635136 containerd[1816]: time="2025-04-30T13:49:52.635103321Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 13:49:54.506325 containerd[1816]: time="2025-04-30T13:49:54.506293143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:54.506539 containerd[1816]: time="2025-04-30T13:49:54.506511676Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" Apr 30 13:49:54.507016 containerd[1816]: time="2025-04-30T13:49:54.507003718Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:54.508935 containerd[1816]: time="2025-04-30T13:49:54.508921625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:54.509480 containerd[1816]: time="2025-04-30T13:49:54.509433911Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.874307253s" Apr 30 13:49:54.509480 containerd[1816]: time="2025-04-30T13:49:54.509450809Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" Apr 30 
13:49:54.521343 containerd[1816]: time="2025-04-30T13:49:54.521325031Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 13:49:55.781510 containerd[1816]: time="2025-04-30T13:49:55.781484639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:55.781731 containerd[1816]: time="2025-04-30T13:49:55.781711856Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" Apr 30 13:49:55.782032 containerd[1816]: time="2025-04-30T13:49:55.782022360Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:55.783932 containerd[1816]: time="2025-04-30T13:49:55.783918467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:55.784434 containerd[1816]: time="2025-04-30T13:49:55.784388059Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.263043873s" Apr 30 13:49:55.784434 containerd[1816]: time="2025-04-30T13:49:55.784402702Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" Apr 30 13:49:55.795665 containerd[1816]: time="2025-04-30T13:49:55.795645109Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 13:49:56.371168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 13:49:56.392514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:49:56.541083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778262235.mount: Deactivated successfully. Apr 30 13:49:56.599878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:49:56.601914 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 13:49:56.625494 kubelet[2465]: E0430 13:49:56.625379 2465 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 13:49:56.626565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 13:49:56.626664 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 13:49:56.626848 systemd[1]: kubelet.service: Consumed 86ms CPU time, 106.5M memory peak. 
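The "restart counter is at 2" entry above shows systemd repeatedly restarting kubelet.service while the config file is still missing; the roughly ten-second gap between each exit and the next start attempt suggests a Restart= policy with RestartSec on the order of ten seconds. The actual unit on this host is not shown in the log; a drop-in producing this behaviour would look something like:

    # /etc/systemd/system/kubelet.service.d/10-restart.conf (hypothetical sketch)
    [Service]
    Restart=always
    RestartSec=10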
Apr 30 13:49:56.908594 containerd[1816]: time="2025-04-30T13:49:56.908505874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:56.908821 containerd[1816]: time="2025-04-30T13:49:56.908616063Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" Apr 30 13:49:56.909157 containerd[1816]: time="2025-04-30T13:49:56.909136875Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:56.910572 containerd[1816]: time="2025-04-30T13:49:56.910534852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:56.910917 containerd[1816]: time="2025-04-30T13:49:56.910881221Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.115215784s" Apr 30 13:49:56.910917 containerd[1816]: time="2025-04-30T13:49:56.910896352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" Apr 30 13:49:56.921598 containerd[1816]: time="2025-04-30T13:49:56.921576326Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 13:49:57.430998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount720359295.mount: Deactivated successfully. 
Apr 30 13:49:57.936722 containerd[1816]: time="2025-04-30T13:49:57.936669591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:57.936979 containerd[1816]: time="2025-04-30T13:49:57.936928385Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Apr 30 13:49:57.937350 containerd[1816]: time="2025-04-30T13:49:57.937312439Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:57.938918 containerd[1816]: time="2025-04-30T13:49:57.938871463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:57.939547 containerd[1816]: time="2025-04-30T13:49:57.939504893Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.017907203s" Apr 30 13:49:57.939547 containerd[1816]: time="2025-04-30T13:49:57.939521999Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 30 13:49:57.950236 containerd[1816]: time="2025-04-30T13:49:57.950209548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 13:49:58.393965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3282079433.mount: Deactivated successfully. 
Apr 30 13:49:58.395136 containerd[1816]: time="2025-04-30T13:49:58.395118722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:58.395311 containerd[1816]: time="2025-04-30T13:49:58.395293958Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Apr 30 13:49:58.395779 containerd[1816]: time="2025-04-30T13:49:58.395767912Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:58.397055 containerd[1816]: time="2025-04-30T13:49:58.397044991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:49:58.397639 containerd[1816]: time="2025-04-30T13:49:58.397624328Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 447.386785ms" Apr 30 13:49:58.397668 containerd[1816]: time="2025-04-30T13:49:58.397644229Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 30 13:49:58.409595 containerd[1816]: time="2025-04-30T13:49:58.409517884Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 13:49:58.913124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1841356717.mount: Deactivated successfully. Apr 30 13:50:00.656884 containerd[1816]: time="2025-04-30T13:50:00.656826861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:50:00.657093 containerd[1816]: time="2025-04-30T13:50:00.656969313Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Apr 30 13:50:00.657523 containerd[1816]: time="2025-04-30T13:50:00.657484285Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:50:00.659429 containerd[1816]: time="2025-04-30T13:50:00.659379502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:50:00.659954 containerd[1816]: time="2025-04-30T13:50:00.659911085Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.250372008s" Apr 30 13:50:00.659954 containerd[1816]: time="2025-04-30T13:50:00.659928764Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Apr 30 13:50:02.342921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 13:50:02.343108 systemd[1]: kubelet.service: Consumed 86ms CPU time, 106.5M memory peak. Apr 30 13:50:02.359798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:50:02.374310 systemd[1]: Reload requested from client PID 2767 ('systemctl') (unit session-11.scope)... Apr 30 13:50:02.374319 systemd[1]: Reloading... Apr 30 13:50:02.419458 zram_generator::config[2813]: No configuration found. Apr 30 13:50:02.487165 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 13:50:02.569855 systemd[1]: Reloading finished in 195 ms. Apr 30 13:50:02.613902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:50:02.615768 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:50:02.616050 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 13:50:02.616178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:50:02.616203 systemd[1]: kubelet.service: Consumed 46ms CPU time, 83.5M memory peak. Apr 30 13:50:02.617104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:50:02.825281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:50:02.827575 (kubelet)[2882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 13:50:02.848657 kubelet[2882]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:50:02.848657 kubelet[2882]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 13:50:02.848657 kubelet[2882]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
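The three deprecation warnings above concern flags passed on the kubelet command line; current kubelets expect the runtime endpoint and volume plugin directory to live in the file given by --config, while --pod-infra-container-image is slated for removal because, as the warning notes, the sandbox image is taken from the CRI runtime instead. As a sketch of that mapping only (not the actual configuration on this host; the endpoint and plugin path are taken from elsewhere in this log):

    # KubeletConfiguration equivalents of the deprecated flags (illustrative)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/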
Apr 30 13:50:02.849603 kubelet[2882]: I0430 13:50:02.849558 2882 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 13:50:03.052641 kubelet[2882]: I0430 13:50:03.052598 2882 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 13:50:03.052641 kubelet[2882]: I0430 13:50:03.052610 2882 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 13:50:03.052771 kubelet[2882]: I0430 13:50:03.052737 2882 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 13:50:03.068194 kubelet[2882]: I0430 13:50:03.068175 2882 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 13:50:03.069365 kubelet[2882]: E0430 13:50:03.069353 2882 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.75.202.179:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:03.083156 kubelet[2882]: I0430 13:50:03.083117 2882 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 13:50:03.084148 kubelet[2882]: I0430 13:50:03.084112 2882 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 13:50:03.084267 kubelet[2882]: I0430 13:50:03.084126 2882 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-a-70e1417a44","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 13:50:03.084267 kubelet[2882]: I0430 13:50:03.084267 2882 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 13:50:03.084357 kubelet[2882]: I0430 13:50:03.084274 2882 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 13:50:03.084357 kubelet[2882]: I0430 13:50:03.084334 2882 state_mem.go:36] "Initialized new 
in-memory state store" Apr 30 13:50:03.085018 kubelet[2882]: I0430 13:50:03.085010 2882 kubelet.go:400] "Attempting to sync node with API server" Apr 30 13:50:03.085018 kubelet[2882]: I0430 13:50:03.085018 2882 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 13:50:03.085067 kubelet[2882]: I0430 13:50:03.085029 2882 kubelet.go:312] "Adding apiserver pod source" Apr 30 13:50:03.085067 kubelet[2882]: I0430 13:50:03.085038 2882 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 13:50:03.085317 kubelet[2882]: W0430 13:50:03.085295 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.202.179:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:03.085351 kubelet[2882]: E0430 13:50:03.085325 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.202.179:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:03.085404 kubelet[2882]: W0430 13:50:03.085364 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.202.179:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-70e1417a44&limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:03.085468 kubelet[2882]: E0430 13:50:03.085410 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.75.202.179:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-70e1417a44&limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:03.089668 kubelet[2882]: I0430 13:50:03.089560 2882 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 13:50:03.091214 kubelet[2882]: I0430 13:50:03.091177 2882 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 13:50:03.091214 kubelet[2882]: W0430 13:50:03.091207 2882 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 13:50:03.091692 kubelet[2882]: I0430 13:50:03.091663 2882 server.go:1264] "Started kubelet" Apr 30 13:50:03.091867 kubelet[2882]: I0430 13:50:03.091773 2882 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 13:50:03.091902 kubelet[2882]: I0430 13:50:03.091807 2882 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 13:50:03.092016 kubelet[2882]: I0430 13:50:03.092006 2882 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 13:50:03.092606 kubelet[2882]: I0430 13:50:03.092598 2882 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 13:50:03.092645 kubelet[2882]: I0430 13:50:03.092617 2882 server.go:455] "Adding debug handlers to kubelet server" Apr 30 13:50:03.092645 kubelet[2882]: I0430 13:50:03.092622 2882 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 13:50:03.092710 kubelet[2882]: I0430 13:50:03.092643 2882 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 13:50:03.092710 kubelet[2882]: E0430 13:50:03.092638 2882 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-70e1417a44\" not found" Apr 30 13:50:03.092710 kubelet[2882]: I0430 13:50:03.092684 2882 reconciler.go:26] "Reconciler: start to sync state" Apr 30 13:50:03.092802 kubelet[2882]: E0430 13:50:03.092787 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-70e1417a44?timeout=10s\": dial tcp 147.75.202.179:6443: connect: connection refused" interval="200ms" Apr 30 13:50:03.092897 kubelet[2882]: W0430 13:50:03.092869 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.202.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:03.092928 kubelet[2882]: E0430 13:50:03.092908 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.202.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:03.093071 kubelet[2882]: I0430 13:50:03.092946 2882 factory.go:221] Registration of the systemd container factory successfully Apr 30 13:50:03.093071 kubelet[2882]: I0430 13:50:03.092985 2882 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 13:50:03.093417 kubelet[2882]: E0430 13:50:03.093404 2882 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 13:50:03.094215 kubelet[2882]: I0430 13:50:03.094204 2882 factory.go:221] Registration of the containerd container factory successfully Apr 30 13:50:03.096496 kubelet[2882]: E0430 13:50:03.096425 2882 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.202.179:6443/api/v1/namespaces/default/events\": dial tcp 147.75.202.179:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-a-70e1417a44.183b1cd9fba5f3f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-a-70e1417a44,UID:ci-4230.1.1-a-70e1417a44,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-a-70e1417a44,},FirstTimestamp:2025-04-30 13:50:03.091637234 +0000 UTC m=+0.261843764,LastTimestamp:2025-04-30 13:50:03.091637234 +0000 UTC m=+0.261843764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-a-70e1417a44,}" Apr 30 13:50:03.101351 kubelet[2882]: I0430 13:50:03.101331 2882 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 13:50:03.101460 kubelet[2882]: I0430 13:50:03.101448 2882 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 13:50:03.101460 kubelet[2882]: I0430 13:50:03.101457 2882 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 13:50:03.101534 kubelet[2882]: I0430 13:50:03.101466 2882 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:50:03.101869 kubelet[2882]: I0430 13:50:03.101861 2882 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 13:50:03.101898 kubelet[2882]: I0430 13:50:03.101874 2882 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 13:50:03.101898 kubelet[2882]: I0430 13:50:03.101883 2882 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 13:50:03.101935 kubelet[2882]: E0430 13:50:03.101901 2882 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 13:50:03.102167 kubelet[2882]: W0430 13:50:03.102146 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.202.179:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:03.102189 kubelet[2882]: E0430 13:50:03.102174 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.75.202.179:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:03.102344 kubelet[2882]: I0430 13:50:03.102338 2882 policy_none.go:49] "None policy: Start" Apr 30 13:50:03.102606 kubelet[2882]: I0430 13:50:03.102598 2882 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 13:50:03.102637 kubelet[2882]: I0430 13:50:03.102610 2882 state_mem.go:35] "Initializing new in-memory state store" Apr 30 13:50:03.105583 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 30 13:50:03.130233 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 13:50:03.132281 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 13:50:03.145086 kubelet[2882]: I0430 13:50:03.145072 2882 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 13:50:03.145234 kubelet[2882]: I0430 13:50:03.145207 2882 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 13:50:03.145311 kubelet[2882]: I0430 13:50:03.145302 2882 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 13:50:03.146094 kubelet[2882]: E0430 13:50:03.146078 2882 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-a-70e1417a44\" not found" Apr 30 13:50:03.196801 kubelet[2882]: I0430 13:50:03.196746 2882 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.197548 kubelet[2882]: E0430 13:50:03.197482 2882 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.179:6443/api/v1/nodes\": dial tcp 147.75.202.179:6443: connect: connection refused" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.202766 kubelet[2882]: I0430 13:50:03.202644 2882 topology_manager.go:215] "Topology Admit Handler" podUID="4395b7c565373cdc8758c175be951ae0" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.206039 kubelet[2882]: I0430 13:50:03.205982 2882 topology_manager.go:215] "Topology Admit Handler" podUID="a7bfc9a2ba4e222f2b239d3e73ddb175" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.209453 kubelet[2882]: I0430 13:50:03.209370 2882 topology_manager.go:215] "Topology Admit Handler" podUID="2aca5555d5277c4e0f5e881359c55b95" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.223797 systemd[1]: Created slice kubepods-burstable-pod4395b7c565373cdc8758c175be951ae0.slice - libcontainer container kubepods-burstable-pod4395b7c565373cdc8758c175be951ae0.slice. Apr 30 13:50:03.254280 systemd[1]: Created slice kubepods-burstable-poda7bfc9a2ba4e222f2b239d3e73ddb175.slice - libcontainer container kubepods-burstable-poda7bfc9a2ba4e222f2b239d3e73ddb175.slice. Apr 30 13:50:03.265043 systemd[1]: Created slice kubepods-burstable-pod2aca5555d5277c4e0f5e881359c55b95.slice - libcontainer container kubepods-burstable-pod2aca5555d5277c4e0f5e881359c55b95.slice. 
Apr 30 13:50:03.294116 kubelet[2882]: E0430 13:50:03.294007 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-70e1417a44?timeout=10s\": dial tcp 147.75.202.179:6443: connect: connection refused" interval="400ms" Apr 30 13:50:03.394042 kubelet[2882]: I0430 13:50:03.393850 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2aca5555d5277c4e0f5e881359c55b95-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-a-70e1417a44\" (UID: \"2aca5555d5277c4e0f5e881359c55b95\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.394042 kubelet[2882]: I0430 13:50:03.393937 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2aca5555d5277c4e0f5e881359c55b95-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-a-70e1417a44\" (UID: \"2aca5555d5277c4e0f5e881359c55b95\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.394042 kubelet[2882]: I0430 13:50:03.393996 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4395b7c565373cdc8758c175be951ae0-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" (UID: \"4395b7c565373cdc8758c175be951ae0\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.394042 kubelet[2882]: I0430 13:50:03.394042 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4395b7c565373cdc8758c175be951ae0-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" (UID: \"4395b7c565373cdc8758c175be951ae0\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.394448 kubelet[2882]: I0430 13:50:03.394075 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4395b7c565373cdc8758c175be951ae0-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" (UID: \"4395b7c565373cdc8758c175be951ae0\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.394448 kubelet[2882]: I0430 13:50:03.394110 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4395b7c565373cdc8758c175be951ae0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" (UID: \"4395b7c565373cdc8758c175be951ae0\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.394448 kubelet[2882]: I0430 13:50:03.394155 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4395b7c565373cdc8758c175be951ae0-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" (UID: \"4395b7c565373cdc8758c175be951ae0\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.394448 kubelet[2882]: I0430 13:50:03.394226 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a7bfc9a2ba4e222f2b239d3e73ddb175-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-a-70e1417a44\" (UID: \"a7bfc9a2ba4e222f2b239d3e73ddb175\") " pod="kube-system/kube-scheduler-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.394448 kubelet[2882]: I0430 13:50:03.394269 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2aca5555d5277c4e0f5e881359c55b95-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-a-70e1417a44\" (UID: \"2aca5555d5277c4e0f5e881359c55b95\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.398756 kubelet[2882]: I0430 13:50:03.398748 2882 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.398939 kubelet[2882]: E0430 13:50:03.398929 2882 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.179:6443/api/v1/nodes\": dial tcp 147.75.202.179:6443: connect: connection refused" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.547490 containerd[1816]: time="2025-04-30T13:50:03.547375167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-a-70e1417a44,Uid:4395b7c565373cdc8758c175be951ae0,Namespace:kube-system,Attempt:0,}" Apr 30 13:50:03.559870 containerd[1816]: time="2025-04-30T13:50:03.559817455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-a-70e1417a44,Uid:a7bfc9a2ba4e222f2b239d3e73ddb175,Namespace:kube-system,Attempt:0,}" Apr 30 13:50:03.569772 containerd[1816]: time="2025-04-30T13:50:03.569691136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-a-70e1417a44,Uid:2aca5555d5277c4e0f5e881359c55b95,Namespace:kube-system,Attempt:0,}" Apr 30 13:50:03.695311 kubelet[2882]: E0430 13:50:03.695009 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-70e1417a44?timeout=10s\": dial tcp 147.75.202.179:6443: connect: connection refused" interval="800ms" Apr 30 13:50:03.803270 kubelet[2882]: I0430 13:50:03.803181 2882 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.804020 kubelet[2882]: E0430 13:50:03.803910 2882 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.75.202.179:6443/api/v1/nodes\": dial tcp 147.75.202.179:6443: connect: connection refused" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:03.954871 kubelet[2882]: W0430 13:50:03.954758 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.202.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:03.954871 kubelet[2882]: E0430 13:50:03.954807 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.75.202.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:04.044086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3050956948.mount: Deactivated successfully. 
Apr 30 13:50:04.045972 containerd[1816]: time="2025-04-30T13:50:04.045954109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:50:04.046251 containerd[1816]: time="2025-04-30T13:50:04.046202067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 13:50:04.047421 containerd[1816]: time="2025-04-30T13:50:04.047410734Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:50:04.047922 containerd[1816]: time="2025-04-30T13:50:04.047911672Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:50:04.048274 containerd[1816]: time="2025-04-30T13:50:04.048260106Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 13:50:04.048636 containerd[1816]: time="2025-04-30T13:50:04.048621617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 13:50:04.048865 containerd[1816]: time="2025-04-30T13:50:04.048854210Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:50:04.050742 containerd[1816]: time="2025-04-30T13:50:04.050730180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 503.09454ms" Apr 30 13:50:04.051514 containerd[1816]: time="2025-04-30T13:50:04.051503482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:50:04.051986 containerd[1816]: time="2025-04-30T13:50:04.051975431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 482.200297ms" Apr 30 13:50:04.053661 containerd[1816]: time="2025-04-30T13:50:04.053649792Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 493.791052ms" Apr 30 13:50:04.069155 kubelet[2882]: W0430 13:50:04.069082 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.202.179:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:04.069209 kubelet[2882]: E0430 
13:50:04.069159 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.75.202.179:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142899453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142925913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142932675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142973495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142900750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142926102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142933004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142899437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142925907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142932755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142975550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:04.142991 containerd[1816]: time="2025-04-30T13:50:04.142975739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:04.163644 systemd[1]: Started cri-containerd-2fe8ca46753ca4a1e9db07efae146d5ab06cf464234bcbfa24b1cf8f836de20b.scope - libcontainer container 2fe8ca46753ca4a1e9db07efae146d5ab06cf464234bcbfa24b1cf8f836de20b. Apr 30 13:50:04.164539 systemd[1]: Started cri-containerd-6f6c772fd98442d2c80d186393c3b15fb1adbfb38e19658ea71fd6a4e3450e16.scope - libcontainer container 6f6c772fd98442d2c80d186393c3b15fb1adbfb38e19658ea71fd6a4e3450e16. Apr 30 13:50:04.165326 systemd[1]: Started cri-containerd-f8036d50b961efba30d16c89b83d282b2a966340370b052205a5868ec827efcf.scope - libcontainer container f8036d50b961efba30d16c89b83d282b2a966340370b052205a5868ec827efcf. 
Apr 30 13:50:04.188805 containerd[1816]: time="2025-04-30T13:50:04.188771814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-a-70e1417a44,Uid:a7bfc9a2ba4e222f2b239d3e73ddb175,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fe8ca46753ca4a1e9db07efae146d5ab06cf464234bcbfa24b1cf8f836de20b\"" Apr 30 13:50:04.189320 containerd[1816]: time="2025-04-30T13:50:04.189304697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-a-70e1417a44,Uid:4395b7c565373cdc8758c175be951ae0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f6c772fd98442d2c80d186393c3b15fb1adbfb38e19658ea71fd6a4e3450e16\"" Apr 30 13:50:04.191429 containerd[1816]: time="2025-04-30T13:50:04.191118067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-a-70e1417a44,Uid:2aca5555d5277c4e0f5e881359c55b95,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8036d50b961efba30d16c89b83d282b2a966340370b052205a5868ec827efcf\"" Apr 30 13:50:04.191429 containerd[1816]: time="2025-04-30T13:50:04.191165107Z" level=info msg="CreateContainer within sandbox \"2fe8ca46753ca4a1e9db07efae146d5ab06cf464234bcbfa24b1cf8f836de20b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 13:50:04.191429 containerd[1816]: time="2025-04-30T13:50:04.191177041Z" level=info msg="CreateContainer within sandbox \"6f6c772fd98442d2c80d186393c3b15fb1adbfb38e19658ea71fd6a4e3450e16\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 13:50:04.192552 containerd[1816]: time="2025-04-30T13:50:04.192537002Z" level=info msg="CreateContainer within sandbox \"f8036d50b961efba30d16c89b83d282b2a966340370b052205a5868ec827efcf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 13:50:04.198050 containerd[1816]: time="2025-04-30T13:50:04.198006962Z" level=info msg="CreateContainer within sandbox \"6f6c772fd98442d2c80d186393c3b15fb1adbfb38e19658ea71fd6a4e3450e16\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6fe591bb862e82787562e3162dc730f026155f95f9e18519c309f30c0dc466aa\"" Apr 30 13:50:04.198277 containerd[1816]: time="2025-04-30T13:50:04.198238020Z" level=info msg="StartContainer for \"6fe591bb862e82787562e3162dc730f026155f95f9e18519c309f30c0dc466aa\"" Apr 30 13:50:04.198531 containerd[1816]: time="2025-04-30T13:50:04.198490464Z" level=info msg="CreateContainer within sandbox \"2fe8ca46753ca4a1e9db07efae146d5ab06cf464234bcbfa24b1cf8f836de20b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3d74031200df8d1868a454f9ca27d26323e97b23191e352445dc7c622e5757fe\"" Apr 30 13:50:04.198698 containerd[1816]: time="2025-04-30T13:50:04.198659763Z" level=info msg="StartContainer for \"3d74031200df8d1868a454f9ca27d26323e97b23191e352445dc7c622e5757fe\"" Apr 30 13:50:04.199490 containerd[1816]: time="2025-04-30T13:50:04.199449927Z" level=info msg="CreateContainer within sandbox \"f8036d50b961efba30d16c89b83d282b2a966340370b052205a5868ec827efcf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"584ef7597692b88346c5a05b91a11a5cbf14260bca961aa8ce832dc59cabccbd\"" Apr 30 13:50:04.199622 containerd[1816]: time="2025-04-30T13:50:04.199581870Z" level=info msg="StartContainer for \"584ef7597692b88346c5a05b91a11a5cbf14260bca961aa8ce832dc59cabccbd\"" Apr 30 13:50:04.232691 systemd[1]: Started cri-containerd-3d74031200df8d1868a454f9ca27d26323e97b23191e352445dc7c622e5757fe.scope - libcontainer container 
3d74031200df8d1868a454f9ca27d26323e97b23191e352445dc7c622e5757fe. Apr 30 13:50:04.233313 systemd[1]: Started cri-containerd-584ef7597692b88346c5a05b91a11a5cbf14260bca961aa8ce832dc59cabccbd.scope - libcontainer container 584ef7597692b88346c5a05b91a11a5cbf14260bca961aa8ce832dc59cabccbd. Apr 30 13:50:04.233945 systemd[1]: Started cri-containerd-6fe591bb862e82787562e3162dc730f026155f95f9e18519c309f30c0dc466aa.scope - libcontainer container 6fe591bb862e82787562e3162dc730f026155f95f9e18519c309f30c0dc466aa. Apr 30 13:50:04.262431 containerd[1816]: time="2025-04-30T13:50:04.262405811Z" level=info msg="StartContainer for \"6fe591bb862e82787562e3162dc730f026155f95f9e18519c309f30c0dc466aa\" returns successfully" Apr 30 13:50:04.262431 containerd[1816]: time="2025-04-30T13:50:04.262421393Z" level=info msg="StartContainer for \"584ef7597692b88346c5a05b91a11a5cbf14260bca961aa8ce832dc59cabccbd\" returns successfully" Apr 30 13:50:04.262431 containerd[1816]: time="2025-04-30T13:50:04.262413628Z" level=info msg="StartContainer for \"3d74031200df8d1868a454f9ca27d26323e97b23191e352445dc7c622e5757fe\" returns successfully" Apr 30 13:50:04.605250 kubelet[2882]: I0430 13:50:04.605209 2882 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:04.924728 kubelet[2882]: E0430 13:50:04.924661 2882 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-a-70e1417a44\" not found" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:05.026033 kubelet[2882]: I0430 13:50:05.025886 2882 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:05.085796 kubelet[2882]: I0430 13:50:05.085735 2882 apiserver.go:52] "Watching apiserver" Apr 30 13:50:05.092831 kubelet[2882]: I0430 13:50:05.092781 2882 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 13:50:05.113080 kubelet[2882]: E0430 13:50:05.113051 2882 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-a-70e1417a44\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:05.113080 kubelet[2882]: E0430 13:50:05.113077 2882 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230.1.1-a-70e1417a44\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:05.113255 kubelet[2882]: E0430 13:50:05.113077 2882 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:06.119768 kubelet[2882]: W0430 13:50:06.119704 2882 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:50:06.634925 kubelet[2882]: W0430 13:50:06.634826 2882 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:50:07.265990 systemd[1]: Reload requested from client PID 3199 ('systemctl') (unit session-11.scope)... Apr 30 13:50:07.265997 systemd[1]: Reloading... Apr 30 13:50:07.312448 zram_generator::config[3245]: No configuration found. 
Apr 30 13:50:07.389964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 13:50:07.482456 systemd[1]: Reloading finished in 216 ms. Apr 30 13:50:07.505113 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:50:07.519120 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 13:50:07.519260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:50:07.519290 systemd[1]: kubelet.service: Consumed 735ms CPU time, 130.7M memory peak. Apr 30 13:50:07.537744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:50:07.755704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:50:07.758165 (kubelet)[3309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 13:50:07.780198 kubelet[3309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:50:07.780198 kubelet[3309]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 13:50:07.780198 kubelet[3309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:50:07.780502 kubelet[3309]: I0430 13:50:07.780188 3309 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 13:50:07.782795 kubelet[3309]: I0430 13:50:07.782759 3309 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 13:50:07.782795 kubelet[3309]: I0430 13:50:07.782769 3309 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 13:50:07.782876 kubelet[3309]: I0430 13:50:07.782870 3309 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 13:50:07.783604 kubelet[3309]: I0430 13:50:07.783566 3309 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 13:50:07.784182 kubelet[3309]: I0430 13:50:07.784143 3309 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 13:50:07.793435 kubelet[3309]: I0430 13:50:07.793406 3309 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 13:50:07.793559 kubelet[3309]: I0430 13:50:07.793513 3309 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 13:50:07.793664 kubelet[3309]: I0430 13:50:07.793531 3309 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-a-70e1417a44","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 13:50:07.793664 kubelet[3309]: I0430 13:50:07.793636 3309 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 13:50:07.793664 kubelet[3309]: I0430 13:50:07.793644 3309 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 13:50:07.793769 kubelet[3309]: I0430 13:50:07.793671 3309 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:50:07.793769 kubelet[3309]: I0430 13:50:07.793720 3309 kubelet.go:400] "Attempting to sync node with API server" Apr 30 13:50:07.793769 kubelet[3309]: I0430 13:50:07.793727 3309 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 13:50:07.793769 kubelet[3309]: I0430 13:50:07.793739 3309 kubelet.go:312] "Adding apiserver pod source" Apr 30 13:50:07.793769 kubelet[3309]: I0430 13:50:07.793749 3309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 13:50:07.794026 kubelet[3309]: I0430 13:50:07.793988 3309 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 13:50:07.794125 kubelet[3309]: I0430 13:50:07.794098 3309 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 13:50:07.794378 kubelet[3309]: I0430 13:50:07.794329 3309 server.go:1264] "Started kubelet" Apr 30 13:50:07.794378 kubelet[3309]: I0430 13:50:07.794361 3309 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 13:50:07.794550 kubelet[3309]: I0430 13:50:07.794456 3309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 13:50:07.794693 kubelet[3309]: I0430 
13:50:07.794682 3309 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 13:50:07.795310 kubelet[3309]: I0430 13:50:07.795301 3309 server.go:455] "Adding debug handlers to kubelet server" Apr 30 13:50:07.795349 kubelet[3309]: E0430 13:50:07.795315 3309 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 13:50:07.795422 kubelet[3309]: I0430 13:50:07.795411 3309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 13:50:07.795834 kubelet[3309]: I0430 13:50:07.795681 3309 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 13:50:07.795834 kubelet[3309]: I0430 13:50:07.795775 3309 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 13:50:07.795937 kubelet[3309]: I0430 13:50:07.795892 3309 reconciler.go:26] "Reconciler: start to sync state" Apr 30 13:50:07.796929 kubelet[3309]: I0430 13:50:07.796915 3309 factory.go:221] Registration of the systemd container factory successfully Apr 30 13:50:07.797005 kubelet[3309]: I0430 13:50:07.796991 3309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 13:50:07.797670 kubelet[3309]: I0430 13:50:07.797652 3309 factory.go:221] Registration of the containerd container factory successfully Apr 30 13:50:07.801870 kubelet[3309]: I0430 13:50:07.801844 3309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 13:50:07.802429 kubelet[3309]: I0430 13:50:07.802415 3309 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 13:50:07.802469 kubelet[3309]: I0430 13:50:07.802438 3309 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 13:50:07.802469 kubelet[3309]: I0430 13:50:07.802452 3309 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 13:50:07.802530 kubelet[3309]: E0430 13:50:07.802484 3309 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 13:50:07.811912 kubelet[3309]: I0430 13:50:07.811883 3309 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 13:50:07.811912 kubelet[3309]: I0430 13:50:07.811892 3309 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 13:50:07.811912 kubelet[3309]: I0430 13:50:07.811904 3309 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:50:07.812021 kubelet[3309]: I0430 13:50:07.812002 3309 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 13:50:07.812021 kubelet[3309]: I0430 13:50:07.812009 3309 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 13:50:07.812021 kubelet[3309]: I0430 13:50:07.812021 3309 policy_none.go:49] "None policy: Start" Apr 30 13:50:07.812231 kubelet[3309]: I0430 13:50:07.812223 3309 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 13:50:07.812255 kubelet[3309]: I0430 13:50:07.812233 3309 state_mem.go:35] "Initializing new in-memory state store" Apr 30 13:50:07.812322 kubelet[3309]: I0430 13:50:07.812317 3309 state_mem.go:75] "Updated machine memory state" Apr 30 13:50:07.814429 kubelet[3309]: I0430 13:50:07.814417 3309 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 13:50:07.814566 kubelet[3309]: I0430 13:50:07.814512 3309 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 13:50:07.814566 kubelet[3309]: I0430 13:50:07.814568 3309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 13:50:07.902630 kubelet[3309]: I0430 13:50:07.902522 3309 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:07.902838 kubelet[3309]: I0430 13:50:07.902691 3309 topology_manager.go:215] "Topology Admit Handler" podUID="2aca5555d5277c4e0f5e881359c55b95" podNamespace="kube-system" podName="kube-apiserver-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:07.903017 kubelet[3309]: I0430 13:50:07.902940 3309 topology_manager.go:215] "Topology Admit Handler" podUID="4395b7c565373cdc8758c175be951ae0" podNamespace="kube-system" podName="kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:07.903275 kubelet[3309]: I0430 13:50:07.903191 3309 topology_manager.go:215] "Topology Admit Handler" podUID="a7bfc9a2ba4e222f2b239d3e73ddb175" podNamespace="kube-system" podName="kube-scheduler-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:07.909885 kubelet[3309]: W0430 13:50:07.909794 3309 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:50:07.910980 kubelet[3309]: W0430 13:50:07.910923 3309 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:50:07.911213 kubelet[3309]: E0430 13:50:07.911098 3309 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" 
already exists" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:07.911455 kubelet[3309]: W0430 13:50:07.911326 3309 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:50:07.911593 kubelet[3309]: E0430 13:50:07.911501 3309 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-a-70e1417a44\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:07.913078 kubelet[3309]: I0430 13:50:07.912995 3309 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:07.913272 kubelet[3309]: I0430 13:50:07.913140 3309 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.097230 kubelet[3309]: I0430 13:50:08.097140 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4395b7c565373cdc8758c175be951ae0-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" (UID: \"4395b7c565373cdc8758c175be951ae0\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.097504 kubelet[3309]: I0430 13:50:08.097261 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4395b7c565373cdc8758c175be951ae0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" (UID: \"4395b7c565373cdc8758c175be951ae0\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.097504 kubelet[3309]: I0430 13:50:08.097364 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7bfc9a2ba4e222f2b239d3e73ddb175-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-a-70e1417a44\" (UID: \"a7bfc9a2ba4e222f2b239d3e73ddb175\") " pod="kube-system/kube-scheduler-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.097504 kubelet[3309]: I0430 13:50:08.097473 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2aca5555d5277c4e0f5e881359c55b95-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-a-70e1417a44\" (UID: \"2aca5555d5277c4e0f5e881359c55b95\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.097817 kubelet[3309]: I0430 13:50:08.097538 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2aca5555d5277c4e0f5e881359c55b95-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-a-70e1417a44\" (UID: \"2aca5555d5277c4e0f5e881359c55b95\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.097817 kubelet[3309]: I0430 13:50:08.097605 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4395b7c565373cdc8758c175be951ae0-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" (UID: \"4395b7c565373cdc8758c175be951ae0\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.097817 kubelet[3309]: I0430 13:50:08.097670 3309 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4395b7c565373cdc8758c175be951ae0-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" (UID: \"4395b7c565373cdc8758c175be951ae0\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.097817 kubelet[3309]: I0430 13:50:08.097744 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2aca5555d5277c4e0f5e881359c55b95-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-a-70e1417a44\" (UID: \"2aca5555d5277c4e0f5e881359c55b95\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.098196 kubelet[3309]: I0430 13:50:08.097810 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4395b7c565373cdc8758c175be951ae0-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" (UID: \"4395b7c565373cdc8758c175be951ae0\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.292760 sudo[3353]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 13:50:08.293618 sudo[3353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 13:50:08.669560 sudo[3353]: pam_unix(sudo:session): session closed for user root Apr 30 13:50:08.794604 kubelet[3309]: I0430 13:50:08.794559 3309 apiserver.go:52] "Watching apiserver" Apr 30 13:50:08.796675 kubelet[3309]: I0430 13:50:08.796632 3309 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 13:50:08.810267 kubelet[3309]: W0430 13:50:08.810253 3309 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:50:08.810325 kubelet[3309]: E0430 13:50:08.810286 3309 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230.1.1-a-70e1417a44\" already exists" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.810325 kubelet[3309]: W0430 13:50:08.810294 3309 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:50:08.810325 kubelet[3309]: E0430 13:50:08.810319 3309 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230.1.1-a-70e1417a44\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-a-70e1417a44" Apr 30 13:50:08.816546 kubelet[3309]: I0430 13:50:08.816507 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-70e1417a44" podStartSLOduration=2.8164995250000002 podStartE2EDuration="2.816499525s" podCreationTimestamp="2025-04-30 13:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:50:08.816405869 +0000 UTC m=+1.056294598" watchObservedRunningTime="2025-04-30 13:50:08.816499525 +0000 UTC m=+1.056388249" Apr 30 13:50:08.820128 kubelet[3309]: I0430 13:50:08.820076 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-a-70e1417a44" 
podStartSLOduration=1.820055184 podStartE2EDuration="1.820055184s" podCreationTimestamp="2025-04-30 13:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:50:08.82002996 +0000 UTC m=+1.059918683" watchObservedRunningTime="2025-04-30 13:50:08.820055184 +0000 UTC m=+1.059943905" Apr 30 13:50:08.827898 kubelet[3309]: I0430 13:50:08.827862 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-a-70e1417a44" podStartSLOduration=2.827854716 podStartE2EDuration="2.827854716s" podCreationTimestamp="2025-04-30 13:50:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:50:08.823764222 +0000 UTC m=+1.063652946" watchObservedRunningTime="2025-04-30 13:50:08.827854716 +0000 UTC m=+1.067743443" Apr 30 13:50:09.785730 sudo[2099]: pam_unix(sudo:session): session closed for user root Apr 30 13:50:09.786341 sshd[2098]: Connection closed by 147.75.109.163 port 46716 Apr 30 13:50:09.786531 sshd-session[2095]: pam_unix(sshd:session): session closed for user core Apr 30 13:50:09.788046 systemd[1]: sshd@8-147.75.202.179:22-147.75.109.163:46716.service: Deactivated successfully. Apr 30 13:50:09.789111 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 13:50:09.789224 systemd[1]: session-11.scope: Consumed 3.063s CPU time, 301.6M memory peak. Apr 30 13:50:09.790250 systemd-logind[1797]: Session 11 logged out. Waiting for processes to exit. Apr 30 13:50:09.791026 systemd-logind[1797]: Removed session 11. Apr 30 13:50:14.653624 systemd[1]: Started sshd@9-147.75.202.179:22-83.97.24.41:59496.service - OpenSSH per-connection server daemon (83.97.24.41:59496). Apr 30 13:50:15.814554 sshd[3435]: Received disconnect from 83.97.24.41 port 59496:11: Bye Bye [preauth] Apr 30 13:50:15.814554 sshd[3435]: Disconnected from authenticating user root 83.97.24.41 port 59496 [preauth] Apr 30 13:50:15.817891 systemd[1]: sshd@9-147.75.202.179:22-83.97.24.41:59496.service: Deactivated successfully. Apr 30 13:50:18.360551 update_engine[1802]: I20250430 13:50:18.360442 1802 update_attempter.cc:509] Updating boot flags... Apr 30 13:50:18.392393 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 40 scanned by (udev-worker) (3448) Apr 30 13:50:18.419431 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 40 scanned by (udev-worker) (3448) Apr 30 13:50:18.446392 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 40 scanned by (udev-worker) (3448) Apr 30 13:50:20.666946 kubelet[3309]: I0430 13:50:20.666880 3309 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 13:50:20.668032 kubelet[3309]: I0430 13:50:20.667912 3309 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 13:50:20.668205 containerd[1816]: time="2025-04-30T13:50:20.667516228Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 13:50:21.472250 kubelet[3309]: I0430 13:50:21.472225 3309 topology_manager.go:215] "Topology Admit Handler" podUID="24c65b9a-539a-4fe6-98d2-36c83789dd37" podNamespace="kube-system" podName="kube-proxy-922b8" Apr 30 13:50:21.473710 kubelet[3309]: I0430 13:50:21.473692 3309 topology_manager.go:215] "Topology Admit Handler" podUID="69a6792e-9fb1-4429-a1c2-9778a04936ed" podNamespace="kube-system" podName="cilium-cr9fh" Apr 30 13:50:21.476160 systemd[1]: Created slice kubepods-besteffort-pod24c65b9a_539a_4fe6_98d2_36c83789dd37.slice - libcontainer container kubepods-besteffort-pod24c65b9a_539a_4fe6_98d2_36c83789dd37.slice. Apr 30 13:50:21.484643 systemd[1]: Created slice kubepods-burstable-pod69a6792e_9fb1_4429_a1c2_9778a04936ed.slice - libcontainer container kubepods-burstable-pod69a6792e_9fb1_4429_a1c2_9778a04936ed.slice. Apr 30 13:50:21.485882 kubelet[3309]: I0430 13:50:21.485839 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-etc-cni-netd\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.485937 kubelet[3309]: I0430 13:50:21.485928 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24c65b9a-539a-4fe6-98d2-36c83789dd37-xtables-lock\") pod \"kube-proxy-922b8\" (UID: \"24c65b9a-539a-4fe6-98d2-36c83789dd37\") " pod="kube-system/kube-proxy-922b8" Apr 30 13:50:21.486068 kubelet[3309]: I0430 13:50:21.486050 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-bpf-maps\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.486107 kubelet[3309]: I0430 13:50:21.486088 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24c65b9a-539a-4fe6-98d2-36c83789dd37-lib-modules\") pod \"kube-proxy-922b8\" (UID: \"24c65b9a-539a-4fe6-98d2-36c83789dd37\") " pod="kube-system/kube-proxy-922b8" Apr 30 13:50:21.486140 kubelet[3309]: I0430 13:50:21.486113 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-xtables-lock\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.486140 kubelet[3309]: I0430 13:50:21.486133 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-host-proc-sys-net\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.486204 kubelet[3309]: I0430 13:50:21.486157 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/24c65b9a-539a-4fe6-98d2-36c83789dd37-kube-proxy\") pod \"kube-proxy-922b8\" (UID: \"24c65b9a-539a-4fe6-98d2-36c83789dd37\") " pod="kube-system/kube-proxy-922b8" Apr 30 13:50:21.486204 kubelet[3309]: I0430 13:50:21.486179 3309 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69a6792e-9fb1-4429-a1c2-9778a04936ed-clustermesh-secrets\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.486264 kubelet[3309]: I0430 13:50:21.486202 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-lib-modules\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.486264 kubelet[3309]: I0430 13:50:21.486226 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-config-path\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.486264 kubelet[3309]: I0430 13:50:21.486243 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-host-proc-sys-kernel\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.486264 kubelet[3309]: I0430 13:50:21.486255 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69a6792e-9fb1-4429-a1c2-9778a04936ed-hubble-tls\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.486376 kubelet[3309]: I0430 13:50:21.486270 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-hostproc\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.486376 kubelet[3309]: I0430 13:50:21.486290 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-cgroup\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.487007 kubelet[3309]: I0430 13:50:21.486365 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cni-path\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.487043 kubelet[3309]: I0430 13:50:21.487026 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p8s5\" (UniqueName: \"kubernetes.io/projected/24c65b9a-539a-4fe6-98d2-36c83789dd37-kube-api-access-5p8s5\") pod \"kube-proxy-922b8\" (UID: \"24c65b9a-539a-4fe6-98d2-36c83789dd37\") " pod="kube-system/kube-proxy-922b8" Apr 30 13:50:21.487064 kubelet[3309]: I0430 13:50:21.487043 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-run\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.487064 kubelet[3309]: I0430 13:50:21.487053 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kscfh\" (UniqueName: \"kubernetes.io/projected/69a6792e-9fb1-4429-a1c2-9778a04936ed-kube-api-access-kscfh\") pod \"cilium-cr9fh\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " pod="kube-system/cilium-cr9fh" Apr 30 13:50:21.785754 containerd[1816]: time="2025-04-30T13:50:21.785485293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-922b8,Uid:24c65b9a-539a-4fe6-98d2-36c83789dd37,Namespace:kube-system,Attempt:0,}" Apr 30 13:50:21.787148 containerd[1816]: time="2025-04-30T13:50:21.787129089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cr9fh,Uid:69a6792e-9fb1-4429-a1c2-9778a04936ed,Namespace:kube-system,Attempt:0,}" Apr 30 13:50:21.792504 kubelet[3309]: I0430 13:50:21.792443 3309 topology_manager.go:215] "Topology Admit Handler" podUID="702b4d47-4138-413d-8fb6-057210f395b3" podNamespace="kube-system" podName="cilium-operator-599987898-zsntp" Apr 30 13:50:21.800817 systemd[1]: Created slice kubepods-besteffort-pod702b4d47_4138_413d_8fb6_057210f395b3.slice - libcontainer container kubepods-besteffort-pod702b4d47_4138_413d_8fb6_057210f395b3.slice. Apr 30 13:50:21.800920 containerd[1816]: time="2025-04-30T13:50:21.800793557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:50:21.801050 containerd[1816]: time="2025-04-30T13:50:21.801033628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:50:21.801072 containerd[1816]: time="2025-04-30T13:50:21.801050005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:21.801104 containerd[1816]: time="2025-04-30T13:50:21.801094223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:21.801129 containerd[1816]: time="2025-04-30T13:50:21.800881638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:50:21.801152 containerd[1816]: time="2025-04-30T13:50:21.801134217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:50:21.801170 containerd[1816]: time="2025-04-30T13:50:21.801146244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:21.801208 containerd[1816]: time="2025-04-30T13:50:21.801194812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:21.820678 systemd[1]: Started cri-containerd-19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539.scope - libcontainer container 19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539. 
Apr 30 13:50:21.821612 systemd[1]: Started cri-containerd-a7de99a654330de14cca04200bf2fc88261e23b1be9f1bc48757540259b72016.scope - libcontainer container a7de99a654330de14cca04200bf2fc88261e23b1be9f1bc48757540259b72016. Apr 30 13:50:21.831764 containerd[1816]: time="2025-04-30T13:50:21.831741328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cr9fh,Uid:69a6792e-9fb1-4429-a1c2-9778a04936ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\"" Apr 30 13:50:21.832042 containerd[1816]: time="2025-04-30T13:50:21.832029869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-922b8,Uid:24c65b9a-539a-4fe6-98d2-36c83789dd37,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7de99a654330de14cca04200bf2fc88261e23b1be9f1bc48757540259b72016\"" Apr 30 13:50:21.832639 containerd[1816]: time="2025-04-30T13:50:21.832626499Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 13:50:21.833121 containerd[1816]: time="2025-04-30T13:50:21.833109377Z" level=info msg="CreateContainer within sandbox \"a7de99a654330de14cca04200bf2fc88261e23b1be9f1bc48757540259b72016\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 13:50:21.838608 containerd[1816]: time="2025-04-30T13:50:21.838566135Z" level=info msg="CreateContainer within sandbox \"a7de99a654330de14cca04200bf2fc88261e23b1be9f1bc48757540259b72016\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac68564c3e77b095ce8785140a9d55eca6fd54a9a0aed81b32ea2a06e653692a\"" Apr 30 13:50:21.838800 containerd[1816]: time="2025-04-30T13:50:21.838759663Z" level=info msg="StartContainer for \"ac68564c3e77b095ce8785140a9d55eca6fd54a9a0aed81b32ea2a06e653692a\"" Apr 30 13:50:21.865656 systemd[1]: Started cri-containerd-ac68564c3e77b095ce8785140a9d55eca6fd54a9a0aed81b32ea2a06e653692a.scope - libcontainer container ac68564c3e77b095ce8785140a9d55eca6fd54a9a0aed81b32ea2a06e653692a. Apr 30 13:50:21.881761 containerd[1816]: time="2025-04-30T13:50:21.881734170Z" level=info msg="StartContainer for \"ac68564c3e77b095ce8785140a9d55eca6fd54a9a0aed81b32ea2a06e653692a\" returns successfully" Apr 30 13:50:21.890161 kubelet[3309]: I0430 13:50:21.890137 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdh5x\" (UniqueName: \"kubernetes.io/projected/702b4d47-4138-413d-8fb6-057210f395b3-kube-api-access-sdh5x\") pod \"cilium-operator-599987898-zsntp\" (UID: \"702b4d47-4138-413d-8fb6-057210f395b3\") " pod="kube-system/cilium-operator-599987898-zsntp" Apr 30 13:50:21.890161 kubelet[3309]: I0430 13:50:21.890164 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/702b4d47-4138-413d-8fb6-057210f395b3-cilium-config-path\") pod \"cilium-operator-599987898-zsntp\" (UID: \"702b4d47-4138-413d-8fb6-057210f395b3\") " pod="kube-system/cilium-operator-599987898-zsntp" Apr 30 13:50:22.103625 containerd[1816]: time="2025-04-30T13:50:22.103515837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zsntp,Uid:702b4d47-4138-413d-8fb6-057210f395b3,Namespace:kube-system,Attempt:0,}" Apr 30 13:50:22.115561 containerd[1816]: time="2025-04-30T13:50:22.115466550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:50:22.115561 containerd[1816]: time="2025-04-30T13:50:22.115494899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:50:22.115561 containerd[1816]: time="2025-04-30T13:50:22.115501890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:22.115775 containerd[1816]: time="2025-04-30T13:50:22.115559294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:22.132630 systemd[1]: Started cri-containerd-2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0.scope - libcontainer container 2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0. Apr 30 13:50:22.154558 containerd[1816]: time="2025-04-30T13:50:22.154536989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-zsntp,Uid:702b4d47-4138-413d-8fb6-057210f395b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0\"" Apr 30 13:50:22.859037 kubelet[3309]: I0430 13:50:22.858898 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-922b8" podStartSLOduration=1.8588529839999999 podStartE2EDuration="1.858852984s" podCreationTimestamp="2025-04-30 13:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:50:22.858854886 +0000 UTC m=+15.098743678" watchObservedRunningTime="2025-04-30 13:50:22.858852984 +0000 UTC m=+15.098741776" Apr 30 13:50:25.798628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212580156.mount: Deactivated successfully. 
Apr 30 13:50:26.585900 containerd[1816]: time="2025-04-30T13:50:26.585876433Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:50:26.586136 containerd[1816]: time="2025-04-30T13:50:26.586115091Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 13:50:26.586454 containerd[1816]: time="2025-04-30T13:50:26.586405832Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:50:26.587206 containerd[1816]: time="2025-04-30T13:50:26.587194148Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.754551089s" Apr 30 13:50:26.587229 containerd[1816]: time="2025-04-30T13:50:26.587210780Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 13:50:26.587871 containerd[1816]: time="2025-04-30T13:50:26.587818764Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 13:50:26.588440 containerd[1816]: time="2025-04-30T13:50:26.588398446Z" level=info msg="CreateContainer within sandbox \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 13:50:26.593781 containerd[1816]: time="2025-04-30T13:50:26.593738506Z" level=info msg="CreateContainer within sandbox \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6\"" Apr 30 13:50:26.593953 containerd[1816]: time="2025-04-30T13:50:26.593940936Z" level=info msg="StartContainer for \"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6\"" Apr 30 13:50:26.620648 systemd[1]: Started cri-containerd-76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6.scope - libcontainer container 76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6. Apr 30 13:50:26.633205 containerd[1816]: time="2025-04-30T13:50:26.633183066Z" level=info msg="StartContainer for \"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6\" returns successfully" Apr 30 13:50:26.638726 systemd[1]: cri-containerd-76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6.scope: Deactivated successfully. Apr 30 13:50:27.596158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6-rootfs.mount: Deactivated successfully. 
Apr 30 13:50:27.797802 containerd[1816]: time="2025-04-30T13:50:27.797769752Z" level=info msg="shim disconnected" id=76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6 namespace=k8s.io Apr 30 13:50:27.797802 containerd[1816]: time="2025-04-30T13:50:27.797802117Z" level=warning msg="cleaning up after shim disconnected" id=76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6 namespace=k8s.io Apr 30 13:50:27.797802 containerd[1816]: time="2025-04-30T13:50:27.797807671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:50:27.866675 containerd[1816]: time="2025-04-30T13:50:27.866611867Z" level=info msg="CreateContainer within sandbox \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 13:50:27.872289 containerd[1816]: time="2025-04-30T13:50:27.872235636Z" level=info msg="CreateContainer within sandbox \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08\"" Apr 30 13:50:27.872631 containerd[1816]: time="2025-04-30T13:50:27.872588397Z" level=info msg="StartContainer for \"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08\"" Apr 30 13:50:27.893569 systemd[1]: Started cri-containerd-c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08.scope - libcontainer container c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08. Apr 30 13:50:27.904479 containerd[1816]: time="2025-04-30T13:50:27.904457889Z" level=info msg="StartContainer for \"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08\" returns successfully" Apr 30 13:50:27.911720 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 13:50:27.911875 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:50:27.911970 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:50:27.931839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:50:27.932055 systemd[1]: cri-containerd-c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08.scope: Deactivated successfully. Apr 30 13:50:27.940623 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:50:27.942837 containerd[1816]: time="2025-04-30T13:50:27.942803839Z" level=info msg="shim disconnected" id=c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08 namespace=k8s.io Apr 30 13:50:27.942905 containerd[1816]: time="2025-04-30T13:50:27.942838352Z" level=warning msg="cleaning up after shim disconnected" id=c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08 namespace=k8s.io Apr 30 13:50:27.942905 containerd[1816]: time="2025-04-30T13:50:27.942844288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:50:28.592447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08-rootfs.mount: Deactivated successfully. 
Apr 30 13:50:28.875795 containerd[1816]: time="2025-04-30T13:50:28.875573598Z" level=info msg="CreateContainer within sandbox \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 13:50:28.887024 containerd[1816]: time="2025-04-30T13:50:28.886979299Z" level=info msg="CreateContainer within sandbox \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33\"" Apr 30 13:50:28.887369 containerd[1816]: time="2025-04-30T13:50:28.887342300Z" level=info msg="StartContainer for \"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33\"" Apr 30 13:50:28.913800 systemd[1]: Started cri-containerd-1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33.scope - libcontainer container 1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33. Apr 30 13:50:28.960579 containerd[1816]: time="2025-04-30T13:50:28.960506952Z" level=info msg="StartContainer for \"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33\" returns successfully" Apr 30 13:50:28.962949 systemd[1]: cri-containerd-1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33.scope: Deactivated successfully. Apr 30 13:50:29.002138 containerd[1816]: time="2025-04-30T13:50:29.002101207Z" level=info msg="shim disconnected" id=1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33 namespace=k8s.io Apr 30 13:50:29.002138 containerd[1816]: time="2025-04-30T13:50:29.002134269Z" level=warning msg="cleaning up after shim disconnected" id=1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33 namespace=k8s.io Apr 30 13:50:29.002264 containerd[1816]: time="2025-04-30T13:50:29.002142610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:50:29.007821 containerd[1816]: time="2025-04-30T13:50:29.007798452Z" level=warning msg="cleanup warnings time=\"2025-04-30T13:50:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 13:50:29.592519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33-rootfs.mount: Deactivated successfully. 
Apr 30 13:50:29.678043 containerd[1816]: time="2025-04-30T13:50:29.678022255Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:50:29.678261 containerd[1816]: time="2025-04-30T13:50:29.678243700Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 13:50:29.678516 containerd[1816]: time="2025-04-30T13:50:29.678475871Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:50:29.679338 containerd[1816]: time="2025-04-30T13:50:29.679295998Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.09145826s" Apr 30 13:50:29.679338 containerd[1816]: time="2025-04-30T13:50:29.679312343Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 13:50:29.680277 containerd[1816]: time="2025-04-30T13:50:29.680265340Z" level=info msg="CreateContainer within sandbox \"2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 13:50:29.685015 containerd[1816]: time="2025-04-30T13:50:29.684968028Z" level=info msg="CreateContainer within sandbox \"2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\"" Apr 30 13:50:29.685206 containerd[1816]: time="2025-04-30T13:50:29.685192567Z" level=info msg="StartContainer for \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\"" Apr 30 13:50:29.706692 systemd[1]: Started cri-containerd-12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e.scope - libcontainer container 12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e. 
Apr 30 13:50:29.718034 containerd[1816]: time="2025-04-30T13:50:29.718011659Z" level=info msg="StartContainer for \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\" returns successfully" Apr 30 13:50:29.883245 containerd[1816]: time="2025-04-30T13:50:29.883029160Z" level=info msg="CreateContainer within sandbox \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 13:50:29.891481 containerd[1816]: time="2025-04-30T13:50:29.891430281Z" level=info msg="CreateContainer within sandbox \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4\"" Apr 30 13:50:29.891687 containerd[1816]: time="2025-04-30T13:50:29.891630428Z" level=info msg="StartContainer for \"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4\"" Apr 30 13:50:29.911548 systemd[1]: Started cri-containerd-c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4.scope - libcontainer container c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4. Apr 30 13:50:29.914975 kubelet[3309]: I0430 13:50:29.914943 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-zsntp" podStartSLOduration=1.390388912 podStartE2EDuration="8.914931925s" podCreationTimestamp="2025-04-30 13:50:21 +0000 UTC" firstStartedPulling="2025-04-30 13:50:22.155089941 +0000 UTC m=+14.394978665" lastFinishedPulling="2025-04-30 13:50:29.679632954 +0000 UTC m=+21.919521678" observedRunningTime="2025-04-30 13:50:29.914802367 +0000 UTC m=+22.154691092" watchObservedRunningTime="2025-04-30 13:50:29.914931925 +0000 UTC m=+22.154820650" Apr 30 13:50:29.923945 systemd[1]: cri-containerd-c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4.scope: Deactivated successfully. 
Apr 30 13:50:29.924335 containerd[1816]: time="2025-04-30T13:50:29.924315014Z" level=info msg="StartContainer for \"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4\" returns successfully" Apr 30 13:50:30.093862 containerd[1816]: time="2025-04-30T13:50:30.093809162Z" level=info msg="shim disconnected" id=c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4 namespace=k8s.io Apr 30 13:50:30.093862 containerd[1816]: time="2025-04-30T13:50:30.093859831Z" level=warning msg="cleaning up after shim disconnected" id=c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4 namespace=k8s.io Apr 30 13:50:30.093862 containerd[1816]: time="2025-04-30T13:50:30.093865662Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:50:30.099655 containerd[1816]: time="2025-04-30T13:50:30.099606281Z" level=warning msg="cleanup warnings time=\"2025-04-30T13:50:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 13:50:30.895435 containerd[1816]: time="2025-04-30T13:50:30.895386453Z" level=info msg="CreateContainer within sandbox \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 13:50:30.901701 containerd[1816]: time="2025-04-30T13:50:30.901658049Z" level=info msg="CreateContainer within sandbox \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\"" Apr 30 13:50:30.902117 containerd[1816]: time="2025-04-30T13:50:30.902100966Z" level=info msg="StartContainer for \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\"" Apr 30 13:50:30.903192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount475757034.mount: Deactivated successfully. Apr 30 13:50:30.922568 systemd[1]: Started cri-containerd-d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d.scope - libcontainer container d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d. Apr 30 13:50:30.952841 containerd[1816]: time="2025-04-30T13:50:30.952816124Z" level=info msg="StartContainer for \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\" returns successfully" Apr 30 13:50:31.013202 kubelet[3309]: I0430 13:50:31.013122 3309 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 13:50:31.023361 kubelet[3309]: I0430 13:50:31.023336 3309 topology_manager.go:215] "Topology Admit Handler" podUID="89e2006d-d1ec-4548-bf23-1a4714e75f42" podNamespace="kube-system" podName="coredns-7db6d8ff4d-p4czv" Apr 30 13:50:31.023714 kubelet[3309]: I0430 13:50:31.023699 3309 topology_manager.go:215] "Topology Admit Handler" podUID="cd08b1dd-d0f2-4c2b-b072-d8d9c71fe857" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wp5kj" Apr 30 13:50:31.026806 systemd[1]: Created slice kubepods-burstable-pod89e2006d_d1ec_4548_bf23_1a4714e75f42.slice - libcontainer container kubepods-burstable-pod89e2006d_d1ec_4548_bf23_1a4714e75f42.slice. Apr 30 13:50:31.029422 systemd[1]: Created slice kubepods-burstable-podcd08b1dd_d0f2_4c2b_b072_d8d9c71fe857.slice - libcontainer container kubepods-burstable-podcd08b1dd_d0f2_4c2b_b072_d8d9c71fe857.slice. 
Apr 30 13:50:31.057986 kubelet[3309]: I0430 13:50:31.057938 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd08b1dd-d0f2-4c2b-b072-d8d9c71fe857-config-volume\") pod \"coredns-7db6d8ff4d-wp5kj\" (UID: \"cd08b1dd-d0f2-4c2b-b072-d8d9c71fe857\") " pod="kube-system/coredns-7db6d8ff4d-wp5kj" Apr 30 13:50:31.057986 kubelet[3309]: I0430 13:50:31.057960 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvhg\" (UniqueName: \"kubernetes.io/projected/89e2006d-d1ec-4548-bf23-1a4714e75f42-kube-api-access-vlvhg\") pod \"coredns-7db6d8ff4d-p4czv\" (UID: \"89e2006d-d1ec-4548-bf23-1a4714e75f42\") " pod="kube-system/coredns-7db6d8ff4d-p4czv" Apr 30 13:50:31.057986 kubelet[3309]: I0430 13:50:31.057972 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89e2006d-d1ec-4548-bf23-1a4714e75f42-config-volume\") pod \"coredns-7db6d8ff4d-p4czv\" (UID: \"89e2006d-d1ec-4548-bf23-1a4714e75f42\") " pod="kube-system/coredns-7db6d8ff4d-p4czv" Apr 30 13:50:31.057986 kubelet[3309]: I0430 13:50:31.057985 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ftf5\" (UniqueName: \"kubernetes.io/projected/cd08b1dd-d0f2-4c2b-b072-d8d9c71fe857-kube-api-access-6ftf5\") pod \"coredns-7db6d8ff4d-wp5kj\" (UID: \"cd08b1dd-d0f2-4c2b-b072-d8d9c71fe857\") " pod="kube-system/coredns-7db6d8ff4d-wp5kj" Apr 30 13:50:31.330213 containerd[1816]: time="2025-04-30T13:50:31.330076648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p4czv,Uid:89e2006d-d1ec-4548-bf23-1a4714e75f42,Namespace:kube-system,Attempt:0,}" Apr 30 13:50:31.332293 containerd[1816]: time="2025-04-30T13:50:31.332188003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wp5kj,Uid:cd08b1dd-d0f2-4c2b-b072-d8d9c71fe857,Namespace:kube-system,Attempt:0,}" Apr 30 13:50:31.900999 kubelet[3309]: I0430 13:50:31.900969 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cr9fh" podStartSLOduration=6.145655149 podStartE2EDuration="10.900955833s" podCreationTimestamp="2025-04-30 13:50:21 +0000 UTC" firstStartedPulling="2025-04-30 13:50:21.832435919 +0000 UTC m=+14.072324653" lastFinishedPulling="2025-04-30 13:50:26.587736615 +0000 UTC m=+18.827625337" observedRunningTime="2025-04-30 13:50:31.900636032 +0000 UTC m=+24.140524759" watchObservedRunningTime="2025-04-30 13:50:31.900955833 +0000 UTC m=+24.140844556" Apr 30 13:50:33.520260 systemd-networkd[1728]: cilium_host: Link UP Apr 30 13:50:33.520345 systemd-networkd[1728]: cilium_net: Link UP Apr 30 13:50:33.520448 systemd-networkd[1728]: cilium_net: Gained carrier Apr 30 13:50:33.520547 systemd-networkd[1728]: cilium_host: Gained carrier Apr 30 13:50:33.566078 systemd-networkd[1728]: cilium_vxlan: Link UP Apr 30 13:50:33.566081 systemd-networkd[1728]: cilium_vxlan: Gained carrier Apr 30 13:50:33.604520 systemd-networkd[1728]: cilium_net: Gained IPv6LL Apr 30 13:50:33.604679 systemd-networkd[1728]: cilium_host: Gained IPv6LL Apr 30 13:50:33.700459 kernel: NET: Registered PF_ALG protocol family Apr 30 13:50:34.093004 systemd-networkd[1728]: lxc_health: Link UP Apr 30 13:50:34.093165 systemd-networkd[1728]: lxc_health: Gained carrier Apr 30 13:50:34.398438 kernel: eth0: renamed from tmpc6a60 Apr 30 
13:50:34.420452 kernel: eth0: renamed from tmp9f197 Apr 30 13:50:34.441124 systemd-networkd[1728]: lxccc84897ae9ed: Link UP Apr 30 13:50:34.441293 systemd-networkd[1728]: lxc3bc88df41230: Link UP Apr 30 13:50:34.441689 systemd-networkd[1728]: lxc3bc88df41230: Gained carrier Apr 30 13:50:34.441797 systemd-networkd[1728]: lxccc84897ae9ed: Gained carrier Apr 30 13:50:34.884514 systemd-networkd[1728]: cilium_vxlan: Gained IPv6LL Apr 30 13:50:35.524549 systemd-networkd[1728]: lxccc84897ae9ed: Gained IPv6LL Apr 30 13:50:36.038501 systemd-networkd[1728]: lxc_health: Gained IPv6LL Apr 30 13:50:36.164468 systemd-networkd[1728]: lxc3bc88df41230: Gained IPv6LL Apr 30 13:50:36.668570 containerd[1816]: time="2025-04-30T13:50:36.668519995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:50:36.668570 containerd[1816]: time="2025-04-30T13:50:36.668558985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:50:36.668796 containerd[1816]: time="2025-04-30T13:50:36.668570397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:36.668796 containerd[1816]: time="2025-04-30T13:50:36.668623918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:36.668796 containerd[1816]: time="2025-04-30T13:50:36.668551835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:50:36.668873 containerd[1816]: time="2025-04-30T13:50:36.668795654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:50:36.668873 containerd[1816]: time="2025-04-30T13:50:36.668807395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:36.668873 containerd[1816]: time="2025-04-30T13:50:36.668854577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:50:36.696665 systemd[1]: Started cri-containerd-9f197a4b8b7abdade32855cb1f03aa3b4482209c6c1fa6a5b273560a0e62e19e.scope - libcontainer container 9f197a4b8b7abdade32855cb1f03aa3b4482209c6c1fa6a5b273560a0e62e19e. Apr 30 13:50:36.697354 systemd[1]: Started cri-containerd-c6a6007b11b2628523882021367705c104d353de0b3b3fd9a8024a2767c49245.scope - libcontainer container c6a6007b11b2628523882021367705c104d353de0b3b3fd9a8024a2767c49245. 
Apr 30 13:50:36.718982 containerd[1816]: time="2025-04-30T13:50:36.718959723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-p4czv,Uid:89e2006d-d1ec-4548-bf23-1a4714e75f42,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f197a4b8b7abdade32855cb1f03aa3b4482209c6c1fa6a5b273560a0e62e19e\"" Apr 30 13:50:36.719660 containerd[1816]: time="2025-04-30T13:50:36.719644338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wp5kj,Uid:cd08b1dd-d0f2-4c2b-b072-d8d9c71fe857,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6a6007b11b2628523882021367705c104d353de0b3b3fd9a8024a2767c49245\"" Apr 30 13:50:36.720248 containerd[1816]: time="2025-04-30T13:50:36.720235200Z" level=info msg="CreateContainer within sandbox \"9f197a4b8b7abdade32855cb1f03aa3b4482209c6c1fa6a5b273560a0e62e19e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 13:50:36.720620 containerd[1816]: time="2025-04-30T13:50:36.720608189Z" level=info msg="CreateContainer within sandbox \"c6a6007b11b2628523882021367705c104d353de0b3b3fd9a8024a2767c49245\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 13:50:36.729903 containerd[1816]: time="2025-04-30T13:50:36.729860415Z" level=info msg="CreateContainer within sandbox \"9f197a4b8b7abdade32855cb1f03aa3b4482209c6c1fa6a5b273560a0e62e19e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4276b664b7b97e13af2ba2f9d40e8cad0e3f44caf1f1abab1c3a0fb825207997\"" Apr 30 13:50:36.730113 containerd[1816]: time="2025-04-30T13:50:36.730070841Z" level=info msg="StartContainer for \"4276b664b7b97e13af2ba2f9d40e8cad0e3f44caf1f1abab1c3a0fb825207997\"" Apr 30 13:50:36.731072 containerd[1816]: time="2025-04-30T13:50:36.731029367Z" level=info msg="CreateContainer within sandbox \"c6a6007b11b2628523882021367705c104d353de0b3b3fd9a8024a2767c49245\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d6de6dd912a34976a9ce1f8275a0c57ba0cdb1138b31bcd5e0c3537aba893e90\"" Apr 30 13:50:36.731226 containerd[1816]: time="2025-04-30T13:50:36.731182936Z" level=info msg="StartContainer for \"d6de6dd912a34976a9ce1f8275a0c57ba0cdb1138b31bcd5e0c3537aba893e90\"" Apr 30 13:50:36.756588 systemd[1]: Started cri-containerd-4276b664b7b97e13af2ba2f9d40e8cad0e3f44caf1f1abab1c3a0fb825207997.scope - libcontainer container 4276b664b7b97e13af2ba2f9d40e8cad0e3f44caf1f1abab1c3a0fb825207997. Apr 30 13:50:36.758298 systemd[1]: Started cri-containerd-d6de6dd912a34976a9ce1f8275a0c57ba0cdb1138b31bcd5e0c3537aba893e90.scope - libcontainer container d6de6dd912a34976a9ce1f8275a0c57ba0cdb1138b31bcd5e0c3537aba893e90. 
Apr 30 13:50:36.770224 containerd[1816]: time="2025-04-30T13:50:36.770196989Z" level=info msg="StartContainer for \"4276b664b7b97e13af2ba2f9d40e8cad0e3f44caf1f1abab1c3a0fb825207997\" returns successfully" Apr 30 13:50:36.771214 containerd[1816]: time="2025-04-30T13:50:36.771195840Z" level=info msg="StartContainer for \"d6de6dd912a34976a9ce1f8275a0c57ba0cdb1138b31bcd5e0c3537aba893e90\" returns successfully" Apr 30 13:50:36.933650 kubelet[3309]: I0430 13:50:36.933367 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-p4czv" podStartSLOduration=15.933331157 podStartE2EDuration="15.933331157s" podCreationTimestamp="2025-04-30 13:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:50:36.932716537 +0000 UTC m=+29.172605347" watchObservedRunningTime="2025-04-30 13:50:36.933331157 +0000 UTC m=+29.173219927" Apr 30 13:50:36.969176 kubelet[3309]: I0430 13:50:36.969095 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wp5kj" podStartSLOduration=15.969073442 podStartE2EDuration="15.969073442s" podCreationTimestamp="2025-04-30 13:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:50:36.968591189 +0000 UTC m=+29.208479961" watchObservedRunningTime="2025-04-30 13:50:36.969073442 +0000 UTC m=+29.208962189" Apr 30 13:50:41.333252 kubelet[3309]: I0430 13:50:41.333185 3309 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 13:52:08.662103 systemd[1]: Started sshd@10-147.75.202.179:22-14.128.54.101:52926.service - OpenSSH per-connection server daemon (14.128.54.101:52926). Apr 30 13:52:09.506513 sshd[4898]: Invalid user kuba from 14.128.54.101 port 52926 Apr 30 13:52:09.657785 sshd[4898]: Received disconnect from 14.128.54.101 port 52926:11: Bye Bye [preauth] Apr 30 13:52:09.657785 sshd[4898]: Disconnected from invalid user kuba 14.128.54.101 port 52926 [preauth] Apr 30 13:52:09.661157 systemd[1]: sshd@10-147.75.202.179:22-14.128.54.101:52926.service: Deactivated successfully. Apr 30 13:52:33.875173 systemd[1]: Started sshd@11-147.75.202.179:22-116.204.182.224:47466.service - OpenSSH per-connection server daemon (116.204.182.224:47466). Apr 30 13:52:34.979753 sshd[4905]: Invalid user aditya from 116.204.182.224 port 47466 Apr 30 13:52:35.190418 sshd[4905]: Received disconnect from 116.204.182.224 port 47466:11: Bye Bye [preauth] Apr 30 13:52:35.190418 sshd[4905]: Disconnected from invalid user aditya 116.204.182.224 port 47466 [preauth] Apr 30 13:52:35.193613 systemd[1]: sshd@11-147.75.202.179:22-116.204.182.224:47466.service: Deactivated successfully. Apr 30 13:53:29.013223 systemd[1]: Started sshd@12-147.75.202.179:22-103.215.80.141:35792.service - OpenSSH per-connection server daemon (103.215.80.141:35792). 
Apr 30 13:53:29.404624 update_engine[1802]: I20250430 13:53:29.404553 1802 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 13:53:29.404624 update_engine[1802]: I20250430 13:53:29.404590 1802 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 13:53:29.404914 update_engine[1802]: I20250430 13:53:29.404714 1802 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 13:53:29.405033 update_engine[1802]: I20250430 13:53:29.404989 1802 omaha_request_params.cc:62] Current group set to beta Apr 30 13:53:29.405079 update_engine[1802]: I20250430 13:53:29.405067 1802 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 13:53:29.405079 update_engine[1802]: I20250430 13:53:29.405076 1802 update_attempter.cc:643] Scheduling an action processor start. Apr 30 13:53:29.405129 update_engine[1802]: I20250430 13:53:29.405086 1802 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 13:53:29.405129 update_engine[1802]: I20250430 13:53:29.405107 1802 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 13:53:29.405180 update_engine[1802]: I20250430 13:53:29.405149 1802 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 13:53:29.405180 update_engine[1802]: I20250430 13:53:29.405157 1802 omaha_request_action.cc:272] Request: Apr 30 13:53:29.405180 update_engine[1802]: Apr 30 13:53:29.405180 update_engine[1802]: Apr 30 13:53:29.405180 update_engine[1802]: Apr 30 13:53:29.405180 update_engine[1802]: Apr 30 13:53:29.405180 update_engine[1802]: Apr 30 13:53:29.405180 update_engine[1802]: Apr 30 13:53:29.405180 update_engine[1802]: Apr 30 13:53:29.405180 update_engine[1802]: Apr 30 13:53:29.405180 update_engine[1802]: I20250430 13:53:29.405161 1802 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:53:29.405450 locksmithd[1851]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 13:53:29.406261 update_engine[1802]: I20250430 13:53:29.406214 1802 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:53:29.406536 update_engine[1802]: I20250430 13:53:29.406489 1802 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 13:53:29.407089 update_engine[1802]: E20250430 13:53:29.407037 1802 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:53:29.407139 update_engine[1802]: I20250430 13:53:29.407086 1802 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 13:53:29.706242 sshd[4917]: Invalid user centos from 103.215.80.141 port 35792 Apr 30 13:53:29.871312 sshd[4917]: Received disconnect from 103.215.80.141 port 35792:11: Bye Bye [preauth] Apr 30 13:53:29.871312 sshd[4917]: Disconnected from invalid user centos 103.215.80.141 port 35792 [preauth] Apr 30 13:53:29.874611 systemd[1]: sshd@12-147.75.202.179:22-103.215.80.141:35792.service: Deactivated successfully. Apr 30 13:53:39.353565 update_engine[1802]: I20250430 13:53:39.353358 1802 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:53:39.354561 update_engine[1802]: I20250430 13:53:39.353948 1802 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:53:39.354688 update_engine[1802]: I20250430 13:53:39.354548 1802 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 13:53:39.355269 update_engine[1802]: E20250430 13:53:39.355160 1802 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:53:39.355467 update_engine[1802]: I20250430 13:53:39.355326 1802 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 13:53:49.362674 update_engine[1802]: I20250430 13:53:49.362509 1802 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:53:49.363651 update_engine[1802]: I20250430 13:53:49.363062 1802 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:53:49.363790 update_engine[1802]: I20250430 13:53:49.363692 1802 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 13:53:49.364184 update_engine[1802]: E20250430 13:53:49.364071 1802 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:53:49.364362 update_engine[1802]: I20250430 13:53:49.364181 1802 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 13:53:59.362535 update_engine[1802]: I20250430 13:53:59.362339 1802 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:53:59.363623 update_engine[1802]: I20250430 13:53:59.362936 1802 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:53:59.363623 update_engine[1802]: I20250430 13:53:59.363544 1802 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 13:53:59.364181 update_engine[1802]: E20250430 13:53:59.364071 1802 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:53:59.364423 update_engine[1802]: I20250430 13:53:59.364188 1802 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 13:53:59.364423 update_engine[1802]: I20250430 13:53:59.364218 1802 omaha_request_action.cc:617] Omaha request response: Apr 30 13:53:59.364645 update_engine[1802]: E20250430 13:53:59.364413 1802 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 13:53:59.364645 update_engine[1802]: I20250430 13:53:59.364481 1802 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 30 13:53:59.364645 update_engine[1802]: I20250430 13:53:59.364502 1802 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 13:53:59.364645 update_engine[1802]: I20250430 13:53:59.364516 1802 update_attempter.cc:306] Processing Done. Apr 30 13:53:59.364645 update_engine[1802]: E20250430 13:53:59.364548 1802 update_attempter.cc:619] Update failed. Apr 30 13:53:59.364645 update_engine[1802]: I20250430 13:53:59.364564 1802 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 13:53:59.364645 update_engine[1802]: I20250430 13:53:59.364579 1802 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 13:53:59.364645 update_engine[1802]: I20250430 13:53:59.364594 1802 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 30 13:53:59.365315 update_engine[1802]: I20250430 13:53:59.364749 1802 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 13:53:59.365315 update_engine[1802]: I20250430 13:53:59.364810 1802 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 13:53:59.365315 update_engine[1802]: I20250430 13:53:59.364832 1802 omaha_request_action.cc:272] Request: Apr 30 13:53:59.365315 update_engine[1802]: Apr 30 13:53:59.365315 update_engine[1802]: Apr 30 13:53:59.365315 update_engine[1802]: Apr 30 13:53:59.365315 update_engine[1802]: Apr 30 13:53:59.365315 update_engine[1802]: Apr 30 13:53:59.365315 update_engine[1802]: Apr 30 13:53:59.365315 update_engine[1802]: I20250430 13:53:59.364848 1802 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:53:59.365315 update_engine[1802]: I20250430 13:53:59.365291 1802 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:53:59.366319 update_engine[1802]: I20250430 13:53:59.365792 1802 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 13:53:59.366319 update_engine[1802]: E20250430 13:53:59.366210 1802 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:53:59.366548 locksmithd[1851]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 30 13:53:59.367186 update_engine[1802]: I20250430 13:53:59.366331 1802 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 13:53:59.367186 update_engine[1802]: I20250430 13:53:59.366358 1802 omaha_request_action.cc:617] Omaha request response: Apr 30 13:53:59.367186 update_engine[1802]: I20250430 13:53:59.366379 1802 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 13:53:59.367186 update_engine[1802]: I20250430 13:53:59.366418 1802 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 13:53:59.367186 update_engine[1802]: I20250430 13:53:59.366433 1802 update_attempter.cc:306] Processing Done. Apr 30 13:53:59.367186 update_engine[1802]: I20250430 13:53:59.366451 1802 update_attempter.cc:310] Error event sent. Apr 30 13:53:59.367186 update_engine[1802]: I20250430 13:53:59.366477 1802 update_check_scheduler.cc:74] Next update check in 43m26s Apr 30 13:53:59.367875 locksmithd[1851]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 30 13:54:44.272584 systemd[1]: Started sshd@13-147.75.202.179:22-83.97.24.41:40380.service - OpenSSH per-connection server daemon (83.97.24.41:40380). Apr 30 13:54:45.216129 sshd[4933]: Invalid user postmaster from 83.97.24.41 port 40380 Apr 30 13:54:45.396734 sshd[4933]: Received disconnect from 83.97.24.41 port 40380:11: Bye Bye [preauth] Apr 30 13:54:45.396734 sshd[4933]: Disconnected from invalid user postmaster 83.97.24.41 port 40380 [preauth] Apr 30 13:54:45.399975 systemd[1]: sshd@13-147.75.202.179:22-83.97.24.41:40380.service: Deactivated successfully. Apr 30 13:55:54.382680 systemd[1]: Started sshd@14-147.75.202.179:22-88.214.48.10:38330.service - OpenSSH per-connection server daemon (88.214.48.10:38330). 
Apr 30 13:55:56.415146 sshd[4949]: Invalid user tools from 88.214.48.10 port 38330 Apr 30 13:55:56.619607 sshd[4949]: Connection closed by invalid user tools 88.214.48.10 port 38330 [preauth] Apr 30 13:55:56.622809 systemd[1]: sshd@14-147.75.202.179:22-88.214.48.10:38330.service: Deactivated successfully. Apr 30 13:56:14.125092 systemd[1]: Started sshd@15-147.75.202.179:22-147.75.109.163:45282.service - OpenSSH per-connection server daemon (147.75.109.163:45282). Apr 30 13:56:14.188644 sshd[4956]: Accepted publickey for core from 147.75.109.163 port 45282 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:14.189900 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:14.194359 systemd-logind[1797]: New session 12 of user core. Apr 30 13:56:14.208675 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 13:56:14.338774 sshd[4958]: Connection closed by 147.75.109.163 port 45282 Apr 30 13:56:14.338959 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:14.340461 systemd[1]: sshd@15-147.75.202.179:22-147.75.109.163:45282.service: Deactivated successfully. Apr 30 13:56:14.341362 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 13:56:14.342090 systemd-logind[1797]: Session 12 logged out. Waiting for processes to exit. Apr 30 13:56:14.342806 systemd-logind[1797]: Removed session 12. Apr 30 13:56:19.366094 systemd[1]: Started sshd@16-147.75.202.179:22-147.75.109.163:34358.service - OpenSSH per-connection server daemon (147.75.109.163:34358). Apr 30 13:56:19.397586 sshd[4984]: Accepted publickey for core from 147.75.109.163 port 34358 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:19.398328 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:19.401308 systemd-logind[1797]: New session 13 of user core. Apr 30 13:56:19.411534 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 13:56:19.498081 sshd[4986]: Connection closed by 147.75.109.163 port 34358 Apr 30 13:56:19.498272 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:19.500031 systemd[1]: sshd@16-147.75.202.179:22-147.75.109.163:34358.service: Deactivated successfully. Apr 30 13:56:19.501006 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 13:56:19.501780 systemd-logind[1797]: Session 13 logged out. Waiting for processes to exit. Apr 30 13:56:19.502372 systemd-logind[1797]: Removed session 13. Apr 30 13:56:24.541685 systemd[1]: Started sshd@17-147.75.202.179:22-147.75.109.163:34362.service - OpenSSH per-connection server daemon (147.75.109.163:34362). Apr 30 13:56:24.573442 sshd[5015]: Accepted publickey for core from 147.75.109.163 port 34362 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:24.574341 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:24.578140 systemd-logind[1797]: New session 14 of user core. Apr 30 13:56:24.593717 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 13:56:24.683253 sshd[5017]: Connection closed by 147.75.109.163 port 34362 Apr 30 13:56:24.683459 sshd-session[5015]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:24.685161 systemd[1]: sshd@17-147.75.202.179:22-147.75.109.163:34362.service: Deactivated successfully. Apr 30 13:56:24.686156 systemd[1]: session-14.scope: Deactivated successfully. 
Apr 30 13:56:24.686946 systemd-logind[1797]: Session 14 logged out. Waiting for processes to exit. Apr 30 13:56:24.687586 systemd-logind[1797]: Removed session 14. Apr 30 13:56:29.716707 systemd[1]: Started sshd@18-147.75.202.179:22-147.75.109.163:57364.service - OpenSSH per-connection server daemon (147.75.109.163:57364). Apr 30 13:56:29.747029 sshd[5044]: Accepted publickey for core from 147.75.109.163 port 57364 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:29.750249 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:29.756376 systemd-logind[1797]: New session 15 of user core. Apr 30 13:56:29.765653 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 13:56:29.893138 sshd[5046]: Connection closed by 147.75.109.163 port 57364 Apr 30 13:56:29.893319 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:29.895049 systemd[1]: sshd@18-147.75.202.179:22-147.75.109.163:57364.service: Deactivated successfully. Apr 30 13:56:29.895979 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 13:56:29.896685 systemd-logind[1797]: Session 15 logged out. Waiting for processes to exit. Apr 30 13:56:29.897276 systemd-logind[1797]: Removed session 15. Apr 30 13:56:34.933686 systemd[1]: Started sshd@19-147.75.202.179:22-147.75.109.163:57366.service - OpenSSH per-connection server daemon (147.75.109.163:57366). Apr 30 13:56:34.964626 sshd[5072]: Accepted publickey for core from 147.75.109.163 port 57366 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:34.965404 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:34.968917 systemd-logind[1797]: New session 16 of user core. Apr 30 13:56:34.993558 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 30 13:56:35.080243 sshd[5074]: Connection closed by 147.75.109.163 port 57366 Apr 30 13:56:35.080470 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:35.082104 systemd[1]: sshd@19-147.75.202.179:22-147.75.109.163:57366.service: Deactivated successfully. Apr 30 13:56:35.083041 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 13:56:35.083744 systemd-logind[1797]: Session 16 logged out. Waiting for processes to exit. Apr 30 13:56:35.084234 systemd-logind[1797]: Removed session 16. Apr 30 13:56:40.118696 systemd[1]: Started sshd@20-147.75.202.179:22-147.75.109.163:56954.service - OpenSSH per-connection server daemon (147.75.109.163:56954). Apr 30 13:56:40.148670 sshd[5098]: Accepted publickey for core from 147.75.109.163 port 56954 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:40.149295 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:40.152054 systemd-logind[1797]: New session 17 of user core. Apr 30 13:56:40.166655 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 13:56:40.270787 sshd[5100]: Connection closed by 147.75.109.163 port 56954 Apr 30 13:56:40.271009 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:40.290323 systemd[1]: sshd@20-147.75.202.179:22-147.75.109.163:56954.service: Deactivated successfully. Apr 30 13:56:40.291514 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 13:56:40.292497 systemd-logind[1797]: Session 17 logged out. Waiting for processes to exit. 
Apr 30 13:56:40.293585 systemd[1]: Started sshd@21-147.75.202.179:22-147.75.109.163:56960.service - OpenSSH per-connection server daemon (147.75.109.163:56960). Apr 30 13:56:40.294277 systemd-logind[1797]: Removed session 17. Apr 30 13:56:40.335965 sshd[5125]: Accepted publickey for core from 147.75.109.163 port 56960 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:40.339041 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:40.350539 systemd-logind[1797]: New session 18 of user core. Apr 30 13:56:40.368824 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 13:56:40.528936 sshd[5128]: Connection closed by 147.75.109.163 port 56960 Apr 30 13:56:40.529176 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:40.541461 systemd[1]: sshd@21-147.75.202.179:22-147.75.109.163:56960.service: Deactivated successfully. Apr 30 13:56:40.542306 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 13:56:40.542961 systemd-logind[1797]: Session 18 logged out. Waiting for processes to exit. Apr 30 13:56:40.543564 systemd[1]: Started sshd@22-147.75.202.179:22-147.75.109.163:56968.service - OpenSSH per-connection server daemon (147.75.109.163:56968). Apr 30 13:56:40.544008 systemd-logind[1797]: Removed session 18. Apr 30 13:56:40.574797 sshd[5150]: Accepted publickey for core from 147.75.109.163 port 56968 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:40.575494 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:40.578400 systemd-logind[1797]: New session 19 of user core. Apr 30 13:56:40.596629 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 13:56:40.742418 sshd[5153]: Connection closed by 147.75.109.163 port 56968 Apr 30 13:56:40.742632 sshd-session[5150]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:40.744424 systemd[1]: sshd@22-147.75.202.179:22-147.75.109.163:56968.service: Deactivated successfully. Apr 30 13:56:40.745421 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 13:56:40.746237 systemd-logind[1797]: Session 19 logged out. Waiting for processes to exit. Apr 30 13:56:40.747018 systemd-logind[1797]: Removed session 19. Apr 30 13:56:45.760026 systemd[1]: Started sshd@23-147.75.202.179:22-147.75.109.163:56984.service - OpenSSH per-connection server daemon (147.75.109.163:56984). Apr 30 13:56:45.791522 sshd[5179]: Accepted publickey for core from 147.75.109.163 port 56984 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:45.792329 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:45.795339 systemd-logind[1797]: New session 20 of user core. Apr 30 13:56:45.809651 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 13:56:45.899639 sshd[5181]: Connection closed by 147.75.109.163 port 56984 Apr 30 13:56:45.899848 sshd-session[5179]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:45.916899 systemd[1]: sshd@23-147.75.202.179:22-147.75.109.163:56984.service: Deactivated successfully. Apr 30 13:56:45.917885 systemd[1]: session-20.scope: Deactivated successfully. Apr 30 13:56:45.918746 systemd-logind[1797]: Session 20 logged out. Waiting for processes to exit. 
Apr 30 13:56:45.919567 systemd[1]: Started sshd@24-147.75.202.179:22-147.75.109.163:56988.service - OpenSSH per-connection server daemon (147.75.109.163:56988). Apr 30 13:56:45.920167 systemd-logind[1797]: Removed session 20. Apr 30 13:56:45.956843 sshd[5205]: Accepted publickey for core from 147.75.109.163 port 56988 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:45.957721 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:45.961678 systemd-logind[1797]: New session 21 of user core. Apr 30 13:56:45.982608 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 13:56:46.278670 sshd[5208]: Connection closed by 147.75.109.163 port 56988 Apr 30 13:56:46.279490 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:46.308296 systemd[1]: sshd@24-147.75.202.179:22-147.75.109.163:56988.service: Deactivated successfully. Apr 30 13:56:46.312353 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 13:56:46.315793 systemd-logind[1797]: Session 21 logged out. Waiting for processes to exit. Apr 30 13:56:46.326727 systemd[1]: Started sshd@25-147.75.202.179:22-147.75.109.163:56992.service - OpenSSH per-connection server daemon (147.75.109.163:56992). Apr 30 13:56:46.327357 systemd-logind[1797]: Removed session 21. Apr 30 13:56:46.356782 sshd[5230]: Accepted publickey for core from 147.75.109.163 port 56992 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:46.357415 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:46.360056 systemd-logind[1797]: New session 22 of user core. Apr 30 13:56:46.381504 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 13:56:47.485439 sshd[5233]: Connection closed by 147.75.109.163 port 56992 Apr 30 13:56:47.485857 sshd-session[5230]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:47.495632 systemd[1]: sshd@25-147.75.202.179:22-147.75.109.163:56992.service: Deactivated successfully. Apr 30 13:56:47.496619 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 13:56:47.496739 systemd[1]: session-22.scope: Consumed 372ms CPU time, 65.2M memory peak. Apr 30 13:56:47.497309 systemd-logind[1797]: Session 22 logged out. Waiting for processes to exit. Apr 30 13:56:47.498149 systemd[1]: Started sshd@26-147.75.202.179:22-147.75.109.163:52470.service - OpenSSH per-connection server daemon (147.75.109.163:52470). Apr 30 13:56:47.498613 systemd-logind[1797]: Removed session 22. Apr 30 13:56:47.531515 sshd[5262]: Accepted publickey for core from 147.75.109.163 port 52470 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:47.532221 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:47.535376 systemd-logind[1797]: New session 23 of user core. Apr 30 13:56:47.555664 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 13:56:47.740825 sshd[5268]: Connection closed by 147.75.109.163 port 52470 Apr 30 13:56:47.741041 sshd-session[5262]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:47.761195 systemd[1]: sshd@26-147.75.202.179:22-147.75.109.163:52470.service: Deactivated successfully. Apr 30 13:56:47.762907 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 13:56:47.764337 systemd-logind[1797]: Session 23 logged out. Waiting for processes to exit. 
Apr 30 13:56:47.765691 systemd[1]: Started sshd@27-147.75.202.179:22-147.75.109.163:52486.service - OpenSSH per-connection server daemon (147.75.109.163:52486). Apr 30 13:56:47.766715 systemd-logind[1797]: Removed session 23. Apr 30 13:56:47.814964 sshd[5290]: Accepted publickey for core from 147.75.109.163 port 52486 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:47.816363 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:47.821944 systemd-logind[1797]: New session 24 of user core. Apr 30 13:56:47.842930 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 13:56:47.978154 sshd[5295]: Connection closed by 147.75.109.163 port 52486 Apr 30 13:56:47.978350 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:47.980011 systemd[1]: sshd@27-147.75.202.179:22-147.75.109.163:52486.service: Deactivated successfully. Apr 30 13:56:47.980931 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 13:56:47.981679 systemd-logind[1797]: Session 24 logged out. Waiting for processes to exit. Apr 30 13:56:47.982255 systemd-logind[1797]: Removed session 24. Apr 30 13:56:52.995264 systemd[1]: Started sshd@28-147.75.202.179:22-147.75.109.163:52488.service - OpenSSH per-connection server daemon (147.75.109.163:52488). Apr 30 13:56:53.026582 sshd[5326]: Accepted publickey for core from 147.75.109.163 port 52488 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:53.027375 sshd-session[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:53.030313 systemd-logind[1797]: New session 25 of user core. Apr 30 13:56:53.040620 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 13:56:53.124543 sshd[5328]: Connection closed by 147.75.109.163 port 52488 Apr 30 13:56:53.124723 sshd-session[5326]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:53.126307 systemd[1]: sshd@28-147.75.202.179:22-147.75.109.163:52488.service: Deactivated successfully. Apr 30 13:56:53.127225 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 13:56:53.128046 systemd-logind[1797]: Session 25 logged out. Waiting for processes to exit. Apr 30 13:56:53.128748 systemd-logind[1797]: Removed session 25. Apr 30 13:56:58.162685 systemd[1]: Started sshd@29-147.75.202.179:22-147.75.109.163:50368.service - OpenSSH per-connection server daemon (147.75.109.163:50368). Apr 30 13:56:58.192660 sshd[5351]: Accepted publickey for core from 147.75.109.163 port 50368 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:56:58.193472 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:56:58.196939 systemd-logind[1797]: New session 26 of user core. Apr 30 13:56:58.207540 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 13:56:58.296063 sshd[5353]: Connection closed by 147.75.109.163 port 50368 Apr 30 13:56:58.296436 sshd-session[5351]: pam_unix(sshd:session): session closed for user core Apr 30 13:56:58.298225 systemd[1]: sshd@29-147.75.202.179:22-147.75.109.163:50368.service: Deactivated successfully. Apr 30 13:56:58.299238 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 13:56:58.300009 systemd-logind[1797]: Session 26 logged out. Waiting for processes to exit. Apr 30 13:56:58.300640 systemd-logind[1797]: Removed session 26. 
Apr 30 13:57:03.330676 systemd[1]: Started sshd@30-147.75.202.179:22-147.75.109.163:50380.service - OpenSSH per-connection server daemon (147.75.109.163:50380). Apr 30 13:57:03.361292 sshd[5378]: Accepted publickey for core from 147.75.109.163 port 50380 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:57:03.362044 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:57:03.365328 systemd-logind[1797]: New session 27 of user core. Apr 30 13:57:03.381640 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 13:57:03.466512 sshd[5380]: Connection closed by 147.75.109.163 port 50380 Apr 30 13:57:03.466720 sshd-session[5378]: pam_unix(sshd:session): session closed for user core Apr 30 13:57:03.468273 systemd[1]: sshd@30-147.75.202.179:22-147.75.109.163:50380.service: Deactivated successfully. Apr 30 13:57:03.469294 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 13:57:03.470059 systemd-logind[1797]: Session 27 logged out. Waiting for processes to exit. Apr 30 13:57:03.470773 systemd-logind[1797]: Removed session 27. Apr 30 13:57:08.495685 systemd[1]: Started sshd@31-147.75.202.179:22-147.75.109.163:42302.service - OpenSSH per-connection server daemon (147.75.109.163:42302). Apr 30 13:57:08.525257 sshd[5407]: Accepted publickey for core from 147.75.109.163 port 42302 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:57:08.526036 sshd-session[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:57:08.529093 systemd-logind[1797]: New session 28 of user core. Apr 30 13:57:08.539626 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 30 13:57:08.625460 sshd[5409]: Connection closed by 147.75.109.163 port 42302 Apr 30 13:57:08.625719 sshd-session[5407]: pam_unix(sshd:session): session closed for user core Apr 30 13:57:08.643628 systemd[1]: sshd@31-147.75.202.179:22-147.75.109.163:42302.service: Deactivated successfully. Apr 30 13:57:08.644485 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 13:57:08.645282 systemd-logind[1797]: Session 28 logged out. Waiting for processes to exit. Apr 30 13:57:08.646176 systemd[1]: Started sshd@32-147.75.202.179:22-147.75.109.163:42316.service - OpenSSH per-connection server daemon (147.75.109.163:42316). Apr 30 13:57:08.646801 systemd-logind[1797]: Removed session 28. Apr 30 13:57:08.679729 sshd[5433]: Accepted publickey for core from 147.75.109.163 port 42316 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:57:08.680628 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:57:08.684173 systemd-logind[1797]: New session 29 of user core. Apr 30 13:57:08.694566 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 30 13:57:10.068526 containerd[1816]: time="2025-04-30T13:57:10.068321764Z" level=info msg="StopContainer for \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\" with timeout 30 (s)" Apr 30 13:57:10.069361 containerd[1816]: time="2025-04-30T13:57:10.069134191Z" level=info msg="Stop container \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\" with signal terminated" Apr 30 13:57:10.093091 systemd[1]: cri-containerd-12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e.scope: Deactivated successfully. 
Apr 30 13:57:10.122604 containerd[1816]: time="2025-04-30T13:57:10.122527627Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 13:57:10.132169 containerd[1816]: time="2025-04-30T13:57:10.132145851Z" level=info msg="StopContainer for \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\" with timeout 2 (s)" Apr 30 13:57:10.132456 containerd[1816]: time="2025-04-30T13:57:10.132279592Z" level=info msg="Stop container \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\" with signal terminated" Apr 30 13:57:10.132486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e-rootfs.mount: Deactivated successfully. Apr 30 13:57:10.135444 systemd-networkd[1728]: lxc_health: Link DOWN Apr 30 13:57:10.135446 systemd-networkd[1728]: lxc_health: Lost carrier Apr 30 13:57:10.158004 containerd[1816]: time="2025-04-30T13:57:10.157971213Z" level=info msg="shim disconnected" id=12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e namespace=k8s.io Apr 30 13:57:10.158004 containerd[1816]: time="2025-04-30T13:57:10.158004133Z" level=warning msg="cleaning up after shim disconnected" id=12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e namespace=k8s.io Apr 30 13:57:10.158085 containerd[1816]: time="2025-04-30T13:57:10.158009895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:57:10.165906 containerd[1816]: time="2025-04-30T13:57:10.165855764Z" level=info msg="StopContainer for \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\" returns successfully" Apr 30 13:57:10.166262 containerd[1816]: time="2025-04-30T13:57:10.166244994Z" level=info msg="StopPodSandbox for \"2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0\"" Apr 30 13:57:10.166296 containerd[1816]: time="2025-04-30T13:57:10.166270743Z" level=info msg="Container to stop \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:57:10.167766 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0-shm.mount: Deactivated successfully. Apr 30 13:57:10.170212 systemd[1]: cri-containerd-2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0.scope: Deactivated successfully. Apr 30 13:57:10.178745 systemd[1]: cri-containerd-d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d.scope: Deactivated successfully. Apr 30 13:57:10.178958 systemd[1]: cri-containerd-d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d.scope: Consumed 6.338s CPU time, 170.8M memory peak, 144K read from disk, 13.3M written to disk. Apr 30 13:57:10.182686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0-rootfs.mount: Deactivated successfully. Apr 30 13:57:10.190607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d-rootfs.mount: Deactivated successfully. 
Apr 30 13:57:10.202929 containerd[1816]: time="2025-04-30T13:57:10.202893636Z" level=info msg="shim disconnected" id=2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0 namespace=k8s.io Apr 30 13:57:10.203002 containerd[1816]: time="2025-04-30T13:57:10.202928577Z" level=warning msg="cleaning up after shim disconnected" id=2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0 namespace=k8s.io Apr 30 13:57:10.203002 containerd[1816]: time="2025-04-30T13:57:10.202938940Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:57:10.203043 containerd[1816]: time="2025-04-30T13:57:10.202910147Z" level=info msg="shim disconnected" id=d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d namespace=k8s.io Apr 30 13:57:10.203043 containerd[1816]: time="2025-04-30T13:57:10.203012075Z" level=warning msg="cleaning up after shim disconnected" id=d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d namespace=k8s.io Apr 30 13:57:10.203043 containerd[1816]: time="2025-04-30T13:57:10.203017528Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:57:10.209318 containerd[1816]: time="2025-04-30T13:57:10.209295362Z" level=info msg="TearDown network for sandbox \"2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0\" successfully" Apr 30 13:57:10.209318 containerd[1816]: time="2025-04-30T13:57:10.209313201Z" level=info msg="StopPodSandbox for \"2bbec367809a56bfa5d99ef65cfb251e1eba7c0bbf90daaabba2b02f20ebb6c0\" returns successfully" Apr 30 13:57:10.209903 containerd[1816]: time="2025-04-30T13:57:10.209868025Z" level=info msg="StopContainer for \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\" returns successfully" Apr 30 13:57:10.210086 containerd[1816]: time="2025-04-30T13:57:10.210073250Z" level=info msg="StopPodSandbox for \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\"" Apr 30 13:57:10.210132 containerd[1816]: time="2025-04-30T13:57:10.210091061Z" level=info msg="Container to stop \"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:57:10.210132 containerd[1816]: time="2025-04-30T13:57:10.210118847Z" level=info msg="Container to stop \"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:57:10.210132 containerd[1816]: time="2025-04-30T13:57:10.210127106Z" level=info msg="Container to stop \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:57:10.210225 containerd[1816]: time="2025-04-30T13:57:10.210135926Z" level=info msg="Container to stop \"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:57:10.210225 containerd[1816]: time="2025-04-30T13:57:10.210144140Z" level=info msg="Container to stop \"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:57:10.213099 systemd[1]: cri-containerd-19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539.scope: Deactivated successfully. 
Apr 30 13:57:10.236941 containerd[1816]: time="2025-04-30T13:57:10.236908713Z" level=info msg="shim disconnected" id=19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539 namespace=k8s.io Apr 30 13:57:10.236941 containerd[1816]: time="2025-04-30T13:57:10.236941091Z" level=warning msg="cleaning up after shim disconnected" id=19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539 namespace=k8s.io Apr 30 13:57:10.237055 containerd[1816]: time="2025-04-30T13:57:10.236948757Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:57:10.243372 containerd[1816]: time="2025-04-30T13:57:10.243319654Z" level=info msg="TearDown network for sandbox \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" successfully" Apr 30 13:57:10.243372 containerd[1816]: time="2025-04-30T13:57:10.243340306Z" level=info msg="StopPodSandbox for \"19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539\" returns successfully" Apr 30 13:57:10.291768 kubelet[3309]: I0430 13:57:10.291643 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-etc-cni-netd\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.291768 kubelet[3309]: I0430 13:57:10.291765 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-xtables-lock\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.293094 kubelet[3309]: I0430 13:57:10.291782 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 13:57:10.293094 kubelet[3309]: I0430 13:57:10.291826 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-hostproc\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.293094 kubelet[3309]: I0430 13:57:10.291894 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cni-path\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.293094 kubelet[3309]: I0430 13:57:10.291907 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 13:57:10.293094 kubelet[3309]: I0430 13:57:10.291953 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-hostproc" (OuterVolumeSpecName: "hostproc") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 13:57:10.293891 kubelet[3309]: I0430 13:57:10.291990 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-config-path\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.293891 kubelet[3309]: I0430 13:57:10.291997 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cni-path" (OuterVolumeSpecName: "cni-path") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 13:57:10.293891 kubelet[3309]: I0430 13:57:10.292071 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kscfh\" (UniqueName: \"kubernetes.io/projected/69a6792e-9fb1-4429-a1c2-9778a04936ed-kube-api-access-kscfh\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.293891 kubelet[3309]: I0430 13:57:10.292117 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-run\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.293891 kubelet[3309]: I0430 13:57:10.292187 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-host-proc-sys-net\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.294485 kubelet[3309]: I0430 13:57:10.292270 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 13:57:10.294485 kubelet[3309]: I0430 13:57:10.292294 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69a6792e-9fb1-4429-a1c2-9778a04936ed-clustermesh-secrets\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.294485 kubelet[3309]: I0430 13:57:10.292365 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 13:57:10.294485 kubelet[3309]: I0430 13:57:10.292413 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69a6792e-9fb1-4429-a1c2-9778a04936ed-hubble-tls\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.294485 kubelet[3309]: I0430 13:57:10.292505 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-cgroup\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.295003 kubelet[3309]: I0430 13:57:10.292581 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 13:57:10.295003 kubelet[3309]: I0430 13:57:10.292615 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdh5x\" (UniqueName: \"kubernetes.io/projected/702b4d47-4138-413d-8fb6-057210f395b3-kube-api-access-sdh5x\") pod \"702b4d47-4138-413d-8fb6-057210f395b3\" (UID: \"702b4d47-4138-413d-8fb6-057210f395b3\") " Apr 30 13:57:10.295003 kubelet[3309]: I0430 13:57:10.292703 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-bpf-maps\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.295003 kubelet[3309]: I0430 13:57:10.292782 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-lib-modules\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.295003 kubelet[3309]: I0430 13:57:10.292846 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 13:57:10.295516 kubelet[3309]: I0430 13:57:10.292869 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-host-proc-sys-kernel\") pod \"69a6792e-9fb1-4429-a1c2-9778a04936ed\" (UID: \"69a6792e-9fb1-4429-a1c2-9778a04936ed\") " Apr 30 13:57:10.295516 kubelet[3309]: I0430 13:57:10.292963 3309 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/702b4d47-4138-413d-8fb6-057210f395b3-cilium-config-path\") pod \"702b4d47-4138-413d-8fb6-057210f395b3\" (UID: \"702b4d47-4138-413d-8fb6-057210f395b3\") " Apr 30 13:57:10.295516 kubelet[3309]: I0430 13:57:10.292936 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 13:57:10.295516 kubelet[3309]: I0430 13:57:10.293093 3309 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-etc-cni-netd\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.295516 kubelet[3309]: I0430 13:57:10.293030 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 13:57:10.295516 kubelet[3309]: I0430 13:57:10.293199 3309 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-xtables-lock\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.295736 kubelet[3309]: I0430 13:57:10.293268 3309 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-hostproc\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.295736 kubelet[3309]: I0430 13:57:10.293313 3309 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cni-path\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.295736 kubelet[3309]: I0430 13:57:10.293359 3309 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-run\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.295736 kubelet[3309]: I0430 13:57:10.293428 3309 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-host-proc-sys-net\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.295736 kubelet[3309]: I0430 13:57:10.293480 3309 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-cgroup\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.295736 kubelet[3309]: I0430 13:57:10.293526 3309 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-bpf-maps\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.296904 kubelet[3309]: I0430 13:57:10.296841 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69a6792e-9fb1-4429-a1c2-9778a04936ed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 13:57:10.296904 kubelet[3309]: I0430 13:57:10.296870 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a6792e-9fb1-4429-a1c2-9778a04936ed-kube-api-access-kscfh" (OuterVolumeSpecName: "kube-api-access-kscfh") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "kube-api-access-kscfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 13:57:10.296904 kubelet[3309]: I0430 13:57:10.296899 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 13:57:10.297061 kubelet[3309]: I0430 13:57:10.296964 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a6792e-9fb1-4429-a1c2-9778a04936ed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "69a6792e-9fb1-4429-a1c2-9778a04936ed" (UID: "69a6792e-9fb1-4429-a1c2-9778a04936ed"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 13:57:10.297061 kubelet[3309]: I0430 13:57:10.297004 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/702b4d47-4138-413d-8fb6-057210f395b3-kube-api-access-sdh5x" (OuterVolumeSpecName: "kube-api-access-sdh5x") pod "702b4d47-4138-413d-8fb6-057210f395b3" (UID: "702b4d47-4138-413d-8fb6-057210f395b3"). InnerVolumeSpecName "kube-api-access-sdh5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 13:57:10.297210 kubelet[3309]: I0430 13:57:10.297155 3309 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/702b4d47-4138-413d-8fb6-057210f395b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "702b4d47-4138-413d-8fb6-057210f395b3" (UID: "702b4d47-4138-413d-8fb6-057210f395b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 13:57:10.394872 kubelet[3309]: I0430 13:57:10.394625 3309 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-host-proc-sys-kernel\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.394872 kubelet[3309]: I0430 13:57:10.394699 3309 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/702b4d47-4138-413d-8fb6-057210f395b3-cilium-config-path\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.394872 kubelet[3309]: I0430 13:57:10.394731 3309 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69a6792e-9fb1-4429-a1c2-9778a04936ed-lib-modules\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.394872 kubelet[3309]: I0430 13:57:10.394758 3309 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69a6792e-9fb1-4429-a1c2-9778a04936ed-cilium-config-path\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.394872 kubelet[3309]: I0430 13:57:10.394787 3309 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kscfh\" (UniqueName: \"kubernetes.io/projected/69a6792e-9fb1-4429-a1c2-9778a04936ed-kube-api-access-kscfh\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.394872 kubelet[3309]: I0430 13:57:10.394816 3309 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69a6792e-9fb1-4429-a1c2-9778a04936ed-hubble-tls\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.394872 kubelet[3309]: I0430 13:57:10.394843 3309 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-sdh5x\" (UniqueName: \"kubernetes.io/projected/702b4d47-4138-413d-8fb6-057210f395b3-kube-api-access-sdh5x\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:10.395712 kubelet[3309]: I0430 13:57:10.394869 3309 reconciler_common.go:289] "Volume detached for 
volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69a6792e-9fb1-4429-a1c2-9778a04936ed-clustermesh-secrets\") on node \"ci-4230.1.1-a-70e1417a44\" DevicePath \"\"" Apr 30 13:57:11.004698 kubelet[3309]: I0430 13:57:11.004679 3309 scope.go:117] "RemoveContainer" containerID="d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d" Apr 30 13:57:11.005536 containerd[1816]: time="2025-04-30T13:57:11.005514225Z" level=info msg="RemoveContainer for \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\"" Apr 30 13:57:11.007187 containerd[1816]: time="2025-04-30T13:57:11.007168472Z" level=info msg="RemoveContainer for \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\" returns successfully" Apr 30 13:57:11.007342 kubelet[3309]: I0430 13:57:11.007328 3309 scope.go:117] "RemoveContainer" containerID="c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4" Apr 30 13:57:11.007923 containerd[1816]: time="2025-04-30T13:57:11.007863467Z" level=info msg="RemoveContainer for \"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4\"" Apr 30 13:57:11.008656 systemd[1]: Removed slice kubepods-burstable-pod69a6792e_9fb1_4429_a1c2_9778a04936ed.slice - libcontainer container kubepods-burstable-pod69a6792e_9fb1_4429_a1c2_9778a04936ed.slice. Apr 30 13:57:11.008748 systemd[1]: kubepods-burstable-pod69a6792e_9fb1_4429_a1c2_9778a04936ed.slice: Consumed 6.407s CPU time, 171.3M memory peak, 144K read from disk, 13.3M written to disk. Apr 30 13:57:11.009324 systemd[1]: Removed slice kubepods-besteffort-pod702b4d47_4138_413d_8fb6_057210f395b3.slice - libcontainer container kubepods-besteffort-pod702b4d47_4138_413d_8fb6_057210f395b3.slice. Apr 30 13:57:11.009503 containerd[1816]: time="2025-04-30T13:57:11.009489998Z" level=info msg="RemoveContainer for \"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4\" returns successfully" Apr 30 13:57:11.009594 kubelet[3309]: I0430 13:57:11.009583 3309 scope.go:117] "RemoveContainer" containerID="1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33" Apr 30 13:57:11.010023 containerd[1816]: time="2025-04-30T13:57:11.010012810Z" level=info msg="RemoveContainer for \"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33\"" Apr 30 13:57:11.011223 containerd[1816]: time="2025-04-30T13:57:11.011212289Z" level=info msg="RemoveContainer for \"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33\" returns successfully" Apr 30 13:57:11.011283 kubelet[3309]: I0430 13:57:11.011275 3309 scope.go:117] "RemoveContainer" containerID="c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08" Apr 30 13:57:11.011667 containerd[1816]: time="2025-04-30T13:57:11.011655601Z" level=info msg="RemoveContainer for \"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08\"" Apr 30 13:57:11.012663 containerd[1816]: time="2025-04-30T13:57:11.012653507Z" level=info msg="RemoveContainer for \"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08\" returns successfully" Apr 30 13:57:11.012715 kubelet[3309]: I0430 13:57:11.012708 3309 scope.go:117] "RemoveContainer" containerID="76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6" Apr 30 13:57:11.013048 containerd[1816]: time="2025-04-30T13:57:11.013037226Z" level=info msg="RemoveContainer for \"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6\"" Apr 30 13:57:11.014121 containerd[1816]: time="2025-04-30T13:57:11.014083281Z" level=info msg="RemoveContainer for 
\"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6\" returns successfully" Apr 30 13:57:11.014160 kubelet[3309]: I0430 13:57:11.014147 3309 scope.go:117] "RemoveContainer" containerID="d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d" Apr 30 13:57:11.014249 containerd[1816]: time="2025-04-30T13:57:11.014230364Z" level=error msg="ContainerStatus for \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\": not found" Apr 30 13:57:11.014303 kubelet[3309]: E0430 13:57:11.014294 3309 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\": not found" containerID="d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d" Apr 30 13:57:11.014347 kubelet[3309]: I0430 13:57:11.014310 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d"} err="failed to get container status \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1d94d02949c9ea536548026256a7183e21f72737b9f5046f5e98b5ca1daa34d\": not found" Apr 30 13:57:11.014367 kubelet[3309]: I0430 13:57:11.014348 3309 scope.go:117] "RemoveContainer" containerID="c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4" Apr 30 13:57:11.014474 containerd[1816]: time="2025-04-30T13:57:11.014428615Z" level=error msg="ContainerStatus for \"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4\": not found" Apr 30 13:57:11.014531 kubelet[3309]: E0430 13:57:11.014507 3309 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4\": not found" containerID="c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4" Apr 30 13:57:11.014531 kubelet[3309]: I0430 13:57:11.014520 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4"} err="failed to get container status \"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2d8c246995aa7f9887834c44f638df61c55b05f0d38cadf536801106fe1a3f4\": not found" Apr 30 13:57:11.014570 kubelet[3309]: I0430 13:57:11.014532 3309 scope.go:117] "RemoveContainer" containerID="1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33" Apr 30 13:57:11.014637 containerd[1816]: time="2025-04-30T13:57:11.014589447Z" level=error msg="ContainerStatus for \"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33\": not found" Apr 30 13:57:11.014674 kubelet[3309]: E0430 13:57:11.014632 3309 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33\": not found" containerID="1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33" Apr 30 13:57:11.014674 kubelet[3309]: I0430 13:57:11.014643 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33"} err="failed to get container status \"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ed284b7f36c633a75bdeac207d31f0fd1fdd841e47eb1745c48a89844237a33\": not found" Apr 30 13:57:11.014674 kubelet[3309]: I0430 13:57:11.014651 3309 scope.go:117] "RemoveContainer" containerID="c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08" Apr 30 13:57:11.014765 containerd[1816]: time="2025-04-30T13:57:11.014725204Z" level=error msg="ContainerStatus for \"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08\": not found" Apr 30 13:57:11.014799 kubelet[3309]: E0430 13:57:11.014791 3309 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08\": not found" containerID="c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08" Apr 30 13:57:11.014833 kubelet[3309]: I0430 13:57:11.014804 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08"} err="failed to get container status \"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1b7387a98c5bac9a0b6b6b255d4966dd486789566eef6c123e24bc820903f08\": not found" Apr 30 13:57:11.014833 kubelet[3309]: I0430 13:57:11.014818 3309 scope.go:117] "RemoveContainer" containerID="76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6" Apr 30 13:57:11.014918 containerd[1816]: time="2025-04-30T13:57:11.014904417Z" level=error msg="ContainerStatus for \"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6\": not found" Apr 30 13:57:11.014975 kubelet[3309]: E0430 13:57:11.014963 3309 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6\": not found" containerID="76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6" Apr 30 13:57:11.015006 kubelet[3309]: I0430 13:57:11.014978 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6"} err="failed to get container status \"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"76916056b1cf0574edb593ca604b8940f05d9b5affcfa48670905c0459beccd6\": not found" Apr 30 13:57:11.015006 kubelet[3309]: I0430 13:57:11.014988 3309 scope.go:117] "RemoveContainer" containerID="12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e" Apr 30 13:57:11.015376 containerd[1816]: time="2025-04-30T13:57:11.015366904Z" level=info msg="RemoveContainer for \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\"" Apr 30 13:57:11.016633 containerd[1816]: time="2025-04-30T13:57:11.016609014Z" level=info msg="RemoveContainer for \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\" returns successfully" Apr 30 13:57:11.016734 kubelet[3309]: I0430 13:57:11.016708 3309 scope.go:117] "RemoveContainer" containerID="12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e" Apr 30 13:57:11.016924 containerd[1816]: time="2025-04-30T13:57:11.016859517Z" level=error msg="ContainerStatus for \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\": not found" Apr 30 13:57:11.017007 kubelet[3309]: E0430 13:57:11.016996 3309 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\": not found" containerID="12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e" Apr 30 13:57:11.017032 kubelet[3309]: I0430 13:57:11.017011 3309 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e"} err="failed to get container status \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\": rpc error: code = NotFound desc = an error occurred when try to find container \"12bf72e95ae467d39cc6ad8105bb50946421adddc8406572d54ae9ebc61d897e\": not found" Apr 30 13:57:11.088532 systemd[1]: var-lib-kubelet-pods-702b4d47\x2d4138\x2d413d\x2d8fb6\x2d057210f395b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsdh5x.mount: Deactivated successfully. Apr 30 13:57:11.088593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539-rootfs.mount: Deactivated successfully. Apr 30 13:57:11.088630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19f371641b26887edbdff4f0243f200ecc5a41027104b316beec505119b22539-shm.mount: Deactivated successfully. Apr 30 13:57:11.088668 systemd[1]: var-lib-kubelet-pods-69a6792e\x2d9fb1\x2d4429\x2da1c2\x2d9778a04936ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkscfh.mount: Deactivated successfully. Apr 30 13:57:11.088708 systemd[1]: var-lib-kubelet-pods-69a6792e\x2d9fb1\x2d4429\x2da1c2\x2d9778a04936ed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 13:57:11.088745 systemd[1]: var-lib-kubelet-pods-69a6792e\x2d9fb1\x2d4429\x2da1c2\x2d9778a04936ed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 30 13:57:11.810320 kubelet[3309]: I0430 13:57:11.810249 3309 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a6792e-9fb1-4429-a1c2-9778a04936ed" path="/var/lib/kubelet/pods/69a6792e-9fb1-4429-a1c2-9778a04936ed/volumes" Apr 30 13:57:11.812458 kubelet[3309]: I0430 13:57:11.812404 3309 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="702b4d47-4138-413d-8fb6-057210f395b3" path="/var/lib/kubelet/pods/702b4d47-4138-413d-8fb6-057210f395b3/volumes" Apr 30 13:57:12.014299 sshd[5437]: Connection closed by 147.75.109.163 port 42316 Apr 30 13:57:12.014566 sshd-session[5433]: pam_unix(sshd:session): session closed for user core Apr 30 13:57:12.037153 systemd[1]: sshd@32-147.75.202.179:22-147.75.109.163:42316.service: Deactivated successfully. Apr 30 13:57:12.038264 systemd[1]: session-29.scope: Deactivated successfully. Apr 30 13:57:12.039193 systemd-logind[1797]: Session 29 logged out. Waiting for processes to exit. Apr 30 13:57:12.040015 systemd[1]: Started sshd@33-147.75.202.179:22-147.75.109.163:42320.service - OpenSSH per-connection server daemon (147.75.109.163:42320). Apr 30 13:57:12.040673 systemd-logind[1797]: Removed session 29. Apr 30 13:57:12.072982 sshd[5607]: Accepted publickey for core from 147.75.109.163 port 42320 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:57:12.073656 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:57:12.076404 systemd-logind[1797]: New session 30 of user core. Apr 30 13:57:12.097646 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 30 13:57:12.587853 sshd[5610]: Connection closed by 147.75.109.163 port 42320 Apr 30 13:57:12.589546 sshd-session[5607]: pam_unix(sshd:session): session closed for user core Apr 30 13:57:12.597837 kubelet[3309]: I0430 13:57:12.597815 3309 topology_manager.go:215] "Topology Admit Handler" podUID="ea9eb296-4016-42d3-9842-293f8b527fde" podNamespace="kube-system" podName="cilium-nqfnq" Apr 30 13:57:12.597918 kubelet[3309]: E0430 13:57:12.597850 3309 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="702b4d47-4138-413d-8fb6-057210f395b3" containerName="cilium-operator" Apr 30 13:57:12.597918 kubelet[3309]: E0430 13:57:12.597856 3309 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a6792e-9fb1-4429-a1c2-9778a04936ed" containerName="apply-sysctl-overwrites" Apr 30 13:57:12.597918 kubelet[3309]: E0430 13:57:12.597860 3309 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a6792e-9fb1-4429-a1c2-9778a04936ed" containerName="mount-bpf-fs" Apr 30 13:57:12.597918 kubelet[3309]: E0430 13:57:12.597863 3309 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a6792e-9fb1-4429-a1c2-9778a04936ed" containerName="cilium-agent" Apr 30 13:57:12.597918 kubelet[3309]: E0430 13:57:12.597867 3309 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a6792e-9fb1-4429-a1c2-9778a04936ed" containerName="mount-cgroup" Apr 30 13:57:12.597918 kubelet[3309]: E0430 13:57:12.597870 3309 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="69a6792e-9fb1-4429-a1c2-9778a04936ed" containerName="clean-cilium-state" Apr 30 13:57:12.597918 kubelet[3309]: I0430 13:57:12.597884 3309 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a6792e-9fb1-4429-a1c2-9778a04936ed" containerName="cilium-agent" Apr 30 13:57:12.597918 kubelet[3309]: I0430 13:57:12.597887 3309 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="702b4d47-4138-413d-8fb6-057210f395b3" containerName="cilium-operator" Apr 30 13:57:12.609718 systemd[1]: sshd@33-147.75.202.179:22-147.75.109.163:42320.service: Deactivated successfully. Apr 30 13:57:12.610605 systemd[1]: session-30.scope: Deactivated successfully. Apr 30 13:57:12.611361 systemd-logind[1797]: Session 30 logged out. Waiting for processes to exit. Apr 30 13:57:12.612145 systemd[1]: Started sshd@34-147.75.202.179:22-147.75.109.163:42322.service - OpenSSH per-connection server daemon (147.75.109.163:42322). Apr 30 13:57:12.612976 systemd-logind[1797]: Removed session 30. Apr 30 13:57:12.615195 systemd[1]: Created slice kubepods-burstable-podea9eb296_4016_42d3_9842_293f8b527fde.slice - libcontainer container kubepods-burstable-podea9eb296_4016_42d3_9842_293f8b527fde.slice. Apr 30 13:57:12.644181 sshd[5632]: Accepted publickey for core from 147.75.109.163 port 42322 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:57:12.644816 sshd-session[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:57:12.647574 systemd-logind[1797]: New session 31 of user core. Apr 30 13:57:12.664565 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 30 13:57:12.709873 kubelet[3309]: I0430 13:57:12.709801 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea9eb296-4016-42d3-9842-293f8b527fde-cilium-run\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.710143 kubelet[3309]: I0430 13:57:12.709897 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea9eb296-4016-42d3-9842-293f8b527fde-host-proc-sys-kernel\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.710143 kubelet[3309]: I0430 13:57:12.709958 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea9eb296-4016-42d3-9842-293f8b527fde-hostproc\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.710143 kubelet[3309]: I0430 13:57:12.710014 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ea9eb296-4016-42d3-9842-293f8b527fde-cilium-ipsec-secrets\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.710631 kubelet[3309]: I0430 13:57:12.710156 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea9eb296-4016-42d3-9842-293f8b527fde-clustermesh-secrets\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.710631 kubelet[3309]: I0430 13:57:12.710257 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea9eb296-4016-42d3-9842-293f8b527fde-cilium-config-path\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.710631 kubelet[3309]: I0430 
13:57:12.710399 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea9eb296-4016-42d3-9842-293f8b527fde-cni-path\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.710631 kubelet[3309]: I0430 13:57:12.710460 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea9eb296-4016-42d3-9842-293f8b527fde-xtables-lock\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.710631 kubelet[3309]: I0430 13:57:12.710509 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkbr4\" (UniqueName: \"kubernetes.io/projected/ea9eb296-4016-42d3-9842-293f8b527fde-kube-api-access-zkbr4\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.710631 kubelet[3309]: I0430 13:57:12.710567 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea9eb296-4016-42d3-9842-293f8b527fde-bpf-maps\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.711302 kubelet[3309]: I0430 13:57:12.710613 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea9eb296-4016-42d3-9842-293f8b527fde-cilium-cgroup\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.711302 kubelet[3309]: I0430 13:57:12.710657 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea9eb296-4016-42d3-9842-293f8b527fde-etc-cni-netd\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.711302 kubelet[3309]: I0430 13:57:12.710718 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea9eb296-4016-42d3-9842-293f8b527fde-lib-modules\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.711302 kubelet[3309]: I0430 13:57:12.710779 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea9eb296-4016-42d3-9842-293f8b527fde-host-proc-sys-net\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.711302 kubelet[3309]: I0430 13:57:12.710832 3309 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea9eb296-4016-42d3-9842-293f8b527fde-hubble-tls\") pod \"cilium-nqfnq\" (UID: \"ea9eb296-4016-42d3-9842-293f8b527fde\") " pod="kube-system/cilium-nqfnq" Apr 30 13:57:12.717777 sshd[5635]: Connection closed by 147.75.109.163 port 42322 Apr 30 13:57:12.718653 sshd-session[5632]: pam_unix(sshd:session): session closed for user core Apr 30 13:57:12.744568 systemd[1]: 
sshd@34-147.75.202.179:22-147.75.109.163:42322.service: Deactivated successfully. Apr 30 13:57:12.748684 systemd[1]: session-31.scope: Deactivated successfully. Apr 30 13:57:12.752089 systemd-logind[1797]: Session 31 logged out. Waiting for processes to exit. Apr 30 13:57:12.777170 systemd[1]: Started sshd@35-147.75.202.179:22-147.75.109.163:42330.service - OpenSSH per-connection server daemon (147.75.109.163:42330). Apr 30 13:57:12.779730 systemd-logind[1797]: Removed session 31. Apr 30 13:57:12.843576 sshd[5641]: Accepted publickey for core from 147.75.109.163 port 42330 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:57:12.845045 sshd-session[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:57:12.850044 systemd-logind[1797]: New session 32 of user core. Apr 30 13:57:12.870841 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 30 13:57:12.918080 containerd[1816]: time="2025-04-30T13:57:12.918007573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nqfnq,Uid:ea9eb296-4016-42d3-9842-293f8b527fde,Namespace:kube-system,Attempt:0,}" Apr 30 13:57:12.927949 containerd[1816]: time="2025-04-30T13:57:12.927901908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:57:12.927949 containerd[1816]: time="2025-04-30T13:57:12.927934797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:57:12.927949 containerd[1816]: time="2025-04-30T13:57:12.927941748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:12.928067 containerd[1816]: time="2025-04-30T13:57:12.927987188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:57:12.949579 systemd[1]: Started cri-containerd-90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed.scope - libcontainer container 90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed. 
Apr 30 13:57:12.956564 kubelet[3309]: E0430 13:57:12.956542 3309 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 13:57:12.960137 containerd[1816]: time="2025-04-30T13:57:12.960119551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nqfnq,Uid:ea9eb296-4016-42d3-9842-293f8b527fde,Namespace:kube-system,Attempt:0,} returns sandbox id \"90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed\"" Apr 30 13:57:12.961275 containerd[1816]: time="2025-04-30T13:57:12.961260322Z" level=info msg="CreateContainer within sandbox \"90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 13:57:12.965845 containerd[1816]: time="2025-04-30T13:57:12.965802289Z" level=info msg="CreateContainer within sandbox \"90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"295e67c23638efb3db483cd66f9e1c47a6337d88c3581d476b73ba3f0a212b51\"" Apr 30 13:57:12.966039 containerd[1816]: time="2025-04-30T13:57:12.966027678Z" level=info msg="StartContainer for \"295e67c23638efb3db483cd66f9e1c47a6337d88c3581d476b73ba3f0a212b51\"" Apr 30 13:57:12.992574 systemd[1]: Started cri-containerd-295e67c23638efb3db483cd66f9e1c47a6337d88c3581d476b73ba3f0a212b51.scope - libcontainer container 295e67c23638efb3db483cd66f9e1c47a6337d88c3581d476b73ba3f0a212b51. Apr 30 13:57:13.003895 containerd[1816]: time="2025-04-30T13:57:13.003846638Z" level=info msg="StartContainer for \"295e67c23638efb3db483cd66f9e1c47a6337d88c3581d476b73ba3f0a212b51\" returns successfully" Apr 30 13:57:13.008126 systemd[1]: cri-containerd-295e67c23638efb3db483cd66f9e1c47a6337d88c3581d476b73ba3f0a212b51.scope: Deactivated successfully. Apr 30 13:57:13.036016 containerd[1816]: time="2025-04-30T13:57:13.035969581Z" level=info msg="shim disconnected" id=295e67c23638efb3db483cd66f9e1c47a6337d88c3581d476b73ba3f0a212b51 namespace=k8s.io Apr 30 13:57:13.036016 containerd[1816]: time="2025-04-30T13:57:13.036004637Z" level=warning msg="cleaning up after shim disconnected" id=295e67c23638efb3db483cd66f9e1c47a6337d88c3581d476b73ba3f0a212b51 namespace=k8s.io Apr 30 13:57:13.036016 containerd[1816]: time="2025-04-30T13:57:13.036010467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:57:14.034953 containerd[1816]: time="2025-04-30T13:57:14.034931399Z" level=info msg="CreateContainer within sandbox \"90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 13:57:14.042841 containerd[1816]: time="2025-04-30T13:57:14.042784337Z" level=info msg="CreateContainer within sandbox \"90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8a0914dda0e336170f0ee38fcecefabc4b4d178f621a0f02407cede3c6f85627\"" Apr 30 13:57:14.043131 containerd[1816]: time="2025-04-30T13:57:14.043112929Z" level=info msg="StartContainer for \"8a0914dda0e336170f0ee38fcecefabc4b4d178f621a0f02407cede3c6f85627\"" Apr 30 13:57:14.043845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824675776.mount: Deactivated successfully. 
Apr 30 13:57:14.065677 systemd[1]: Started cri-containerd-8a0914dda0e336170f0ee38fcecefabc4b4d178f621a0f02407cede3c6f85627.scope - libcontainer container 8a0914dda0e336170f0ee38fcecefabc4b4d178f621a0f02407cede3c6f85627. Apr 30 13:57:14.078357 containerd[1816]: time="2025-04-30T13:57:14.078329951Z" level=info msg="StartContainer for \"8a0914dda0e336170f0ee38fcecefabc4b4d178f621a0f02407cede3c6f85627\" returns successfully" Apr 30 13:57:14.082233 systemd[1]: cri-containerd-8a0914dda0e336170f0ee38fcecefabc4b4d178f621a0f02407cede3c6f85627.scope: Deactivated successfully. Apr 30 13:57:14.107484 containerd[1816]: time="2025-04-30T13:57:14.107339140Z" level=info msg="shim disconnected" id=8a0914dda0e336170f0ee38fcecefabc4b4d178f621a0f02407cede3c6f85627 namespace=k8s.io Apr 30 13:57:14.107484 containerd[1816]: time="2025-04-30T13:57:14.107470016Z" level=warning msg="cleaning up after shim disconnected" id=8a0914dda0e336170f0ee38fcecefabc4b4d178f621a0f02407cede3c6f85627 namespace=k8s.io Apr 30 13:57:14.107940 containerd[1816]: time="2025-04-30T13:57:14.107496198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:57:14.821701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a0914dda0e336170f0ee38fcecefabc4b4d178f621a0f02407cede3c6f85627-rootfs.mount: Deactivated successfully. Apr 30 13:57:15.031985 containerd[1816]: time="2025-04-30T13:57:15.031888464Z" level=info msg="CreateContainer within sandbox \"90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 13:57:15.042443 containerd[1816]: time="2025-04-30T13:57:15.042422167Z" level=info msg="CreateContainer within sandbox \"90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9b582080f3805122849623dbf91ab206367095f4f187eb55970e8863263cfb14\"" Apr 30 13:57:15.042774 containerd[1816]: time="2025-04-30T13:57:15.042762239Z" level=info msg="StartContainer for \"9b582080f3805122849623dbf91ab206367095f4f187eb55970e8863263cfb14\"" Apr 30 13:57:15.075644 systemd[1]: Started cri-containerd-9b582080f3805122849623dbf91ab206367095f4f187eb55970e8863263cfb14.scope - libcontainer container 9b582080f3805122849623dbf91ab206367095f4f187eb55970e8863263cfb14. Apr 30 13:57:15.096234 containerd[1816]: time="2025-04-30T13:57:15.096201858Z" level=info msg="StartContainer for \"9b582080f3805122849623dbf91ab206367095f4f187eb55970e8863263cfb14\" returns successfully" Apr 30 13:57:15.098330 systemd[1]: cri-containerd-9b582080f3805122849623dbf91ab206367095f4f187eb55970e8863263cfb14.scope: Deactivated successfully. Apr 30 13:57:15.113631 containerd[1816]: time="2025-04-30T13:57:15.113594516Z" level=info msg="shim disconnected" id=9b582080f3805122849623dbf91ab206367095f4f187eb55970e8863263cfb14 namespace=k8s.io Apr 30 13:57:15.113631 containerd[1816]: time="2025-04-30T13:57:15.113628107Z" level=warning msg="cleaning up after shim disconnected" id=9b582080f3805122849623dbf91ab206367095f4f187eb55970e8863263cfb14 namespace=k8s.io Apr 30 13:57:15.113631 containerd[1816]: time="2025-04-30T13:57:15.113633557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:57:15.820322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b582080f3805122849623dbf91ab206367095f4f187eb55970e8863263cfb14-rootfs.mount: Deactivated successfully. 
Apr 30 13:57:16.040792 containerd[1816]: time="2025-04-30T13:57:16.040708617Z" level=info msg="CreateContainer within sandbox \"90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 13:57:16.049177 containerd[1816]: time="2025-04-30T13:57:16.049126647Z" level=info msg="CreateContainer within sandbox \"90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f13383d0226c412b2d9a647e643553504faa6ab5c4f4140de9d828957f3312a\"" Apr 30 13:57:16.049435 containerd[1816]: time="2025-04-30T13:57:16.049417080Z" level=info msg="StartContainer for \"3f13383d0226c412b2d9a647e643553504faa6ab5c4f4140de9d828957f3312a\"" Apr 30 13:57:16.077661 systemd[1]: Started cri-containerd-3f13383d0226c412b2d9a647e643553504faa6ab5c4f4140de9d828957f3312a.scope - libcontainer container 3f13383d0226c412b2d9a647e643553504faa6ab5c4f4140de9d828957f3312a. Apr 30 13:57:16.089756 systemd[1]: cri-containerd-3f13383d0226c412b2d9a647e643553504faa6ab5c4f4140de9d828957f3312a.scope: Deactivated successfully. Apr 30 13:57:16.090244 containerd[1816]: time="2025-04-30T13:57:16.090200567Z" level=info msg="StartContainer for \"3f13383d0226c412b2d9a647e643553504faa6ab5c4f4140de9d828957f3312a\" returns successfully" Apr 30 13:57:16.115881 containerd[1816]: time="2025-04-30T13:57:16.115847463Z" level=info msg="shim disconnected" id=3f13383d0226c412b2d9a647e643553504faa6ab5c4f4140de9d828957f3312a namespace=k8s.io Apr 30 13:57:16.115881 containerd[1816]: time="2025-04-30T13:57:16.115879608Z" level=warning msg="cleaning up after shim disconnected" id=3f13383d0226c412b2d9a647e643553504faa6ab5c4f4140de9d828957f3312a namespace=k8s.io Apr 30 13:57:16.115995 containerd[1816]: time="2025-04-30T13:57:16.115887209Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:57:16.825037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f13383d0226c412b2d9a647e643553504faa6ab5c4f4140de9d828957f3312a-rootfs.mount: Deactivated successfully. 
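Each of the Cilium init containers above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) runs, exits, and leaves the same "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" trio in the containerd log. A short Python sketch, written only against the log format shown here, that pulls the timestamp and 64-hex-character id out of those entries so an exited container can be matched back to its earlier CreateContainer/StartContainer lines:

# Sketch: list (timestamp, container id) pairs for "shim disconnected" events
# found in journal text of the format shown in this log.
import re

SHIM_RE = re.compile(
    r'(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) containerd\[\d+\]:.*?'
    r'msg="shim disconnected" id=(?P<cid>[0-9a-f]{64})'
)

def shim_events(journal_text: str):
    return [(m.group("ts"), m.group("cid")) for m in SHIM_RE.finditer(journal_text)]

sample = ('Apr 30 13:57:16.115881 containerd[1816]: time="2025-04-30T13:57:16.115847463Z" '
          'level=info msg="shim disconnected" '
          'id=3f13383d0226c412b2d9a647e643553504faa6ab5c4f4140de9d828957f3312a namespace=k8s.io')
print(shim_events(sample))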
Apr 30 13:57:17.025729 kubelet[3309]: I0430 13:57:17.025596 3309 setters.go:580] "Node became not ready" node="ci-4230.1.1-a-70e1417a44" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T13:57:17Z","lastTransitionTime":"2025-04-30T13:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 30 13:57:17.052125 containerd[1816]: time="2025-04-30T13:57:17.052047126Z" level=info msg="CreateContainer within sandbox \"90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 13:57:17.064881 containerd[1816]: time="2025-04-30T13:57:17.064824112Z" level=info msg="CreateContainer within sandbox \"90b6600ba4bb4ce9370d72ab108512825829bde324b85e2ca1c4664c0f6922ed\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dd4d7b19690aaca3d88b49ea7c8704b6d79ec74de6b57e8395a0f186dc6ad2ab\"" Apr 30 13:57:17.065281 containerd[1816]: time="2025-04-30T13:57:17.065264780Z" level=info msg="StartContainer for \"dd4d7b19690aaca3d88b49ea7c8704b6d79ec74de6b57e8395a0f186dc6ad2ab\"" Apr 30 13:57:17.089539 systemd[1]: Started cri-containerd-dd4d7b19690aaca3d88b49ea7c8704b6d79ec74de6b57e8395a0f186dc6ad2ab.scope - libcontainer container dd4d7b19690aaca3d88b49ea7c8704b6d79ec74de6b57e8395a0f186dc6ad2ab. Apr 30 13:57:17.111853 containerd[1816]: time="2025-04-30T13:57:17.111798091Z" level=info msg="StartContainer for \"dd4d7b19690aaca3d88b49ea7c8704b6d79ec74de6b57e8395a0f186dc6ad2ab\" returns successfully" Apr 30 13:57:17.265452 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 30 13:57:18.087492 kubelet[3309]: I0430 13:57:18.087346 3309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nqfnq" podStartSLOduration=6.087308938 podStartE2EDuration="6.087308938s" podCreationTimestamp="2025-04-30 13:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:57:18.086204731 +0000 UTC m=+430.326093539" watchObservedRunningTime="2025-04-30 13:57:18.087308938 +0000 UTC m=+430.327197711" Apr 30 13:57:20.394044 systemd-networkd[1728]: lxc_health: Link UP Apr 30 13:57:20.394719 systemd-networkd[1728]: lxc_health: Gained carrier Apr 30 13:57:22.308524 systemd-networkd[1728]: lxc_health: Gained IPv6LL Apr 30 13:57:25.396358 sshd[5650]: Connection closed by 147.75.109.163 port 42330 Apr 30 13:57:25.396802 sshd-session[5641]: pam_unix(sshd:session): session closed for user core Apr 30 13:57:25.400974 systemd[1]: sshd@35-147.75.202.179:22-147.75.109.163:42330.service: Deactivated successfully. Apr 30 13:57:25.403122 systemd[1]: session-32.scope: Deactivated successfully. Apr 30 13:57:25.404195 systemd-logind[1797]: Session 32 logged out. Waiting for processes to exit. Apr 30 13:57:25.405587 systemd-logind[1797]: Removed session 32.
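The pod_startup_latency_tracker entry above reports podStartSLOduration=6.087308938s for cilium-nqfnq, which lines up with the watchObservedRunningTime minus the podCreationTimestamp given in the same entry. A quick Python check with those two timestamps copied from the log (the fractional part is truncated to microseconds, the finest resolution datetime carries):

# Quick check: the reported podStartSLOduration matches
# watchObservedRunningTime - podCreationTimestamp from the same log entry.
from datetime import datetime, timezone

created  = datetime(2025, 4, 30, 13, 57, 12, tzinfo=timezone.utc)         # 13:57:12 +0000
observed = datetime(2025, 4, 30, 13, 57, 18, 87309, tzinfo=timezone.utc)  # 13:57:18.087308938, to microseconds

print((observed - created).total_seconds())  # 6.087309, vs podStartSLOduration=6.087308938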