Apr 30 13:30:10.482424 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:26:36 -00 2025 Apr 30 13:30:10.482439 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 13:30:10.482445 kernel: BIOS-provided physical RAM map: Apr 30 13:30:10.482451 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Apr 30 13:30:10.482455 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Apr 30 13:30:10.482459 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Apr 30 13:30:10.482464 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Apr 30 13:30:10.482468 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Apr 30 13:30:10.482472 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819c3fff] usable Apr 30 13:30:10.482477 kernel: BIOS-e820: [mem 0x00000000819c4000-0x00000000819c4fff] ACPI NVS Apr 30 13:30:10.482481 kernel: BIOS-e820: [mem 0x00000000819c5000-0x00000000819c5fff] reserved Apr 30 13:30:10.482485 kernel: BIOS-e820: [mem 0x00000000819c6000-0x000000008afcdfff] usable Apr 30 13:30:10.482491 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved Apr 30 13:30:10.482495 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable Apr 30 13:30:10.482501 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS Apr 30 13:30:10.482506 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved Apr 30 13:30:10.482511 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Apr 30 13:30:10.482516 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Apr 30 13:30:10.482521 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Apr 30 13:30:10.482526 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Apr 30 13:30:10.482530 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Apr 30 13:30:10.482535 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Apr 30 13:30:10.482540 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Apr 30 13:30:10.482545 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Apr 30 13:30:10.482550 kernel: NX (Execute Disable) protection: active Apr 30 13:30:10.482555 kernel: APIC: Static calls initialized Apr 30 13:30:10.482559 kernel: SMBIOS 3.2.1 present. 
Apr 30 13:30:10.482564 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 2.6 12/05/2024 Apr 30 13:30:10.482570 kernel: tsc: Detected 3400.000 MHz processor Apr 30 13:30:10.482575 kernel: tsc: Detected 3399.906 MHz TSC Apr 30 13:30:10.482580 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 13:30:10.482585 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 13:30:10.482590 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Apr 30 13:30:10.482595 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Apr 30 13:30:10.482600 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 13:30:10.482605 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Apr 30 13:30:10.482610 kernel: Using GB pages for direct mapping Apr 30 13:30:10.482615 kernel: ACPI: Early table checksum verification disabled Apr 30 13:30:10.482621 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Apr 30 13:30:10.482626 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Apr 30 13:30:10.482633 kernel: ACPI: FACP 0x000000008C58B5F0 000114 (v06 01072009 AMI 00010013) Apr 30 13:30:10.482638 kernel: ACPI: DSDT 0x000000008C54F268 03C386 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Apr 30 13:30:10.482643 kernel: ACPI: FACS 0x000000008C66DF80 000040 Apr 30 13:30:10.482649 kernel: ACPI: APIC 0x000000008C58B708 00012C (v04 01072009 AMI 00010013) Apr 30 13:30:10.482655 kernel: ACPI: FPDT 0x000000008C58B838 000044 (v01 01072009 AMI 00010013) Apr 30 13:30:10.482660 kernel: ACPI: FIDT 0x000000008C58B880 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Apr 30 13:30:10.482665 kernel: ACPI: MCFG 0x000000008C58B920 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Apr 30 13:30:10.482671 kernel: ACPI: SPMI 0x000000008C58B960 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Apr 30 13:30:10.482676 kernel: ACPI: SSDT 0x000000008C58B9A8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Apr 30 13:30:10.482681 kernel: ACPI: SSDT 0x000000008C58D4C8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Apr 30 13:30:10.482686 kernel: ACPI: SSDT 0x000000008C590690 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Apr 30 13:30:10.482692 kernel: ACPI: HPET 0x000000008C5929C0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 13:30:10.482698 kernel: ACPI: SSDT 0x000000008C5929F8 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Apr 30 13:30:10.482703 kernel: ACPI: SSDT 0x000000008C5939A8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Apr 30 13:30:10.482708 kernel: ACPI: UEFI 0x000000008C5942A0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 13:30:10.482741 kernel: ACPI: LPIT 0x000000008C5942E8 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 13:30:10.482746 kernel: ACPI: SSDT 0x000000008C594380 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Apr 30 13:30:10.482769 kernel: ACPI: SSDT 0x000000008C596B60 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Apr 30 13:30:10.482774 kernel: ACPI: DBGP 0x000000008C598048 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Apr 30 13:30:10.482779 kernel: ACPI: DBG2 0x000000008C598080 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Apr 30 13:30:10.482786 kernel: ACPI: SSDT 0x000000008C5980D8 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Apr 30 13:30:10.482791 kernel: ACPI: DMAR 0x000000008C599C40 000070 (v01 INTEL EDK2 00000002 01000013) Apr 30 13:30:10.482797 kernel: ACPI: SSDT 0x000000008C599CB0 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Apr 30 13:30:10.482802 kernel: ACPI: TPM2 0x000000008C599DF8 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Apr 30 13:30:10.482807 kernel: ACPI: SSDT 0x000000008C599E30 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Apr 30 13:30:10.482812 kernel: ACPI: WSMT 0x000000008C59ABC0 000028 (v01 SUPERM 01072009 AMI 00010013) Apr 30 13:30:10.482818 kernel: ACPI: EINJ 0x000000008C59ABE8 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Apr 30 13:30:10.482823 kernel: ACPI: ERST 0x000000008C59AD18 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Apr 30 13:30:10.482829 kernel: ACPI: BERT 0x000000008C59AF48 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Apr 30 13:30:10.482834 kernel: ACPI: HEST 0x000000008C59AF78 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Apr 30 13:30:10.482840 kernel: ACPI: SSDT 0x000000008C59B1F8 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Apr 30 13:30:10.482845 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b5f0-0x8c58b703] Apr 30 13:30:10.482850 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b5ed] Apr 30 13:30:10.482855 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf] Apr 30 13:30:10.482861 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b708-0x8c58b833] Apr 30 13:30:10.482866 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b838-0x8c58b87b] Apr 30 13:30:10.482871 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b880-0x8c58b91b] Apr 30 13:30:10.482877 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b920-0x8c58b95b] Apr 30 13:30:10.482882 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b960-0x8c58b9a0] Apr 30 13:30:10.482887 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b9a8-0x8c58d4c3] Apr 30 13:30:10.482893 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d4c8-0x8c59068d] Apr 30 13:30:10.482898 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590690-0x8c5929ba] Apr 30 13:30:10.482903 kernel: ACPI: Reserving HPET table memory at [mem 0x8c5929c0-0x8c5929f7] Apr 30 13:30:10.482908 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5929f8-0x8c5939a5] Apr 30 13:30:10.482913 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5939a8-0x8c59429e] Apr 30 13:30:10.482918 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c5942a0-0x8c5942e1] Apr 30 13:30:10.482923 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c5942e8-0x8c59437b] Apr 30 13:30:10.482930 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594380-0x8c596b5d] Apr 30 13:30:10.482935 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596b60-0x8c598041] Apr 30 13:30:10.482940 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c598048-0x8c59807b] Apr 30 13:30:10.482945 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598080-0x8c5980d3] Apr 30 13:30:10.482950 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5980d8-0x8c599c3e] Apr 30 13:30:10.482955 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599c40-0x8c599caf] Apr 30 13:30:10.482961 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599cb0-0x8c599df3] Apr 30 13:30:10.482966 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599df8-0x8c599e2b] Apr 30 13:30:10.482971 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599e30-0x8c59abbe] Apr 30 13:30:10.482977 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59abc0-0x8c59abe7] Apr 30 13:30:10.482982 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59abe8-0x8c59ad17] Apr 30 13:30:10.482987 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad18-0x8c59af47] Apr 30 13:30:10.482993 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59af48-0x8c59af77] Apr 30 13:30:10.482998 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59af78-0x8c59b1f3] Apr 30 13:30:10.483003 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b1f8-0x8c59b359] Apr 30 13:30:10.483008 kernel: No NUMA configuration found Apr 30 13:30:10.483013 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Apr 30 13:30:10.483018 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Apr 30 13:30:10.483025 kernel: Zone ranges: Apr 30 13:30:10.483030 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 13:30:10.483035 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 30 
13:30:10.483040 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Apr 30 13:30:10.483046 kernel: Movable zone start for each node Apr 30 13:30:10.483051 kernel: Early memory node ranges Apr 30 13:30:10.483056 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Apr 30 13:30:10.483061 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Apr 30 13:30:10.483066 kernel: node 0: [mem 0x0000000040400000-0x00000000819c3fff] Apr 30 13:30:10.483073 kernel: node 0: [mem 0x00000000819c6000-0x000000008afcdfff] Apr 30 13:30:10.483078 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff] Apr 30 13:30:10.483083 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Apr 30 13:30:10.483088 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Apr 30 13:30:10.483097 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Apr 30 13:30:10.483103 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 13:30:10.483109 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Apr 30 13:30:10.483114 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Apr 30 13:30:10.483121 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Apr 30 13:30:10.483126 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Apr 30 13:30:10.483132 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges Apr 30 13:30:10.483137 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Apr 30 13:30:10.483143 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Apr 30 13:30:10.483148 kernel: ACPI: PM-Timer IO Port: 0x1808 Apr 30 13:30:10.483154 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Apr 30 13:30:10.483159 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Apr 30 13:30:10.483165 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Apr 30 13:30:10.483171 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Apr 30 13:30:10.483177 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Apr 30 13:30:10.483182 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Apr 30 13:30:10.483188 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Apr 30 13:30:10.483193 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Apr 30 13:30:10.483199 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Apr 30 13:30:10.483204 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Apr 30 13:30:10.483210 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Apr 30 13:30:10.483215 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Apr 30 13:30:10.483221 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Apr 30 13:30:10.483227 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Apr 30 13:30:10.483232 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Apr 30 13:30:10.483238 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Apr 30 13:30:10.483243 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Apr 30 13:30:10.483249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 13:30:10.483254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 13:30:10.483260 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 13:30:10.483266 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 30 13:30:10.483271 kernel: TSC deadline timer available Apr 30 13:30:10.483278 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Apr 30 13:30:10.483284 kernel: 
[mem 0x90000000-0xdfffffff] available for PCI devices Apr 30 13:30:10.483289 kernel: Booting paravirtualized kernel on bare hardware Apr 30 13:30:10.483295 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 13:30:10.483301 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Apr 30 13:30:10.483306 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Apr 30 13:30:10.483312 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Apr 30 13:30:10.483317 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Apr 30 13:30:10.483324 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 13:30:10.483330 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 13:30:10.483336 kernel: random: crng init done Apr 30 13:30:10.483341 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Apr 30 13:30:10.483347 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Apr 30 13:30:10.483353 kernel: Fallback order for Node 0: 0 Apr 30 13:30:10.483358 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416 Apr 30 13:30:10.483364 kernel: Policy zone: Normal Apr 30 13:30:10.483369 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 13:30:10.483376 kernel: software IO TLB: area num 16. Apr 30 13:30:10.483382 kernel: Memory: 32718248K/33452984K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 734476K reserved, 0K cma-reserved) Apr 30 13:30:10.483387 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Apr 30 13:30:10.483393 kernel: ftrace: allocating 37918 entries in 149 pages Apr 30 13:30:10.483399 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 13:30:10.483404 kernel: Dynamic Preempt: voluntary Apr 30 13:30:10.483410 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 13:30:10.483416 kernel: rcu: RCU event tracing is enabled. Apr 30 13:30:10.483421 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Apr 30 13:30:10.483428 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 13:30:10.483434 kernel: Rude variant of Tasks RCU enabled. Apr 30 13:30:10.483439 kernel: Tracing variant of Tasks RCU enabled. Apr 30 13:30:10.483445 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 13:30:10.483450 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Apr 30 13:30:10.483456 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Apr 30 13:30:10.483461 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 30 13:30:10.483467 kernel: Console: colour VGA+ 80x25 Apr 30 13:30:10.483472 kernel: printk: console [tty0] enabled Apr 30 13:30:10.483479 kernel: printk: console [ttyS1] enabled Apr 30 13:30:10.483485 kernel: ACPI: Core revision 20230628 Apr 30 13:30:10.483490 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Apr 30 13:30:10.483496 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 13:30:10.483502 kernel: DMAR: Host address width 39 Apr 30 13:30:10.483507 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Apr 30 13:30:10.483513 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Apr 30 13:30:10.483518 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff Apr 30 13:30:10.483524 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Apr 30 13:30:10.483530 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Apr 30 13:30:10.483536 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Apr 30 13:30:10.483542 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Apr 30 13:30:10.483547 kernel: x2apic enabled Apr 30 13:30:10.483553 kernel: APIC: Switched APIC routing to: cluster x2apic Apr 30 13:30:10.483558 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 30 13:30:10.483564 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Apr 30 13:30:10.483570 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Apr 30 13:30:10.483575 kernel: CPU0: Thermal monitoring enabled (TM1) Apr 30 13:30:10.483582 kernel: process: using mwait in idle threads Apr 30 13:30:10.483588 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 30 13:30:10.483593 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Apr 30 13:30:10.483599 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 13:30:10.483605 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 30 13:30:10.483610 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 30 13:30:10.483616 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Apr 30 13:30:10.483621 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 13:30:10.483627 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Apr 30 13:30:10.483633 kernel: RETBleed: Mitigation: Enhanced IBRS Apr 30 13:30:10.483639 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 13:30:10.483645 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 13:30:10.483650 kernel: TAA: Mitigation: TSX disabled Apr 30 13:30:10.483656 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Apr 30 13:30:10.483662 kernel: SRBDS: Mitigation: Microcode Apr 30 13:30:10.483667 kernel: GDS: Mitigation: Microcode Apr 30 13:30:10.483673 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 13:30:10.483678 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 13:30:10.483685 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 13:30:10.483691 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Apr 30 13:30:10.483696 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Apr 30 13:30:10.483702 kernel: x86/fpu: 
xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 13:30:10.483707 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Apr 30 13:30:10.483715 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Apr 30 13:30:10.483720 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Apr 30 13:30:10.483726 kernel: Freeing SMP alternatives memory: 32K Apr 30 13:30:10.483753 kernel: pid_max: default: 32768 minimum: 301 Apr 30 13:30:10.483777 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 13:30:10.483782 kernel: landlock: Up and running. Apr 30 13:30:10.483788 kernel: SELinux: Initializing. Apr 30 13:30:10.483794 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 13:30:10.483799 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 30 13:30:10.483805 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Apr 30 13:30:10.483810 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 13:30:10.483816 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 13:30:10.483822 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 13:30:10.483828 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Apr 30 13:30:10.483834 kernel: ... version: 4 Apr 30 13:30:10.483840 kernel: ... bit width: 48 Apr 30 13:30:10.483845 kernel: ... generic registers: 4 Apr 30 13:30:10.483851 kernel: ... value mask: 0000ffffffffffff Apr 30 13:30:10.483856 kernel: ... max period: 00007fffffffffff Apr 30 13:30:10.483862 kernel: ... fixed-purpose events: 3 Apr 30 13:30:10.483867 kernel: ... event mask: 000000070000000f Apr 30 13:30:10.483873 kernel: signal: max sigframe size: 2032 Apr 30 13:30:10.483879 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Apr 30 13:30:10.483885 kernel: rcu: Hierarchical SRCU implementation. Apr 30 13:30:10.483891 kernel: rcu: Max phase no-delay instances is 400. Apr 30 13:30:10.483896 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Apr 30 13:30:10.483902 kernel: smp: Bringing up secondary CPUs ... Apr 30 13:30:10.483907 kernel: smpboot: x86: Booting SMP configuration: Apr 30 13:30:10.483913 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Apr 30 13:30:10.483919 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Apr 30 13:30:10.483926 kernel: smp: Brought up 1 node, 16 CPUs Apr 30 13:30:10.483931 kernel: smpboot: Max logical packages: 1 Apr 30 13:30:10.483937 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Apr 30 13:30:10.483942 kernel: devtmpfs: initialized Apr 30 13:30:10.483948 kernel: x86/mm: Memory block size: 128MB Apr 30 13:30:10.483954 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819c4000-0x819c4fff] (4096 bytes) Apr 30 13:30:10.483959 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes) Apr 30 13:30:10.483965 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 13:30:10.483970 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Apr 30 13:30:10.483977 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 13:30:10.483983 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 13:30:10.483988 kernel: audit: initializing netlink subsys (disabled) Apr 30 13:30:10.483994 kernel: audit: type=2000 audit(1746019804.121:1): state=initialized audit_enabled=0 res=1 Apr 30 13:30:10.483999 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 13:30:10.484005 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 13:30:10.484010 kernel: cpuidle: using governor menu Apr 30 13:30:10.484016 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 13:30:10.484021 kernel: dca service started, version 1.12.1 Apr 30 13:30:10.484028 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Apr 30 13:30:10.484033 kernel: PCI: Using configuration type 1 for base access Apr 30 13:30:10.484039 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Apr 30 13:30:10.484044 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 13:30:10.484050 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 13:30:10.484056 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 13:30:10.484061 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 13:30:10.484067 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 13:30:10.484072 kernel: ACPI: Added _OSI(Module Device) Apr 30 13:30:10.484079 kernel: ACPI: Added _OSI(Processor Device) Apr 30 13:30:10.484084 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 13:30:10.484090 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 13:30:10.484095 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Apr 30 13:30:10.484101 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:30:10.484107 kernel: ACPI: SSDT 0xFFFF8DF800E38400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Apr 30 13:30:10.484112 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:30:10.484118 kernel: ACPI: SSDT 0xFFFF8DF801E0D800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Apr 30 13:30:10.484123 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:30:10.484130 kernel: ACPI: SSDT 0xFFFF8DF800DE5500 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Apr 30 13:30:10.484135 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:30:10.484141 kernel: ACPI: SSDT 0xFFFF8DF801E0B000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Apr 30 13:30:10.484146 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:30:10.484152 kernel: ACPI: SSDT 0xFFFF8DF800E50000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Apr 30 13:30:10.484157 kernel: ACPI: Dynamic OEM Table Load: Apr 30 13:30:10.484163 kernel: ACPI: SSDT 0xFFFF8DF802429C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Apr 30 13:30:10.484169 kernel: ACPI: _OSC evaluated successfully for all CPUs Apr 30 13:30:10.484174 kernel: ACPI: Interpreter enabled Apr 30 13:30:10.484180 kernel: ACPI: PM: (supports S0 S5) Apr 30 13:30:10.484186 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 13:30:10.484192 kernel: HEST: Enabling Firmware First mode for corrected errors. Apr 30 13:30:10.484197 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Apr 30 13:30:10.484203 kernel: HEST: Table parsing has been initialized. Apr 30 13:30:10.484208 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Apr 30 13:30:10.484214 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 13:30:10.484220 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 30 13:30:10.484225 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Apr 30 13:30:10.484231 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Apr 30 13:30:10.484238 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Apr 30 13:30:10.484243 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Apr 30 13:30:10.484249 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Apr 30 13:30:10.484254 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Apr 30 13:30:10.484260 kernel: ACPI: \_TZ_.FN00: New power resource Apr 30 13:30:10.484266 kernel: ACPI: \_TZ_.FN01: New power resource Apr 30 13:30:10.484271 kernel: ACPI: \_TZ_.FN02: New power resource Apr 30 13:30:10.484277 kernel: ACPI: \_TZ_.FN03: New power resource Apr 30 13:30:10.484282 kernel: ACPI: \_TZ_.FN04: New power resource Apr 30 13:30:10.484289 kernel: ACPI: \PIN_: New power resource Apr 30 13:30:10.484295 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Apr 30 13:30:10.484371 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 13:30:10.484425 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Apr 30 13:30:10.484474 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Apr 30 13:30:10.484483 kernel: PCI host bridge to bus 0000:00 Apr 30 13:30:10.484534 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 13:30:10.484583 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 13:30:10.484629 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 13:30:10.484673 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Apr 30 13:30:10.484720 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Apr 30 13:30:10.484806 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Apr 30 13:30:10.484869 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Apr 30 13:30:10.484933 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Apr 30 13:30:10.484987 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Apr 30 13:30:10.485042 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Apr 30 13:30:10.485095 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Apr 30 13:30:10.485150 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Apr 30 13:30:10.485202 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Apr 30 13:30:10.485259 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Apr 30 13:30:10.485310 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Apr 30 13:30:10.485364 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Apr 30 13:30:10.485415 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Apr 30 13:30:10.485465 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Apr 30 13:30:10.485518 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Apr 30 13:30:10.485570 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Apr 30 13:30:10.485623 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Apr 30 13:30:10.485679 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Apr 30 13:30:10.485734 kernel: pci 0000:00:15.0: reg 
0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 13:30:10.485789 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Apr 30 13:30:10.485841 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 13:30:10.485898 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Apr 30 13:30:10.485957 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Apr 30 13:30:10.486009 kernel: pci 0000:00:16.0: PME# supported from D3hot Apr 30 13:30:10.486063 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Apr 30 13:30:10.486114 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Apr 30 13:30:10.486164 kernel: pci 0000:00:16.1: PME# supported from D3hot Apr 30 13:30:10.486218 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Apr 30 13:30:10.486271 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Apr 30 13:30:10.486321 kernel: pci 0000:00:16.4: PME# supported from D3hot Apr 30 13:30:10.486375 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Apr 30 13:30:10.486425 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Apr 30 13:30:10.486476 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Apr 30 13:30:10.486525 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Apr 30 13:30:10.486575 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Apr 30 13:30:10.486627 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Apr 30 13:30:10.486679 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Apr 30 13:30:10.486731 kernel: pci 0000:00:17.0: PME# supported from D3hot Apr 30 13:30:10.486830 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Apr 30 13:30:10.486882 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Apr 30 13:30:10.486940 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Apr 30 13:30:10.486992 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Apr 30 13:30:10.487046 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Apr 30 13:30:10.487098 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Apr 30 13:30:10.487152 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Apr 30 13:30:10.487207 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Apr 30 13:30:10.487261 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Apr 30 13:30:10.487314 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Apr 30 13:30:10.487369 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Apr 30 13:30:10.487420 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Apr 30 13:30:10.487477 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Apr 30 13:30:10.487534 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Apr 30 13:30:10.487585 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Apr 30 13:30:10.487635 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Apr 30 13:30:10.487690 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Apr 30 13:30:10.487743 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Apr 30 13:30:10.487795 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 13:30:10.487852 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Apr 30 13:30:10.487908 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Apr 30 13:30:10.487960 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Apr 30 13:30:10.488012 kernel: pci 0000:02:00.0: 
PME# supported from D3cold Apr 30 13:30:10.488063 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 30 13:30:10.488115 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Apr 30 13:30:10.488171 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Apr 30 13:30:10.488227 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Apr 30 13:30:10.488280 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Apr 30 13:30:10.488333 kernel: pci 0000:02:00.1: PME# supported from D3cold Apr 30 13:30:10.488385 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Apr 30 13:30:10.488436 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Apr 30 13:30:10.488487 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Apr 30 13:30:10.488538 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Apr 30 13:30:10.488589 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 13:30:10.488643 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Apr 30 13:30:10.488699 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Apr 30 13:30:10.488784 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Apr 30 13:30:10.488852 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Apr 30 13:30:10.488905 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Apr 30 13:30:10.488956 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Apr 30 13:30:10.489008 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Apr 30 13:30:10.489060 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Apr 30 13:30:10.489113 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 13:30:10.489164 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 13:30:10.489222 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Apr 30 13:30:10.489275 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Apr 30 13:30:10.489327 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Apr 30 13:30:10.489479 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Apr 30 13:30:10.489547 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Apr 30 13:30:10.489602 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Apr 30 13:30:10.489655 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Apr 30 13:30:10.489705 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 13:30:10.489809 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 13:30:10.489904 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Apr 30 13:30:10.489961 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Apr 30 13:30:10.490014 kernel: pci 0000:07:00.0: enabling Extended Tags Apr 30 13:30:10.490066 kernel: pci 0000:07:00.0: supports D1 D2 Apr 30 13:30:10.490122 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 13:30:10.490175 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Apr 30 13:30:10.490227 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Apr 30 13:30:10.490278 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:30:10.490333 kernel: pci_bus 0000:08: extended config space not accessible Apr 30 13:30:10.490395 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Apr 30 13:30:10.490450 kernel: pci 0000:08:00.0: reg 0x10: [mem 
0x94000000-0x94ffffff] Apr 30 13:30:10.490507 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Apr 30 13:30:10.490561 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Apr 30 13:30:10.490617 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 13:30:10.490719 kernel: pci 0000:08:00.0: supports D1 D2 Apr 30 13:30:10.490780 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 30 13:30:10.490936 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Apr 30 13:30:10.491044 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Apr 30 13:30:10.491186 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:30:10.491195 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Apr 30 13:30:10.491202 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Apr 30 13:30:10.491233 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Apr 30 13:30:10.491261 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Apr 30 13:30:10.491267 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Apr 30 13:30:10.491273 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Apr 30 13:30:10.491279 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Apr 30 13:30:10.491301 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Apr 30 13:30:10.491309 kernel: iommu: Default domain type: Translated Apr 30 13:30:10.491315 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 13:30:10.491321 kernel: PCI: Using ACPI for IRQ routing Apr 30 13:30:10.491327 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 13:30:10.491333 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Apr 30 13:30:10.491339 kernel: e820: reserve RAM buffer [mem 0x819c4000-0x83ffffff] Apr 30 13:30:10.491346 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Apr 30 13:30:10.491352 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Apr 30 13:30:10.491378 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Apr 30 13:30:10.491385 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Apr 30 13:30:10.491444 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Apr 30 13:30:10.491505 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Apr 30 13:30:10.491563 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 13:30:10.491572 kernel: vgaarb: loaded Apr 30 13:30:10.491578 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Apr 30 13:30:10.491584 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Apr 30 13:30:10.491590 kernel: clocksource: Switched to clocksource tsc-early Apr 30 13:30:10.491596 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 13:30:10.491604 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 13:30:10.491610 kernel: pnp: PnP ACPI init Apr 30 13:30:10.491670 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Apr 30 13:30:10.491727 kernel: pnp 00:02: [dma 0 disabled] Apr 30 13:30:10.491783 kernel: pnp 00:03: [dma 0 disabled] Apr 30 13:30:10.491833 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Apr 30 13:30:10.491883 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Apr 30 13:30:10.491934 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Apr 30 13:30:10.491985 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Apr 30 13:30:10.492033 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has 
been reserved Apr 30 13:30:10.492080 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Apr 30 13:30:10.492126 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Apr 30 13:30:10.492173 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Apr 30 13:30:10.492223 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Apr 30 13:30:10.492269 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Apr 30 13:30:10.492325 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Apr 30 13:30:10.492383 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Apr 30 13:30:10.492431 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Apr 30 13:30:10.492479 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Apr 30 13:30:10.492525 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Apr 30 13:30:10.492575 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Apr 30 13:30:10.492622 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Apr 30 13:30:10.492669 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Apr 30 13:30:10.492775 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Apr 30 13:30:10.492784 kernel: pnp: PnP ACPI: found 10 devices Apr 30 13:30:10.492791 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 13:30:10.492813 kernel: NET: Registered PF_INET protocol family Apr 30 13:30:10.492821 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 13:30:10.492827 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Apr 30 13:30:10.492833 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 13:30:10.492839 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 13:30:10.492845 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 30 13:30:10.492851 kernel: TCP: Hash tables configured (established 262144 bind 65536) Apr 30 13:30:10.492857 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 13:30:10.492862 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 13:30:10.492868 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 13:30:10.492878 kernel: NET: Registered PF_XDP protocol family Apr 30 13:30:10.492967 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Apr 30 13:30:10.493022 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Apr 30 13:30:10.493075 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Apr 30 13:30:10.493126 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Apr 30 13:30:10.493181 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 13:30:10.493234 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 13:30:10.493288 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Apr 30 13:30:10.493345 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Apr 30 13:30:10.493396 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Apr 30 13:30:10.493449 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Apr 30 13:30:10.493500 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Apr 
30 13:30:10.493553 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Apr 30 13:30:10.493607 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Apr 30 13:30:10.493659 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Apr 30 13:30:10.493715 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Apr 30 13:30:10.493783 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Apr 30 13:30:10.493834 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Apr 30 13:30:10.493884 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Apr 30 13:30:10.493935 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Apr 30 13:30:10.493986 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Apr 30 13:30:10.494043 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Apr 30 13:30:10.494130 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:30:10.494180 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Apr 30 13:30:10.494232 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Apr 30 13:30:10.494282 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Apr 30 13:30:10.494329 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Apr 30 13:30:10.494374 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 13:30:10.494419 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 13:30:10.494463 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 13:30:10.494511 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Apr 30 13:30:10.494556 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Apr 30 13:30:10.494609 kernel: pci_bus 0000:02: resource 1 [mem 0x95100000-0x952fffff] Apr 30 13:30:10.494657 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Apr 30 13:30:10.494709 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Apr 30 13:30:10.494807 kernel: pci_bus 0000:04: resource 1 [mem 0x95400000-0x954fffff] Apr 30 13:30:10.494859 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Apr 30 13:30:10.494907 kernel: pci_bus 0000:05: resource 1 [mem 0x95300000-0x953fffff] Apr 30 13:30:10.494957 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Apr 30 13:30:10.495003 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Apr 30 13:30:10.495053 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Apr 30 13:30:10.495100 kernel: pci_bus 0000:08: resource 1 [mem 0x94000000-0x950fffff] Apr 30 13:30:10.495109 kernel: PCI: CLS 64 bytes, default 64 Apr 30 13:30:10.495117 kernel: DMAR: No ATSR found Apr 30 13:30:10.495123 kernel: DMAR: No SATC found Apr 30 13:30:10.495128 kernel: DMAR: dmar0: Using Queued invalidation Apr 30 13:30:10.495180 kernel: pci 0000:00:00.0: Adding to iommu group 0 Apr 30 13:30:10.495231 kernel: pci 0000:00:01.0: Adding to iommu group 1 Apr 30 13:30:10.495282 kernel: pci 0000:00:01.1: Adding to iommu group 1 Apr 30 13:30:10.495333 kernel: pci 0000:00:08.0: Adding to iommu group 2 Apr 30 13:30:10.495384 kernel: pci 0000:00:12.0: Adding to iommu group 3 Apr 30 13:30:10.495434 kernel: pci 0000:00:14.0: Adding to iommu group 4 Apr 30 13:30:10.495487 kernel: pci 0000:00:14.2: Adding to iommu group 4 Apr 30 13:30:10.495537 kernel: pci 0000:00:15.0: Adding to iommu group 5 Apr 30 13:30:10.495585 kernel: pci 0000:00:15.1: Adding to iommu group 5 Apr 30 13:30:10.495635 kernel: pci 0000:00:16.0: Adding to iommu group 6 Apr 30 13:30:10.495685 kernel: pci 0000:00:16.1: 
Adding to iommu group 6 Apr 30 13:30:10.495764 kernel: pci 0000:00:16.4: Adding to iommu group 6 Apr 30 13:30:10.495831 kernel: pci 0000:00:17.0: Adding to iommu group 7 Apr 30 13:30:10.495883 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Apr 30 13:30:10.495936 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Apr 30 13:30:10.495985 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Apr 30 13:30:10.496036 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Apr 30 13:30:10.496086 kernel: pci 0000:00:1c.1: Adding to iommu group 12 Apr 30 13:30:10.496135 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Apr 30 13:30:10.496185 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Apr 30 13:30:10.496236 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Apr 30 13:30:10.496286 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Apr 30 13:30:10.496340 kernel: pci 0000:02:00.0: Adding to iommu group 1 Apr 30 13:30:10.496392 kernel: pci 0000:02:00.1: Adding to iommu group 1 Apr 30 13:30:10.496443 kernel: pci 0000:04:00.0: Adding to iommu group 15 Apr 30 13:30:10.496495 kernel: pci 0000:05:00.0: Adding to iommu group 16 Apr 30 13:30:10.496545 kernel: pci 0000:07:00.0: Adding to iommu group 17 Apr 30 13:30:10.496599 kernel: pci 0000:08:00.0: Adding to iommu group 17 Apr 30 13:30:10.496607 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Apr 30 13:30:10.496614 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 13:30:10.496621 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Apr 30 13:30:10.496628 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Apr 30 13:30:10.496633 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Apr 30 13:30:10.496639 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Apr 30 13:30:10.496645 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Apr 30 13:30:10.496697 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Apr 30 13:30:10.496707 kernel: Initialise system trusted keyrings Apr 30 13:30:10.496715 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Apr 30 13:30:10.496723 kernel: Key type asymmetric registered Apr 30 13:30:10.496729 kernel: Asymmetric key parser 'x509' registered Apr 30 13:30:10.496760 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 13:30:10.496767 kernel: io scheduler mq-deadline registered Apr 30 13:30:10.496796 kernel: io scheduler kyber registered Apr 30 13:30:10.496802 kernel: io scheduler bfq registered Apr 30 13:30:10.496870 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Apr 30 13:30:10.496921 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 122 Apr 30 13:30:10.496971 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 123 Apr 30 13:30:10.497024 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 124 Apr 30 13:30:10.497073 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 125 Apr 30 13:30:10.497123 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 126 Apr 30 13:30:10.497174 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 127 Apr 30 13:30:10.497230 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Apr 30 13:30:10.497239 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Apr 30 13:30:10.497245 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Apr 30 13:30:10.497253 kernel: pstore: Using crash dump compression: deflate Apr 30 13:30:10.497259 kernel: pstore: Registered erst as persistent store backend Apr 30 13:30:10.497265 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 13:30:10.497271 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 13:30:10.497277 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 13:30:10.497282 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 30 13:30:10.497334 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Apr 30 13:30:10.497343 kernel: i8042: PNP: No PS/2 controller found. Apr 30 13:30:10.497390 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Apr 30 13:30:10.497437 kernel: rtc_cmos rtc_cmos: registered as rtc0 Apr 30 13:30:10.497483 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-04-30T13:30:09 UTC (1746019809) Apr 30 13:30:10.497529 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Apr 30 13:30:10.497538 kernel: intel_pstate: Intel P-state driver initializing Apr 30 13:30:10.497544 kernel: intel_pstate: Disabling energy efficiency optimization Apr 30 13:30:10.497550 kernel: intel_pstate: HWP enabled Apr 30 13:30:10.497556 kernel: NET: Registered PF_INET6 protocol family Apr 30 13:30:10.497561 kernel: Segment Routing with IPv6 Apr 30 13:30:10.497569 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 13:30:10.497575 kernel: NET: Registered PF_PACKET protocol family Apr 30 13:30:10.497580 kernel: Key type dns_resolver registered Apr 30 13:30:10.497586 kernel: microcode: Current revision: 0x00000102 Apr 30 13:30:10.497592 kernel: microcode: Microcode Update Driver: v2.2. Apr 30 13:30:10.497598 kernel: IPI shorthand broadcast: enabled Apr 30 13:30:10.497604 kernel: sched_clock: Marking stable (2618137946, 1435053886)->(4559232430, -506040598) Apr 30 13:30:10.497609 kernel: registered taskstats version 1 Apr 30 13:30:10.497615 kernel: Loading compiled-in X.509 certificates Apr 30 13:30:10.497622 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 10d2d341d26c1df942e743344427c053ef3a2a5f' Apr 30 13:30:10.497628 kernel: Key type .fscrypt registered Apr 30 13:30:10.497633 kernel: Key type fscrypt-provisioning registered Apr 30 13:30:10.497639 kernel: ima: Allocated hash algorithm: sha1 Apr 30 13:30:10.497645 kernel: ima: No architecture policies found Apr 30 13:30:10.497651 kernel: clk: Disabling unused clocks Apr 30 13:30:10.497656 kernel: Freeing unused kernel image (initmem) memory: 43484K Apr 30 13:30:10.497662 kernel: Write protecting the kernel read-only data: 38912k Apr 30 13:30:10.497669 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K Apr 30 13:30:10.497675 kernel: Run /init as init process Apr 30 13:30:10.497681 kernel: with arguments: Apr 30 13:30:10.497687 kernel: /init Apr 30 13:30:10.497692 kernel: with environment: Apr 30 13:30:10.497698 kernel: HOME=/ Apr 30 13:30:10.497704 kernel: TERM=linux Apr 30 13:30:10.497709 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 13:30:10.497718 systemd[1]: Successfully made /usr/ read-only. 
Apr 30 13:30:10.497727 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 13:30:10.497760 systemd[1]: Detected architecture x86-64. Apr 30 13:30:10.497766 systemd[1]: Running in initrd. Apr 30 13:30:10.497772 systemd[1]: No hostname configured, using default hostname. Apr 30 13:30:10.497793 systemd[1]: Hostname set to . Apr 30 13:30:10.497813 systemd[1]: Initializing machine ID from random generator. Apr 30 13:30:10.497819 systemd[1]: Queued start job for default target initrd.target. Apr 30 13:30:10.497826 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 13:30:10.497832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 13:30:10.497839 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 13:30:10.497845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 13:30:10.497851 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 13:30:10.497857 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 13:30:10.497864 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 13:30:10.497871 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 30 13:30:10.497878 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 13:30:10.497884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 13:30:10.497890 systemd[1]: Reached target paths.target - Path Units. Apr 30 13:30:10.497896 systemd[1]: Reached target slices.target - Slice Units. Apr 30 13:30:10.497902 systemd[1]: Reached target swap.target - Swaps. Apr 30 13:30:10.497908 systemd[1]: Reached target timers.target - Timer Units. Apr 30 13:30:10.497914 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 13:30:10.497920 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 13:30:10.497927 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 13:30:10.497934 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 30 13:30:10.497940 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 13:30:10.497946 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 13:30:10.497952 kernel: tsc: Refined TSC clocksource calibration: 3407.985 MHz Apr 30 13:30:10.497959 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 13:30:10.497965 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fc5a980c, max_idle_ns: 440795300013 ns Apr 30 13:30:10.497970 kernel: clocksource: Switched to clocksource tsc Apr 30 13:30:10.497977 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 13:30:10.497984 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Apr 30 13:30:10.497990 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 13:30:10.497996 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 13:30:10.498002 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 13:30:10.498008 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 13:30:10.498014 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 13:30:10.498032 systemd-journald[269]: Collecting audit messages is disabled. Apr 30 13:30:10.498047 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:30:10.498054 systemd-journald[269]: Journal started Apr 30 13:30:10.498068 systemd-journald[269]: Runtime Journal (/run/log/journal/c5c7febdae634a1ba51388fa62c58d18) is 8M, max 639.9M, 631.9M free. Apr 30 13:30:10.491646 systemd-modules-load[270]: Inserted module 'overlay' Apr 30 13:30:10.510348 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 13:30:10.547853 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 13:30:10.547866 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 13:30:10.547874 kernel: Bridge firewalling registered Apr 30 13:30:10.516224 systemd-modules-load[270]: Inserted module 'br_netfilter' Apr 30 13:30:10.547913 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 13:30:10.597135 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 13:30:10.606102 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 13:30:10.623325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:30:10.659986 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 13:30:10.671373 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:30:10.671852 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 13:30:10.672370 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 13:30:10.677152 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 13:30:10.678044 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 13:30:10.678187 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:30:10.679179 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 13:30:10.680386 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 13:30:10.693028 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:30:10.703103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 13:30:10.709626 systemd-resolved[306]: Positive Trust Anchors: Apr 30 13:30:10.709633 systemd-resolved[306]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 13:30:10.709665 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 13:30:10.711855 systemd-resolved[306]: Defaulting to hostname 'linux'. Apr 30 13:30:10.724103 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 13:30:10.756432 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 13:30:10.790187 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 13:30:10.890092 dracut-cmdline[311]: dracut-dracut-053 Apr 30 13:30:10.890092 dracut-cmdline[311]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 13:30:11.053752 kernel: SCSI subsystem initialized Apr 30 13:30:11.067729 kernel: Loading iSCSI transport class v2.0-870. Apr 30 13:30:11.080765 kernel: iscsi: registered transport (tcp) Apr 30 13:30:11.101728 kernel: iscsi: registered transport (qla4xxx) Apr 30 13:30:11.101745 kernel: QLogic iSCSI HBA Driver Apr 30 13:30:11.125190 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 13:30:11.145943 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 13:30:11.234563 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 13:30:11.234592 kernel: device-mapper: uevent: version 1.0.3 Apr 30 13:30:11.243321 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 13:30:11.278782 kernel: raid6: avx2x4 gen() 47072 MB/s Apr 30 13:30:11.299746 kernel: raid6: avx2x2 gen() 53726 MB/s Apr 30 13:30:11.325844 kernel: raid6: avx2x1 gen() 45118 MB/s Apr 30 13:30:11.325863 kernel: raid6: using algorithm avx2x2 gen() 53726 MB/s Apr 30 13:30:11.352935 kernel: raid6: .... xor() 32455 MB/s, rmw enabled Apr 30 13:30:11.352954 kernel: raid6: using avx2x2 recovery algorithm Apr 30 13:30:11.373746 kernel: xor: automatically using best checksumming function avx Apr 30 13:30:11.471755 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 13:30:11.477186 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 13:30:11.502045 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 13:30:11.509783 systemd-udevd[497]: Using default interface naming scheme 'v255'. Apr 30 13:30:11.512582 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 13:30:11.548982 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 30 13:30:11.577998 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Apr 30 13:30:11.608036 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 13:30:11.642044 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 13:30:11.709636 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 13:30:11.749308 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 30 13:30:11.749330 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 30 13:30:11.749344 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 13:30:11.749357 kernel: PTP clock support registered Apr 30 13:30:11.749374 kernel: libata version 3.00 loaded. Apr 30 13:30:11.721937 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 13:30:11.771407 kernel: ACPI: bus type USB registered Apr 30 13:30:11.771425 kernel: usbcore: registered new interface driver usbfs Apr 30 13:30:11.771433 kernel: usbcore: registered new interface driver hub Apr 30 13:30:11.771440 kernel: usbcore: registered new device driver usb Apr 30 13:30:11.773398 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 13:30:12.063777 kernel: AVX2 version of gcm_enc/dec engaged. Apr 30 13:30:12.063794 kernel: AES CTR mode by8 optimization enabled Apr 30 13:30:12.063802 kernel: ahci 0000:00:17.0: version 3.0 Apr 30 13:30:12.063894 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Apr 30 13:30:12.063963 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Apr 30 13:30:12.064027 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 13:30:12.064093 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Apr 30 13:30:12.064158 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Apr 30 13:30:12.064166 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Apr 30 13:30:12.064228 kernel: scsi host0: ahci Apr 30 13:30:12.064292 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Apr 30 13:30:12.064300 kernel: scsi host1: ahci Apr 30 13:30:12.064358 kernel: scsi host2: ahci Apr 30 13:30:12.064415 kernel: scsi host3: ahci Apr 30 13:30:12.064475 kernel: scsi host4: ahci Apr 30 13:30:12.064538 kernel: scsi host5: ahci Apr 30 13:30:12.064594 kernel: scsi host6: ahci Apr 30 13:30:12.064652 kernel: scsi host7: ahci Apr 30 13:30:12.064708 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128 Apr 30 13:30:12.064722 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128 Apr 30 13:30:12.064730 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128 Apr 30 13:30:12.064739 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128 Apr 30 13:30:12.064747 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128 Apr 30 13:30:12.064754 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128 Apr 30 13:30:12.064761 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128 Apr 30 13:30:12.064768 kernel: ata8: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516480 irq 128 Apr 30 13:30:12.064776 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Apr 30 13:30:12.064841 kernel: igb 0000:04:00.0: added PHC on eth0 Apr 30 13:30:12.064912 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Apr 30 13:30:12.064975 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 13:30:12.065041 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Apr 30 13:30:12.065109 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:52 Apr 30 13:30:12.065173 kernel: hub 1-0:1.0: USB hub found Apr 30 13:30:12.065241 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Apr 30 13:30:12.065305 kernel: hub 1-0:1.0: 16 ports detected Apr 30 13:30:12.065364 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Apr 30 13:30:12.065427 kernel: hub 2-0:1.0: USB hub found Apr 30 13:30:12.065496 kernel: igb 0000:05:00.0: added PHC on eth1 Apr 30 13:30:12.065562 kernel: hub 2-0:1.0: 10 ports detected Apr 30 13:30:12.065622 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Apr 30 13:30:12.065684 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:53 Apr 30 13:30:12.065753 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Apr 30 13:30:12.065818 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Apr 30 13:30:12.065881 kernel: mlx5_core 0000:02:00.0: firmware version: 14.31.1014 Apr 30 13:30:12.564776 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 13:30:12.564895 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 13:30:12.564910 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 30 13:30:12.564924 kernel: ata7: SATA link down (SStatus 0 SControl 300) Apr 30 13:30:12.564937 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 13:30:12.564950 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 13:30:12.564963 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 13:30:12.564976 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Apr 30 13:30:12.564989 kernel: ata8: SATA link down (SStatus 0 SControl 300) Apr 30 13:30:12.565002 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 30 13:30:12.565016 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Apr 30 13:30:12.565029 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 13:30:12.565042 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Apr 30 13:30:12.565055 kernel: ata2.00: Features: NCQ-prio Apr 30 13:30:12.565067 kernel: ata1.00: Features: NCQ-prio Apr 30 13:30:12.565081 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Apr 30 13:30:12.691628 kernel: ata2.00: configured for UDMA/133 Apr 30 13:30:12.691639 kernel: ata1.00: configured for UDMA/133 Apr 30 13:30:12.691650 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 30 13:30:12.691745 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Apr 30 13:30:12.691866 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Apr 30 13:30:12.691977 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Apr 30 13:30:12.692085 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 13:30:12.692100 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 13:30:12.692113 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 13:30:12.692215 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Apr 30 13:30:12.692354 kernel: hub 1-14:1.0: USB hub found Apr 30 13:30:12.692497 kernel: hub 1-14:1.0: 4 ports detected Apr 30 13:30:12.692612 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Apr 30 13:30:12.692711 kernel: sd 1:0:0:0: [sdb] Write Protect is off Apr 30 13:30:12.692820 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Apr 30 13:30:12.692922 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 30 13:30:12.693021 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Apr 30 13:30:12.693119 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Apr 30 13:30:12.693215 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Apr 30 13:30:12.693327 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 13:30:12.693425 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged Apr 30 13:30:12.693529 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 30 13:30:12.693627 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Apr 30 13:30:12.693729 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Apr 30 13:30:12.693827 kernel: ata2.00: Enabling discard_zeroes_data Apr 30 13:30:12.693842 kernel: 
ata1.00: Enabling discard_zeroes_data Apr 30 13:30:12.693854 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Apr 30 13:30:12.693948 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 13:30:12.693963 kernel: GPT:9289727 != 937703087 Apr 30 13:30:12.693976 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 13:30:12.693989 kernel: GPT:9289727 != 937703087 Apr 30 13:30:12.694001 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 13:30:12.694014 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 13:30:12.694029 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 30 13:30:12.694126 kernel: BTRFS: device fsid 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (576) Apr 30 13:30:12.694140 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (575) Apr 30 13:30:12.694153 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 13:30:12.694261 kernel: mlx5_core 0000:02:00.1: firmware version: 14.31.1014 Apr 30 13:30:13.180807 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Apr 30 13:30:13.180922 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Apr 30 13:30:13.181063 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 13:30:13.181079 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 13:30:13.181092 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 13:30:13.181106 kernel: usbcore: registered new interface driver usbhid Apr 30 13:30:13.181119 kernel: usbhid: USB HID core driver Apr 30 13:30:13.181133 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Apr 30 13:30:13.181147 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Apr 30 13:30:13.181274 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Apr 30 13:30:13.181293 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Apr 30 13:30:13.181417 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Apr 30 13:30:13.181533 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Apr 30 13:30:13.181632 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Apr 30 13:30:11.773473 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:30:13.204864 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Apr 30 13:30:13.204945 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Apr 30 13:30:12.074873 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 13:30:12.074901 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 13:30:12.074999 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:30:12.097852 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:30:12.124934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:30:12.140178 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Apr 30 13:30:12.154200 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 13:30:13.279897 disk-uuid[701]: Primary Header is updated. Apr 30 13:30:13.279897 disk-uuid[701]: Secondary Entries is updated. Apr 30 13:30:13.279897 disk-uuid[701]: Secondary Header is updated. Apr 30 13:30:12.249431 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 13:30:12.370463 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 13:30:12.472857 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 13:30:12.481888 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:30:12.511119 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 13:30:12.546829 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Apr 30 13:30:12.583780 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Apr 30 13:30:12.607368 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Apr 30 13:30:12.618797 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Apr 30 13:30:12.636439 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Apr 30 13:30:12.666872 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 13:30:12.688170 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 13:30:12.711420 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:30:13.692134 kernel: ata1.00: Enabling discard_zeroes_data Apr 30 13:30:13.699297 disk-uuid[702]: The operation has completed successfully. Apr 30 13:30:13.707837 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 30 13:30:13.735633 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 13:30:13.735681 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 13:30:13.792993 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 13:30:13.818770 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 30 13:30:13.818827 sh[743]: Success Apr 30 13:30:13.852548 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 13:30:13.868692 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 13:30:13.875212 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 13:30:13.917773 kernel: BTRFS info (device dm-0): first mount of filesystem 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 Apr 30 13:30:13.917793 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:30:13.928545 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 13:30:13.936708 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 13:30:13.942618 kernel: BTRFS info (device dm-0): using free space tree Apr 30 13:30:13.956779 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 13:30:13.959388 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Apr 30 13:30:13.968132 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 13:30:13.988899 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 13:30:13.994454 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 13:30:14.062881 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:30:14.062895 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:30:14.062904 kernel: BTRFS info (device sda6): using free space tree Apr 30 13:30:14.062912 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 13:30:14.062920 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 13:30:14.062928 kernel: BTRFS info (device sda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:30:14.063038 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 13:30:14.091877 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 13:30:14.157494 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 13:30:14.180856 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 13:30:14.188208 unknown[800]: fetched base config from "system" Apr 30 13:30:14.185828 ignition[800]: Ignition 2.20.0 Apr 30 13:30:14.188212 unknown[800]: fetched user config from "system" Apr 30 13:30:14.185832 ignition[800]: Stage: fetch-offline Apr 30 13:30:14.191309 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 13:30:14.185853 ignition[800]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:30:14.193977 systemd-networkd[923]: lo: Link UP Apr 30 13:30:14.185858 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:30:14.193979 systemd-networkd[923]: lo: Gained carrier Apr 30 13:30:14.185907 ignition[800]: parsed url from cmdline: "" Apr 30 13:30:14.196528 systemd-networkd[923]: Enumeration completed Apr 30 13:30:14.185909 ignition[800]: no config URL provided Apr 30 13:30:14.197472 systemd-networkd[923]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 13:30:14.185912 ignition[800]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 13:30:14.213005 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 13:30:14.185933 ignition[800]: parsing config with SHA512: 7779faa5806185153c6eb25e6e4373851c13ba78de2299b585aedcbe585acf60ba01a2a5cdb8e8822278f23e78d08df2fa6984df1953287b028ef25ef3279309 Apr 30 13:30:14.225655 systemd-networkd[923]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 13:30:14.188430 ignition[800]: fetch-offline: fetch-offline passed Apr 30 13:30:14.232139 systemd[1]: Reached target network.target - Network. Apr 30 13:30:14.188433 ignition[800]: POST message to Packet Timeline Apr 30 13:30:14.244890 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 30 13:30:14.188436 ignition[800]: POST Status error: resource requires networking Apr 30 13:30:14.252929 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 30 13:30:14.188474 ignition[800]: Ignition finished successfully Apr 30 13:30:14.254493 systemd-networkd[923]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 13:30:14.269395 ignition[937]: Ignition 2.20.0 Apr 30 13:30:14.269405 ignition[937]: Stage: kargs Apr 30 13:30:14.269620 ignition[937]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:30:14.474805 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Apr 30 13:30:14.468277 systemd-networkd[923]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 13:30:14.269635 ignition[937]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:30:14.270893 ignition[937]: kargs: kargs passed Apr 30 13:30:14.270899 ignition[937]: POST message to Packet Timeline Apr 30 13:30:14.270923 ignition[937]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:30:14.271836 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39486->[::1]:53: read: connection refused Apr 30 13:30:14.472789 ignition[937]: GET https://metadata.packet.net/metadata: attempt #2 Apr 30 13:30:14.473353 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50858->[::1]:53: read: connection refused Apr 30 13:30:14.746754 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Apr 30 13:30:14.748013 systemd-networkd[923]: eno1: Link UP Apr 30 13:30:14.748129 systemd-networkd[923]: eno2: Link UP Apr 30 13:30:14.748235 systemd-networkd[923]: enp2s0f0np0: Link UP Apr 30 13:30:14.748362 systemd-networkd[923]: enp2s0f0np0: Gained carrier Apr 30 13:30:14.758938 systemd-networkd[923]: enp2s0f1np1: Link UP Apr 30 13:30:14.796921 systemd-networkd[923]: enp2s0f0np0: DHCPv4 address 147.75.202.179/31, gateway 147.75.202.178 acquired from 145.40.83.140 Apr 30 13:30:14.873610 ignition[937]: GET https://metadata.packet.net/metadata: attempt #3 Apr 30 13:30:14.874645 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55172->[::1]:53: read: connection refused Apr 30 13:30:15.488400 systemd-networkd[923]: enp2s0f1np1: Gained carrier Apr 30 13:30:15.675086 ignition[937]: GET https://metadata.packet.net/metadata: attempt #4 Apr 30 13:30:15.676258 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60044->[::1]:53: read: connection refused Apr 30 13:30:15.808224 systemd-networkd[923]: enp2s0f0np0: Gained IPv6LL Apr 30 13:30:17.278015 ignition[937]: GET https://metadata.packet.net/metadata: attempt #5 Apr 30 13:30:17.279538 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36044->[::1]:53: read: connection refused Apr 30 13:30:17.344241 systemd-networkd[923]: enp2s0f1np1: Gained IPv6LL Apr 30 13:30:20.482697 ignition[937]: GET https://metadata.packet.net/metadata: attempt #6 Apr 30 13:30:21.339933 ignition[937]: GET result: OK Apr 30 13:30:21.816513 ignition[937]: Ignition finished successfully Apr 30 13:30:21.822227 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 13:30:21.846957 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 30 13:30:21.853041 ignition[951]: Ignition 2.20.0 Apr 30 13:30:21.853045 ignition[951]: Stage: disks Apr 30 13:30:21.853149 ignition[951]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:30:21.853155 ignition[951]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:30:21.853659 ignition[951]: disks: disks passed Apr 30 13:30:21.853662 ignition[951]: POST message to Packet Timeline Apr 30 13:30:21.853674 ignition[951]: GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:30:22.833082 ignition[951]: GET result: OK Apr 30 13:30:23.675379 ignition[951]: Ignition finished successfully Apr 30 13:30:23.678621 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 13:30:23.694063 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 13:30:23.712055 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 13:30:23.733000 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 13:30:23.754071 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 13:30:23.774077 systemd[1]: Reached target basic.target - Basic System. Apr 30 13:30:23.803983 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 13:30:23.839243 systemd-fsck[969]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 13:30:23.849181 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 13:30:23.862186 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 13:30:23.941766 kernel: EXT4-fs (sda9): mounted filesystem 59d16236-967d-47d1-a9bd-4b055a17ab77 r/w with ordered data mode. Quota mode: none. Apr 30 13:30:23.941988 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 13:30:23.950120 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 13:30:23.967886 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 13:30:23.992904 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 13:30:24.038639 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (978) Apr 30 13:30:24.038654 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:30:24.038666 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:30:24.038674 kernel: BTRFS info (device sda6): using free space tree Apr 30 13:30:24.001475 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 30 13:30:24.072922 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 13:30:24.072936 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 13:30:24.039365 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Apr 30 13:30:24.083808 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 13:30:24.128946 coreos-metadata[980]: Apr 30 13:30:24.114 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 13:30:24.083829 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 13:30:24.159829 coreos-metadata[984]: Apr 30 13:30:24.114 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 13:30:24.093928 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 13:30:24.110918 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 13:30:24.198842 initrd-setup-root[1010]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 13:30:24.153965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 13:30:24.218844 initrd-setup-root[1017]: cut: /sysroot/etc/group: No such file or directory Apr 30 13:30:24.228818 initrd-setup-root[1024]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 13:30:24.238963 initrd-setup-root[1031]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 13:30:24.234960 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 13:30:24.267943 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 13:30:24.293952 kernel: BTRFS info (device sda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:30:24.269614 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 13:30:24.302429 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 13:30:24.329516 ignition[1098]: INFO : Ignition 2.20.0 Apr 30 13:30:24.329516 ignition[1098]: INFO : Stage: mount Apr 30 13:30:24.343902 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 13:30:24.343902 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:30:24.343902 ignition[1098]: INFO : mount: mount passed Apr 30 13:30:24.343902 ignition[1098]: INFO : POST message to Packet Timeline Apr 30 13:30:24.343902 ignition[1098]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:30:24.340111 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 13:30:25.055605 coreos-metadata[980]: Apr 30 13:30:25.055 INFO Fetch successful Apr 30 13:30:25.063972 coreos-metadata[984]: Apr 30 13:30:25.057 INFO Fetch successful Apr 30 13:30:25.135200 coreos-metadata[980]: Apr 30 13:30:25.135 INFO wrote hostname ci-4230.1.1-a-aaf56335e8 to /sysroot/etc/hostname Apr 30 13:30:25.136630 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 13:30:25.159065 systemd[1]: flatcar-static-network.service: Deactivated successfully. Apr 30 13:30:25.159109 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Apr 30 13:30:25.703199 ignition[1098]: INFO : GET result: OK Apr 30 13:30:26.057422 ignition[1098]: INFO : Ignition finished successfully Apr 30 13:30:26.058972 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 13:30:26.087002 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 13:30:26.098216 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 13:30:26.144747 kernel: BTRFS: device label OEM devid 1 transid 19 /dev/sda6 scanned by mount (1121) Apr 30 13:30:26.162273 kernel: BTRFS info (device sda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:30:26.162289 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:30:26.168179 kernel: BTRFS info (device sda6): using free space tree Apr 30 13:30:26.183280 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 30 13:30:26.183302 kernel: BTRFS info (device sda6): auto enabling async discard Apr 30 13:30:26.185837 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 13:30:26.211587 ignition[1138]: INFO : Ignition 2.20.0 Apr 30 13:30:26.211587 ignition[1138]: INFO : Stage: files Apr 30 13:30:26.227941 ignition[1138]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 13:30:26.227941 ignition[1138]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:30:26.227941 ignition[1138]: DEBUG : files: compiled without relabeling support, skipping Apr 30 13:30:26.227941 ignition[1138]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 13:30:26.227941 ignition[1138]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 13:30:26.227941 ignition[1138]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 13:30:26.227941 ignition[1138]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 13:30:26.227941 ignition[1138]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 13:30:26.227941 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Apr 30 13:30:26.227941 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Apr 30 13:30:26.215601 unknown[1138]: wrote ssh authorized keys file for user: core Apr 30 13:30:26.366906 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 30 13:30:26.508333 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Apr 30 13:30:26.524919 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 13:30:26.524919 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 30 13:30:27.156568 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 13:30:27.207579 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 13:30:27.207579 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 
13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 13:30:27.239027 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Apr 30 13:30:27.615697 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 30 13:30:27.784825 ignition[1138]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Apr 30 13:30:27.784825 ignition[1138]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 30 13:30:27.814014 ignition[1138]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 13:30:27.814014 ignition[1138]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 13:30:27.814014 ignition[1138]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 30 13:30:27.814014 ignition[1138]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Apr 30 13:30:27.814014 ignition[1138]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 13:30:27.814014 ignition[1138]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 13:30:27.814014 ignition[1138]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 13:30:27.814014 ignition[1138]: INFO : files: files passed Apr 30 13:30:27.814014 ignition[1138]: INFO : POST message to Packet Timeline Apr 30 13:30:27.814014 ignition[1138]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:30:28.776508 ignition[1138]: INFO : GET result: OK Apr 30 13:30:29.237600 ignition[1138]: INFO : Ignition finished successfully Apr 30 13:30:29.240658 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 13:30:29.270952 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 13:30:29.271388 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 13:30:29.289255 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 13:30:29.289317 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 13:30:29.325922 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Apr 30 13:30:29.341318 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 13:30:29.381897 initrd-setup-root-after-ignition[1177]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 13:30:29.381897 initrd-setup-root-after-ignition[1177]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 13:30:29.377884 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 13:30:29.431989 initrd-setup-root-after-ignition[1182]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 13:30:29.452094 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 13:30:29.452145 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 13:30:29.471107 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 13:30:29.492915 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 13:30:29.514087 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 13:30:29.530188 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 13:30:29.608403 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 13:30:29.634038 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 13:30:29.639394 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 13:30:29.664951 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 13:30:29.687241 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 13:30:29.705393 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 13:30:29.705836 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 13:30:29.747225 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 13:30:29.757356 systemd[1]: Stopped target basic.target - Basic System. Apr 30 13:30:29.776350 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 13:30:29.795345 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 13:30:29.816330 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 13:30:29.837345 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 13:30:29.857355 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 13:30:29.878526 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 13:30:29.899376 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 13:30:29.920346 systemd[1]: Stopped target swap.target - Swaps. Apr 30 13:30:29.938241 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 13:30:29.938661 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 13:30:29.975153 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 13:30:29.985366 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 13:30:30.006217 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 13:30:30.006690 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 13:30:30.030230 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Apr 30 13:30:30.030642 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 13:30:30.062347 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 13:30:30.062836 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 13:30:30.082539 systemd[1]: Stopped target paths.target - Path Units. Apr 30 13:30:30.101212 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 13:30:30.101659 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 13:30:30.122334 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 13:30:30.140349 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 13:30:30.160345 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 13:30:30.160648 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 13:30:30.180289 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 13:30:30.180564 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 13:30:30.204605 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 13:30:30.311122 ignition[1202]: INFO : Ignition 2.20.0 Apr 30 13:30:30.311122 ignition[1202]: INFO : Stage: umount Apr 30 13:30:30.311122 ignition[1202]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 13:30:30.311122 ignition[1202]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Apr 30 13:30:30.311122 ignition[1202]: INFO : umount: umount passed Apr 30 13:30:30.311122 ignition[1202]: INFO : POST message to Packet Timeline Apr 30 13:30:30.311122 ignition[1202]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Apr 30 13:30:30.205060 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 13:30:30.226442 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 13:30:30.226854 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 13:30:30.244443 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 30 13:30:30.244872 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 30 13:30:30.276910 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 13:30:30.291452 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 13:30:30.302161 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 13:30:30.302586 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 13:30:30.329919 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 13:30:30.330002 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 13:30:30.383425 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 13:30:30.384635 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 13:30:30.384788 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 13:30:30.403429 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 13:30:30.403666 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 13:30:31.707034 ignition[1202]: INFO : GET result: OK Apr 30 13:30:32.067730 ignition[1202]: INFO : Ignition finished successfully Apr 30 13:30:32.068967 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 13:30:32.069103 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Apr 30 13:30:32.086889 systemd[1]: Stopped target network.target - Network. Apr 30 13:30:32.101981 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 13:30:32.102180 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 13:30:32.121057 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 13:30:32.121196 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 13:30:32.139155 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 13:30:32.139323 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 13:30:32.158151 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 13:30:32.158329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 13:30:32.177263 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 13:30:32.177446 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 13:30:32.197466 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 13:30:32.215176 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 13:30:32.234832 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 13:30:32.235112 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 13:30:32.257686 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 30 13:30:32.257862 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 13:30:32.257907 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 13:30:32.281721 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 30 13:30:32.282233 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 13:30:32.282272 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 13:30:32.302875 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 13:30:32.311025 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 13:30:32.311136 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 13:30:32.339093 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 13:30:32.339249 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:30:32.360442 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 13:30:32.360607 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 13:30:32.378201 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 13:30:32.378375 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 13:30:32.400409 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 13:30:32.425605 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 30 13:30:32.425835 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 30 13:30:32.426704 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 13:30:32.426776 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 13:30:32.448239 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 13:30:32.448282 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Apr 30 13:30:32.452020 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 13:30:32.452049 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 13:30:32.471977 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 13:30:32.472038 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 13:30:32.506963 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 13:30:32.828875 systemd-journald[269]: Received SIGTERM from PID 1 (systemd). Apr 30 13:30:32.507018 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 13:30:32.544882 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 13:30:32.545026 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:30:32.585831 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 13:30:32.603856 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 13:30:32.603894 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 13:30:32.634020 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 30 13:30:32.634095 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 13:30:32.655961 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 13:30:32.656102 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 13:30:32.678010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 13:30:32.678163 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:30:32.704124 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 30 13:30:32.704281 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 30 13:30:32.705462 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 13:30:32.705673 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 13:30:32.724585 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 13:30:32.724857 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 13:30:32.742828 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 13:30:32.775943 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 13:30:32.788462 systemd[1]: Switching root. Apr 30 13:30:32.956001 systemd-journald[269]: Journal stopped Apr 30 13:30:34.643354 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 13:30:34.643369 kernel: SELinux: policy capability open_perms=1 Apr 30 13:30:34.643377 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 13:30:34.643383 kernel: SELinux: policy capability always_check_network=0 Apr 30 13:30:34.643391 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 13:30:34.643397 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 13:30:34.643403 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 13:30:34.643409 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 13:30:34.643415 kernel: audit: type=1403 audit(1746019833.055:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 13:30:34.643423 systemd[1]: Successfully loaded SELinux policy in 75.935ms. 
Apr 30 13:30:34.643431 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.969ms. Apr 30 13:30:34.643439 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 13:30:34.643446 systemd[1]: Detected architecture x86-64. Apr 30 13:30:34.643452 systemd[1]: Detected first boot. Apr 30 13:30:34.643459 systemd[1]: Hostname set to . Apr 30 13:30:34.643468 systemd[1]: Initializing machine ID from random generator. Apr 30 13:30:34.643475 zram_generator::config[1254]: No configuration found. Apr 30 13:30:34.643483 systemd[1]: Populated /etc with preset unit settings. Apr 30 13:30:34.643490 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 30 13:30:34.643497 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 13:30:34.643505 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 13:30:34.643512 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 13:30:34.643521 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 13:30:34.643528 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 13:30:34.643535 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 13:30:34.643542 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 13:30:34.643549 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 13:30:34.643556 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 13:30:34.643563 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 13:30:34.643572 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 13:30:34.643579 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 13:30:34.643586 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 13:30:34.643593 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 13:30:34.643600 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 13:30:34.643607 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 13:30:34.643614 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 13:30:34.643622 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Apr 30 13:30:34.643630 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 13:30:34.643637 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 13:30:34.643644 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 30 13:30:34.643653 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 13:30:34.643661 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 13:30:34.643668 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 30 13:30:34.643675 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 13:30:34.643682 systemd[1]: Reached target slices.target - Slice Units. Apr 30 13:30:34.643691 systemd[1]: Reached target swap.target - Swaps. Apr 30 13:30:34.643698 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 13:30:34.643705 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 13:30:34.643726 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 30 13:30:34.643734 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 13:30:34.643743 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 13:30:34.643751 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 13:30:34.643758 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 13:30:34.643766 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 13:30:34.643773 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 13:30:34.643780 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 13:30:34.643788 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:30:34.643795 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 13:30:34.643804 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 13:30:34.643811 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 13:30:34.643819 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 13:30:34.643826 systemd[1]: Reached target machines.target - Containers. Apr 30 13:30:34.643834 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 13:30:34.643841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 13:30:34.643848 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 13:30:34.643856 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 13:30:34.643864 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 13:30:34.643872 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 13:30:34.643881 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 13:30:34.643888 kernel: ACPI: bus type drm_connector registered Apr 30 13:30:34.643895 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 13:30:34.643902 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 13:30:34.643909 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 13:30:34.643917 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 30 13:30:34.643925 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 13:30:34.643933 kernel: fuse: init (API version 7.39) Apr 30 13:30:34.643939 kernel: loop: module loaded Apr 30 13:30:34.643946 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Apr 30 13:30:34.643953 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 13:30:34.643961 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 13:30:34.643969 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 13:30:34.643985 systemd-journald[1357]: Collecting audit messages is disabled. Apr 30 13:30:34.644003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 13:30:34.644011 systemd-journald[1357]: Journal started Apr 30 13:30:34.644028 systemd-journald[1357]: Runtime Journal (/run/log/journal/d085be87a94e4bed922b19b52d588696) is 8M, max 639.9M, 631.9M free. Apr 30 13:30:33.490338 systemd[1]: Queued start job for default target multi-user.target. Apr 30 13:30:33.501567 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 30 13:30:33.501796 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 13:30:34.678771 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 13:30:34.699775 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 13:30:34.720751 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 30 13:30:34.741717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 13:30:34.762894 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 13:30:34.762921 systemd[1]: Stopped verity-setup.service. Apr 30 13:30:34.787753 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:30:34.795754 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 13:30:34.805180 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 13:30:34.815008 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 13:30:34.824982 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 13:30:34.834970 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 13:30:34.845972 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 13:30:34.855973 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 13:30:34.866077 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 13:30:34.877077 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 13:30:34.888109 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 13:30:34.888277 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 13:30:34.899231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 13:30:34.899458 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 13:30:34.912582 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 13:30:34.913030 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 13:30:34.924593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 13:30:34.925035 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 13:30:34.937594 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Apr 30 13:30:34.938041 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 13:30:34.949592 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 13:30:34.950032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 13:30:34.961681 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 13:30:34.972646 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 13:30:34.984636 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 13:30:34.997656 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 30 13:30:35.010623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 13:30:35.045924 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 13:30:35.077240 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 13:30:35.090605 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 13:30:35.100999 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 13:30:35.101093 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 13:30:35.113795 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 30 13:30:35.136073 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 13:30:35.138694 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 13:30:35.157960 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 13:30:35.159114 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 13:30:35.170347 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 13:30:35.181846 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 13:30:35.182475 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 13:30:35.185695 systemd-journald[1357]: Time spent on flushing to /var/log/journal/d085be87a94e4bed922b19b52d588696 is 12.641ms for 1384 entries. Apr 30 13:30:35.185695 systemd-journald[1357]: System Journal (/var/log/journal/d085be87a94e4bed922b19b52d588696) is 8M, max 195.6M, 187.6M free. Apr 30 13:30:35.219215 systemd-journald[1357]: Received client request to flush runtime journal. Apr 30 13:30:35.199875 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 13:30:35.212200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:30:35.222522 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 13:30:35.235486 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 13:30:35.247783 kernel: loop0: detected capacity change from 0 to 138176 Apr 30 13:30:35.252663 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 13:30:35.265669 systemd-tmpfiles[1397]: ACLs are not supported, ignoring. 
Apr 30 13:30:35.265684 systemd-tmpfiles[1397]: ACLs are not supported, ignoring. Apr 30 13:30:35.266403 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 13:30:35.274729 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 13:30:35.282923 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 13:30:35.293954 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 13:30:35.305050 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 13:30:35.315985 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 13:30:35.326936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:30:35.338755 kernel: loop1: detected capacity change from 0 to 147912 Apr 30 13:30:35.342023 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 13:30:35.355870 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 13:30:35.383005 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 30 13:30:35.394552 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 13:30:35.405437 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 13:30:35.406093 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 30 13:30:35.414749 kernel: loop2: detected capacity change from 0 to 8 Apr 30 13:30:35.423998 udevadm[1398]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 13:30:35.430211 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 13:30:35.451894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 13:30:35.459779 kernel: loop3: detected capacity change from 0 to 218376 Apr 30 13:30:35.464726 systemd-tmpfiles[1417]: ACLs are not supported, ignoring. Apr 30 13:30:35.464736 systemd-tmpfiles[1417]: ACLs are not supported, ignoring. Apr 30 13:30:35.470567 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 13:30:35.512750 kernel: loop4: detected capacity change from 0 to 138176 Apr 30 13:30:35.522671 ldconfig[1388]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 13:30:35.525221 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 13:30:35.535777 kernel: loop5: detected capacity change from 0 to 147912 Apr 30 13:30:35.553723 kernel: loop6: detected capacity change from 0 to 8 Apr 30 13:30:35.561756 kernel: loop7: detected capacity change from 0 to 218376 Apr 30 13:30:35.604279 (sd-merge)[1421]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Apr 30 13:30:35.604546 (sd-merge)[1421]: Merged extensions into '/usr'. Apr 30 13:30:35.607120 systemd[1]: Reload requested from client PID 1394 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 13:30:35.607127 systemd[1]: Reloading... Apr 30 13:30:35.640799 zram_generator::config[1448]: No configuration found. Apr 30 13:30:35.713165 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 30 13:30:35.765319 systemd[1]: Reloading finished in 157 ms. Apr 30 13:30:35.781928 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 13:30:35.793716 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 13:30:35.815563 systemd[1]: Starting ensure-sysext.service... Apr 30 13:30:35.823586 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 13:30:35.835769 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 13:30:35.847252 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 13:30:35.847467 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 13:30:35.848129 systemd-tmpfiles[1506]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 13:30:35.848350 systemd-tmpfiles[1506]: ACLs are not supported, ignoring. Apr 30 13:30:35.848402 systemd-tmpfiles[1506]: ACLs are not supported, ignoring. Apr 30 13:30:35.850875 systemd-tmpfiles[1506]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 13:30:35.850879 systemd-tmpfiles[1506]: Skipping /boot Apr 30 13:30:35.852122 systemd[1]: Reload requested from client PID 1505 ('systemctl') (unit ensure-sysext.service)... Apr 30 13:30:35.852145 systemd[1]: Reloading... Apr 30 13:30:35.856540 systemd-tmpfiles[1506]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 13:30:35.856544 systemd-tmpfiles[1506]: Skipping /boot Apr 30 13:30:35.863506 systemd-udevd[1507]: Using default interface naming scheme 'v255'. Apr 30 13:30:35.881726 zram_generator::config[1536]: No configuration found. Apr 30 13:30:35.911723 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1609) Apr 30 13:30:35.911786 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Apr 30 13:30:35.924832 kernel: ACPI: button: Sleep Button [SLPB] Apr 30 13:30:35.924884 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 13:30:35.936721 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 13:30:35.951957 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Apr 30 13:30:35.967306 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Apr 30 13:30:35.967455 kernel: ACPI: button: Power Button [PWRF] Apr 30 13:30:35.967475 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Apr 30 13:30:35.970768 kernel: IPMI message handler: version 39.2 Apr 30 13:30:35.980831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 30 13:30:36.020726 kernel: ipmi device interface Apr 30 13:30:36.021721 kernel: iTCO_vendor_support: vendor-support=0 Apr 30 13:30:36.021753 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Apr 30 13:30:36.040253 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Apr 30 13:30:36.062205 kernel: ipmi_si: IPMI System Interface driver Apr 30 13:30:36.062238 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Apr 30 13:30:36.076179 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Apr 30 13:30:36.076194 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Apr 30 13:30:36.076207 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Apr 30 13:30:36.108951 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Apr 30 13:30:36.109042 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Apr 30 13:30:36.109113 kernel: ipmi_si: Adding ACPI-specified kcs state machine Apr 30 13:30:36.109124 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Apr 30 13:30:36.071342 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Apr 30 13:30:36.071486 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Apr 30 13:30:36.125212 systemd[1]: Reloading finished in 272 ms. Apr 30 13:30:36.135718 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Apr 30 13:30:36.152848 kernel: intel_rapl_common: Found RAPL domain package Apr 30 13:30:36.152890 kernel: intel_rapl_common: Found RAPL domain core Apr 30 13:30:36.158182 kernel: intel_rapl_common: Found RAPL domain dram Apr 30 13:30:36.162681 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 13:30:36.183338 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 13:30:36.192760 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Apr 30 13:30:36.213822 systemd[1]: Finished ensure-sysext.service. Apr 30 13:30:36.233721 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Apr 30 13:30:36.240635 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Apr 30 13:30:36.249806 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:30:36.272872 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 13:30:36.281591 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 13:30:36.292734 augenrules[1711]: No rules Apr 30 13:30:36.292868 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 13:30:36.308215 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 13:30:36.318365 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 13:30:36.327719 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Apr 30 13:30:36.335783 kernel: ipmi_ssif: IPMI SSIF Interface driver Apr 30 13:30:36.339390 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 13:30:36.350386 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Apr 30 13:30:36.359881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 13:30:36.360408 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 13:30:36.370808 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 13:30:36.371409 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 13:30:36.382685 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 13:30:36.383641 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 13:30:36.402700 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 13:30:36.412342 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 13:30:36.423361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:30:36.432823 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:30:36.433415 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 13:30:36.445937 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 13:30:36.446039 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 13:30:36.446285 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 13:30:36.446420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 13:30:36.446502 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 13:30:36.446640 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 13:30:36.446723 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 13:30:36.446856 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 13:30:36.446934 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 13:30:36.447064 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 13:30:36.447141 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 13:30:36.447282 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 13:30:36.447500 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 13:30:36.452329 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 13:30:36.453359 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 13:30:36.453392 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 13:30:36.453426 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 13:30:36.454016 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 13:30:36.454883 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Apr 30 13:30:36.454909 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 13:30:36.460192 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 13:30:36.462217 lvm[1741]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 13:30:36.477765 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 13:30:36.511919 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 13:30:36.524380 systemd-resolved[1725]: Positive Trust Anchors: Apr 30 13:30:36.524386 systemd-resolved[1725]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 13:30:36.524411 systemd-resolved[1725]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 13:30:36.528008 systemd-networkd[1724]: lo: Link UP Apr 30 13:30:36.528012 systemd-resolved[1725]: Using system hostname 'ci-4230.1.1-a-aaf56335e8'. Apr 30 13:30:36.528012 systemd-networkd[1724]: lo: Gained carrier Apr 30 13:30:36.530595 systemd-networkd[1724]: bond0: netdev ready Apr 30 13:30:36.531577 systemd-networkd[1724]: Enumeration completed Apr 30 13:30:36.538384 systemd-networkd[1724]: enp2s0f0np0: Configuring with /etc/systemd/network/10-04:3f:72:d9:a2:ec.network. Apr 30 13:30:36.576913 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 13:30:36.588968 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 13:30:36.598777 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 13:30:36.608904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:30:36.621672 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 13:30:36.631751 systemd[1]: Reached target network.target - Network. Apr 30 13:30:36.640748 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 13:30:36.652750 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 13:30:36.662799 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 13:30:36.673759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 13:30:36.684751 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 13:30:36.696743 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 13:30:36.696797 systemd[1]: Reached target paths.target - Path Units. Apr 30 13:30:36.704746 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 13:30:36.713826 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Apr 30 13:30:36.724792 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 13:30:36.735843 systemd[1]: Reached target timers.target - Timer Units. Apr 30 13:30:36.744453 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 13:30:36.755467 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 13:30:36.764689 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 30 13:30:36.790045 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 13:30:36.799952 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 30 13:30:36.821845 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 13:30:36.823975 lvm[1764]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 13:30:36.834420 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 30 13:30:36.846350 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 13:30:36.857148 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 13:30:36.867966 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 13:30:36.878788 systemd[1]: Reached target basic.target - Basic System. Apr 30 13:30:36.887783 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 13:30:36.887800 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 13:30:36.888380 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 13:30:36.899459 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 13:30:36.910383 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 13:30:36.920302 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 13:30:36.923336 coreos-metadata[1768]: Apr 30 13:30:36.923 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 13:30:36.924217 coreos-metadata[1768]: Apr 30 13:30:36.924 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Apr 30 13:30:36.930891 dbus-daemon[1769]: [system] SELinux support is enabled Apr 30 13:30:36.931370 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 13:30:36.933086 jq[1772]: false Apr 30 13:30:36.941836 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 13:30:36.942434 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Apr 30 13:30:36.950328 extend-filesystems[1774]: Found loop4 Apr 30 13:30:36.952882 extend-filesystems[1774]: Found loop5 Apr 30 13:30:36.952882 extend-filesystems[1774]: Found loop6 Apr 30 13:30:36.952882 extend-filesystems[1774]: Found loop7 Apr 30 13:30:36.952882 extend-filesystems[1774]: Found sda Apr 30 13:30:36.952882 extend-filesystems[1774]: Found sda1 Apr 30 13:30:36.952882 extend-filesystems[1774]: Found sda2 Apr 30 13:30:36.952882 extend-filesystems[1774]: Found sda3 Apr 30 13:30:36.952882 extend-filesystems[1774]: Found usr Apr 30 13:30:36.952882 extend-filesystems[1774]: Found sda4 Apr 30 13:30:36.952882 extend-filesystems[1774]: Found sda6 Apr 30 13:30:36.952882 extend-filesystems[1774]: Found sda7 Apr 30 13:30:36.952882 extend-filesystems[1774]: Found sda9 Apr 30 13:30:36.952882 extend-filesystems[1774]: Checking size of /dev/sda9 Apr 30 13:30:37.067937 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Apr 30 13:30:37.067956 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1617) Apr 30 13:30:37.067995 extend-filesystems[1774]: Resized partition /dev/sda9 Apr 30 13:30:36.953406 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 13:30:37.090930 extend-filesystems[1782]: resize2fs 1.47.1 (20-May-2024) Apr 30 13:30:36.994509 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 13:30:37.002376 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 13:30:37.029234 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 13:30:37.061867 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Apr 30 13:30:37.068177 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 13:30:37.068536 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 13:30:37.096613 systemd-logind[1794]: Watching system buttons on /dev/input/event3 (Power Button) Apr 30 13:30:37.096624 systemd-logind[1794]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 30 13:30:37.096634 systemd-logind[1794]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Apr 30 13:30:37.096753 systemd-logind[1794]: New seat seat0. Apr 30 13:30:37.114815 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 13:30:37.116341 jq[1800]: true Apr 30 13:30:37.122198 update_engine[1799]: I20250430 13:30:37.122139 1799 main.cc:92] Flatcar Update Engine starting Apr 30 13:30:37.122878 update_engine[1799]: I20250430 13:30:37.122834 1799 update_check_scheduler.cc:74] Next update check in 6m40s Apr 30 13:30:37.126139 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 13:30:37.137243 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 13:30:37.148030 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 13:30:37.177911 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 13:30:37.178022 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 13:30:37.178200 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 13:30:37.178300 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 13:30:37.188253 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Apr 30 13:30:37.188356 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 13:30:37.201446 (ntainerd)[1804]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 13:30:37.202933 jq[1803]: true Apr 30 13:30:37.205334 dbus-daemon[1769]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 13:30:37.210259 tar[1802]: linux-amd64/LICENSE Apr 30 13:30:37.210434 tar[1802]: linux-amd64/helm Apr 30 13:30:37.212826 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Apr 30 13:30:37.212940 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Apr 30 13:30:37.221866 systemd[1]: Started update-engine.service - Update Engine. Apr 30 13:30:37.232446 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 13:30:37.232556 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 13:30:37.243814 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 13:30:37.243894 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 13:30:37.259469 bash[1832]: Updated "/home/core/.ssh/authorized_keys" Apr 30 13:30:37.277967 sshd_keygen[1797]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 13:30:37.277961 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 13:30:37.291262 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 13:30:37.301046 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 13:30:37.306150 locksmithd[1834]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 13:30:37.326962 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 13:30:37.336068 systemd[1]: Starting sshkeys.service... Apr 30 13:30:37.343173 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 13:30:37.343277 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 13:30:37.355558 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 13:30:37.368750 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Apr 30 13:30:37.381721 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Apr 30 13:30:37.384565 containerd[1804]: time="2025-04-30T13:30:37.384524140Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Apr 30 13:30:37.386137 systemd-networkd[1724]: enp2s0f1np1: Configuring with /etc/systemd/network/10-04:3f:72:d9:a2:ed.network. Apr 30 13:30:37.388149 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 13:30:37.396935 containerd[1804]: time="2025-04-30T13:30:37.396884919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 13:30:37.397691 containerd[1804]: time="2025-04-30T13:30:37.397646680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:30:37.397691 containerd[1804]: time="2025-04-30T13:30:37.397662791Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 13:30:37.397691 containerd[1804]: time="2025-04-30T13:30:37.397672251Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 13:30:37.397773 containerd[1804]: time="2025-04-30T13:30:37.397761758Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 13:30:37.397773 containerd[1804]: time="2025-04-30T13:30:37.397772083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 13:30:37.397813 containerd[1804]: time="2025-04-30T13:30:37.397805183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:30:37.397832 containerd[1804]: time="2025-04-30T13:30:37.397813746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 13:30:37.397961 containerd[1804]: time="2025-04-30T13:30:37.397923564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:30:37.397961 containerd[1804]: time="2025-04-30T13:30:37.397932311Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 13:30:37.397961 containerd[1804]: time="2025-04-30T13:30:37.397939357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:30:37.397961 containerd[1804]: time="2025-04-30T13:30:37.397944483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 13:30:37.398031 containerd[1804]: time="2025-04-30T13:30:37.397984148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 13:30:37.398131 containerd[1804]: time="2025-04-30T13:30:37.398091975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 13:30:37.398169 containerd[1804]: time="2025-04-30T13:30:37.398158063Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:30:37.398169 containerd[1804]: time="2025-04-30T13:30:37.398166572Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 13:30:37.398214 containerd[1804]: time="2025-04-30T13:30:37.398206568Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 30 13:30:37.398270 containerd[1804]: time="2025-04-30T13:30:37.398233057Z" level=info msg="metadata content store policy set" policy=shared Apr 30 13:30:37.409982 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 13:30:37.410098 containerd[1804]: time="2025-04-30T13:30:37.410076499Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 13:30:37.410125 containerd[1804]: time="2025-04-30T13:30:37.410105474Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 13:30:37.410125 containerd[1804]: time="2025-04-30T13:30:37.410115402Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 13:30:37.410158 containerd[1804]: time="2025-04-30T13:30:37.410124836Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 13:30:37.410158 containerd[1804]: time="2025-04-30T13:30:37.410132136Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 13:30:37.410219 containerd[1804]: time="2025-04-30T13:30:37.410211683Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 13:30:37.410352 containerd[1804]: time="2025-04-30T13:30:37.410344347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 13:30:37.410431 containerd[1804]: time="2025-04-30T13:30:37.410399723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 13:30:37.410431 containerd[1804]: time="2025-04-30T13:30:37.410409812Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 13:30:37.410431 containerd[1804]: time="2025-04-30T13:30:37.410417693Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 13:30:37.410431 containerd[1804]: time="2025-04-30T13:30:37.410427298Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 13:30:37.410489 containerd[1804]: time="2025-04-30T13:30:37.410440465Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 13:30:37.410489 containerd[1804]: time="2025-04-30T13:30:37.410452804Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 13:30:37.410489 containerd[1804]: time="2025-04-30T13:30:37.410461325Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 13:30:37.410489 containerd[1804]: time="2025-04-30T13:30:37.410469497Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 13:30:37.410489 containerd[1804]: time="2025-04-30T13:30:37.410477593Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 13:30:37.410489 containerd[1804]: time="2025-04-30T13:30:37.410484916Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Apr 30 13:30:37.410567 containerd[1804]: time="2025-04-30T13:30:37.410490840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 13:30:37.410567 containerd[1804]: time="2025-04-30T13:30:37.410506253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410567 containerd[1804]: time="2025-04-30T13:30:37.410514487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410567 containerd[1804]: time="2025-04-30T13:30:37.410521320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410567 containerd[1804]: time="2025-04-30T13:30:37.410530091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410567 containerd[1804]: time="2025-04-30T13:30:37.410541741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410567 containerd[1804]: time="2025-04-30T13:30:37.410556048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410567 containerd[1804]: time="2025-04-30T13:30:37.410565462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410672 containerd[1804]: time="2025-04-30T13:30:37.410572839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410672 containerd[1804]: time="2025-04-30T13:30:37.410581382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410672 containerd[1804]: time="2025-04-30T13:30:37.410589367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410672 containerd[1804]: time="2025-04-30T13:30:37.410595691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410672 containerd[1804]: time="2025-04-30T13:30:37.410602067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410672 containerd[1804]: time="2025-04-30T13:30:37.410608535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410672 containerd[1804]: time="2025-04-30T13:30:37.410615478Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 13:30:37.410672 containerd[1804]: time="2025-04-30T13:30:37.410629642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410672 containerd[1804]: time="2025-04-30T13:30:37.410637600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.410672 containerd[1804]: time="2025-04-30T13:30:37.410647414Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 13:30:37.411068 containerd[1804]: time="2025-04-30T13:30:37.411032700Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Apr 30 13:30:37.411068 containerd[1804]: time="2025-04-30T13:30:37.411048908Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 13:30:37.411068 containerd[1804]: time="2025-04-30T13:30:37.411060460Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 13:30:37.411129 containerd[1804]: time="2025-04-30T13:30:37.411073726Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 13:30:37.411129 containerd[1804]: time="2025-04-30T13:30:37.411080894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.411129 containerd[1804]: time="2025-04-30T13:30:37.411088191Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 13:30:37.411129 containerd[1804]: time="2025-04-30T13:30:37.411099446Z" level=info msg="NRI interface is disabled by configuration." Apr 30 13:30:37.411129 containerd[1804]: time="2025-04-30T13:30:37.411108789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 13:30:37.411337 containerd[1804]: time="2025-04-30T13:30:37.411292565Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 13:30:37.411337 containerd[1804]: time="2025-04-30T13:30:37.411320766Z" level=info msg="Connect containerd service" Apr 30 13:30:37.411337 containerd[1804]: time="2025-04-30T13:30:37.411337302Z" level=info msg="using legacy CRI server" Apr 30 13:30:37.411337 containerd[1804]: time="2025-04-30T13:30:37.411344138Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 13:30:37.411481 containerd[1804]: time="2025-04-30T13:30:37.411411765Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 13:30:37.412086 containerd[1804]: time="2025-04-30T13:30:37.412068069Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 13:30:37.412205 containerd[1804]: time="2025-04-30T13:30:37.412163726Z" level=info msg="Start subscribing containerd event" Apr 30 13:30:37.412205 containerd[1804]: time="2025-04-30T13:30:37.412194320Z" level=info msg="Start recovering state" Apr 30 13:30:37.412251 containerd[1804]: time="2025-04-30T13:30:37.412219592Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 13:30:37.412251 containerd[1804]: time="2025-04-30T13:30:37.412229169Z" level=info msg="Start event monitor" Apr 30 13:30:37.412251 containerd[1804]: time="2025-04-30T13:30:37.412242135Z" level=info msg="Start snapshots syncer" Apr 30 13:30:37.412251 containerd[1804]: time="2025-04-30T13:30:37.412245100Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 13:30:37.412308 containerd[1804]: time="2025-04-30T13:30:37.412247514Z" level=info msg="Start cni network conf syncer for default" Apr 30 13:30:37.412308 containerd[1804]: time="2025-04-30T13:30:37.412258570Z" level=info msg="Start streaming server" Apr 30 13:30:37.412308 containerd[1804]: time="2025-04-30T13:30:37.412282104Z" level=info msg="containerd successfully booted in 0.028187s" Apr 30 13:30:37.420560 coreos-metadata[1871]: Apr 30 13:30:37.420 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Apr 30 13:30:37.421224 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 13:30:37.421423 coreos-metadata[1871]: Apr 30 13:30:37.421 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Apr 30 13:30:37.431106 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 13:30:37.455036 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 13:30:37.463634 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Apr 30 13:30:37.472937 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 30 13:30:37.484763 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Apr 30 13:30:37.509391 extend-filesystems[1782]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 30 13:30:37.509391 extend-filesystems[1782]: old_desc_blocks = 1, new_desc_blocks = 56 Apr 30 13:30:37.509391 extend-filesystems[1782]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Apr 30 13:30:37.541149 extend-filesystems[1774]: Resized filesystem in /dev/sda9 Apr 30 13:30:37.541149 extend-filesystems[1774]: Found sdb Apr 30 13:30:37.566136 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Apr 30 13:30:37.510157 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 13:30:37.566824 tar[1802]: linux-amd64/README.md Apr 30 13:30:37.510269 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 13:30:37.576750 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Apr 30 13:30:37.576925 systemd-networkd[1724]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Apr 30 13:30:37.579333 systemd-networkd[1724]: enp2s0f0np0: Link UP Apr 30 13:30:37.579949 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 30 13:30:37.580278 systemd-networkd[1724]: enp2s0f0np0: Gained carrier Apr 30 13:30:37.587818 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Apr 30 13:30:37.600788 systemd-networkd[1724]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-04:3f:72:d9:a2:ec.network. Apr 30 13:30:37.601670 systemd-networkd[1724]: enp2s0f1np1: Link UP Apr 30 13:30:37.602441 systemd-networkd[1724]: enp2s0f1np1: Gained carrier Apr 30 13:30:37.604458 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 13:30:37.615477 systemd-networkd[1724]: bond0: Link UP Apr 30 13:30:37.616677 systemd-networkd[1724]: bond0: Gained carrier Apr 30 13:30:37.617434 systemd-timesyncd[1726]: Network configuration changed, trying to establish connection. Apr 30 13:30:37.619039 systemd-timesyncd[1726]: Network configuration changed, trying to establish connection. Apr 30 13:30:37.620015 systemd-timesyncd[1726]: Network configuration changed, trying to establish connection. Apr 30 13:30:37.620541 systemd-timesyncd[1726]: Network configuration changed, trying to establish connection. Apr 30 13:30:37.693649 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Apr 30 13:30:37.693669 kernel: bond0: active interface up! Apr 30 13:30:37.809753 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Apr 30 13:30:37.924360 coreos-metadata[1768]: Apr 30 13:30:37.924 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Apr 30 13:30:38.421529 coreos-metadata[1871]: Apr 30 13:30:38.421 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Apr 30 13:30:39.167891 systemd-networkd[1724]: bond0: Gained IPv6LL Apr 30 13:30:39.168278 systemd-timesyncd[1726]: Network configuration changed, trying to establish connection. Apr 30 13:30:39.680084 systemd-timesyncd[1726]: Network configuration changed, trying to establish connection. Apr 30 13:30:39.680148 systemd-timesyncd[1726]: Network configuration changed, trying to establish connection. Apr 30 13:30:39.681341 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 13:30:39.693183 systemd[1]: Reached target network-online.target - Network is Online. 
Apr 30 13:30:39.718959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:30:39.729535 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 13:30:39.749664 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 13:30:40.505113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:30:40.516244 (kubelet)[1907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 13:30:40.974378 kubelet[1907]: E0430 13:30:40.974326 1907 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 13:30:40.975787 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 13:30:40.975870 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 13:30:40.976046 systemd[1]: kubelet.service: Consumed 580ms CPU time, 259.5M memory peak. Apr 30 13:30:41.318630 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Apr 30 13:30:41.318839 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Apr 30 13:30:41.573292 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 13:30:41.593970 systemd[1]: Started sshd@0-147.75.202.179:22-147.75.109.163:52526.service - OpenSSH per-connection server daemon (147.75.109.163:52526). Apr 30 13:30:41.655402 sshd[1926]: Accepted publickey for core from 147.75.109.163 port 52526 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:30:41.656337 sshd-session[1926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:30:41.662980 systemd-logind[1794]: New session 1 of user core. Apr 30 13:30:41.663892 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 13:30:41.680933 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 13:30:41.693791 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 13:30:41.706023 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 13:30:41.715932 (systemd)[1930]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 13:30:41.717160 systemd-logind[1794]: New session c1 of user core. Apr 30 13:30:41.817817 systemd[1930]: Queued start job for default target default.target. Apr 30 13:30:41.826300 systemd[1930]: Created slice app.slice - User Application Slice. Apr 30 13:30:41.826333 systemd[1930]: Reached target paths.target - Paths. Apr 30 13:30:41.826355 systemd[1930]: Reached target timers.target - Timers. Apr 30 13:30:41.827007 systemd[1930]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 13:30:41.832608 systemd[1930]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 13:30:41.832635 systemd[1930]: Reached target sockets.target - Sockets. Apr 30 13:30:41.832658 systemd[1930]: Reached target basic.target - Basic System. Apr 30 13:30:41.832682 systemd[1930]: Reached target default.target - Main User Target. Apr 30 13:30:41.832698 systemd[1930]: Startup finished in 112ms. Apr 30 13:30:41.832741 systemd[1]: Started user@500.service - User Manager for UID 500. 
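The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm init or kubeadm join, which has not run on this node at this point, so the unit will keep failing and being restarted until it does. For orientation only, a minimal KubeletConfiguration has this shape (illustrative values, not what kubeadm will eventually write):

    # Illustrative only; kubeadm generates the real /var/lib/kubelet/config.yaml.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF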
Apr 30 13:30:41.844305 coreos-metadata[1768]: Apr 30 13:30:41.844 INFO Fetch successful Apr 30 13:30:41.851920 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 13:30:41.890814 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 13:30:41.902037 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Apr 30 13:30:41.915968 systemd[1]: Started sshd@1-147.75.202.179:22-147.75.109.163:52532.service - OpenSSH per-connection server daemon (147.75.109.163:52532). Apr 30 13:30:41.957405 sshd[1947]: Accepted publickey for core from 147.75.109.163 port 52532 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:30:41.958091 sshd-session[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:30:41.960560 systemd-logind[1794]: New session 2 of user core. Apr 30 13:30:41.969896 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 13:30:42.025554 sshd[1949]: Connection closed by 147.75.109.163 port 52532 Apr 30 13:30:42.025725 sshd-session[1947]: pam_unix(sshd:session): session closed for user core Apr 30 13:30:42.034048 systemd[1]: sshd@1-147.75.202.179:22-147.75.109.163:52532.service: Deactivated successfully. Apr 30 13:30:42.034852 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 13:30:42.035471 systemd-logind[1794]: Session 2 logged out. Waiting for processes to exit. Apr 30 13:30:42.036207 systemd[1]: Started sshd@2-147.75.202.179:22-147.75.109.163:52542.service - OpenSSH per-connection server daemon (147.75.109.163:52542). Apr 30 13:30:42.047685 systemd-logind[1794]: Removed session 2. Apr 30 13:30:42.075986 sshd[1954]: Accepted publickey for core from 147.75.109.163 port 52542 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:30:42.076612 sshd-session[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:30:42.079030 systemd-logind[1794]: New session 3 of user core. Apr 30 13:30:42.093906 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 13:30:42.150626 sshd[1957]: Connection closed by 147.75.109.163 port 52542 Apr 30 13:30:42.150823 sshd-session[1954]: pam_unix(sshd:session): session closed for user core Apr 30 13:30:42.152048 systemd[1]: sshd@2-147.75.202.179:22-147.75.109.163:52542.service: Deactivated successfully. Apr 30 13:30:42.152904 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 13:30:42.153512 systemd-logind[1794]: Session 3 logged out. Waiting for processes to exit. Apr 30 13:30:42.154047 systemd-logind[1794]: Removed session 3. Apr 30 13:30:42.279221 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Apr 30 13:30:42.491360 coreos-metadata[1871]: Apr 30 13:30:42.491 INFO Fetch successful Apr 30 13:30:42.520245 login[1883]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 13:30:42.524479 systemd-logind[1794]: New session 4 of user core. Apr 30 13:30:42.525243 login[1881]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 13:30:42.540917 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 13:30:42.543447 systemd-logind[1794]: New session 5 of user core. Apr 30 13:30:42.544250 systemd[1]: Started session-5.scope - Session 5 of User core. 
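Each accepted login records the public key's SHA256 fingerprint (seTT0A3B...). To confirm which local key that corresponds to, the same fingerprint can be computed with ssh-keygen; the path below is only an example:

    # Prints "SHA256:..." for comparison with the fingerprint sshd logged.
    ssh-keygen -lf ~/.ssh/id_rsa.pub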
Apr 30 13:30:42.552294 unknown[1871]: wrote ssh authorized keys file for user: core Apr 30 13:30:42.567853 update-ssh-keys[1987]: Updated "/home/core/.ssh/authorized_keys" Apr 30 13:30:42.568137 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 13:30:42.568946 systemd[1]: Finished sshkeys.service. Apr 30 13:30:42.570167 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 13:30:42.570416 systemd[1]: Startup finished in 2.808s (kernel) + 23.210s (initrd) + 9.589s (userspace) = 35.608s. Apr 30 13:30:43.988851 systemd-timesyncd[1726]: Network configuration changed, trying to establish connection. Apr 30 13:30:51.105732 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 13:30:51.129070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:30:51.367779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:30:51.373229 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 13:30:51.411976 kubelet[2001]: E0430 13:30:51.411919 2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 13:30:51.414369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 13:30:51.414459 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 13:30:51.414674 systemd[1]: kubelet.service: Consumed 164ms CPU time, 111.9M memory peak. Apr 30 13:30:52.168783 systemd[1]: Started sshd@3-147.75.202.179:22-147.75.109.163:60094.service - OpenSSH per-connection server daemon (147.75.109.163:60094). Apr 30 13:30:52.202926 sshd[2020]: Accepted publickey for core from 147.75.109.163 port 60094 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:30:52.203684 sshd-session[2020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:30:52.206889 systemd-logind[1794]: New session 6 of user core. Apr 30 13:30:52.219971 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 13:30:52.275521 sshd[2022]: Connection closed by 147.75.109.163 port 60094 Apr 30 13:30:52.275673 sshd-session[2020]: pam_unix(sshd:session): session closed for user core Apr 30 13:30:52.287122 systemd[1]: sshd@3-147.75.202.179:22-147.75.109.163:60094.service: Deactivated successfully. Apr 30 13:30:52.287958 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 13:30:52.288724 systemd-logind[1794]: Session 6 logged out. Waiting for processes to exit. Apr 30 13:30:52.289464 systemd[1]: Started sshd@4-147.75.202.179:22-147.75.109.163:60096.service - OpenSSH per-connection server daemon (147.75.109.163:60096). Apr 30 13:30:52.290095 systemd-logind[1794]: Removed session 6. Apr 30 13:30:52.322135 sshd[2027]: Accepted publickey for core from 147.75.109.163 port 60096 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:30:52.322832 sshd-session[2027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:30:52.326003 systemd-logind[1794]: New session 7 of user core. Apr 30 13:30:52.339969 systemd[1]: Started session-7.scope - Session 7 of User core. 
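systemd prints its own boot breakdown here ("2.808s (kernel) + 23.210s (initrd) + 9.589s (userspace)"); the same split, plus the most expensive units, can be pulled after the fact, for example:

    systemd-analyze          # kernel / initrd / userspace totals
    systemd-analyze blame    # units ordered by activation time

The kubelet failure that follows is the same missing-config crash as before; judging by the timestamps, Restart= brings the unit back roughly ten seconds after each exit (hence "restart counter is at 1"), and it will keep looping until the bootstrap writes its configuration.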
Apr 30 13:30:52.394200 sshd[2031]: Connection closed by 147.75.109.163 port 60096 Apr 30 13:30:52.394984 sshd-session[2027]: pam_unix(sshd:session): session closed for user core Apr 30 13:30:52.421366 systemd[1]: sshd@4-147.75.202.179:22-147.75.109.163:60096.service: Deactivated successfully. Apr 30 13:30:52.425625 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 13:30:52.428004 systemd-logind[1794]: Session 7 logged out. Waiting for processes to exit. Apr 30 13:30:52.445590 systemd[1]: Started sshd@5-147.75.202.179:22-147.75.109.163:60102.service - OpenSSH per-connection server daemon (147.75.109.163:60102). Apr 30 13:30:52.449057 systemd-logind[1794]: Removed session 7. Apr 30 13:30:52.507033 sshd[2036]: Accepted publickey for core from 147.75.109.163 port 60102 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:30:52.507996 sshd-session[2036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:30:52.511848 systemd-logind[1794]: New session 8 of user core. Apr 30 13:30:52.520963 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 13:30:52.582938 sshd[2039]: Connection closed by 147.75.109.163 port 60102 Apr 30 13:30:52.583701 sshd-session[2036]: pam_unix(sshd:session): session closed for user core Apr 30 13:30:52.618476 systemd[1]: sshd@5-147.75.202.179:22-147.75.109.163:60102.service: Deactivated successfully. Apr 30 13:30:52.622604 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 13:30:52.624981 systemd-logind[1794]: Session 8 logged out. Waiting for processes to exit. Apr 30 13:30:52.639083 systemd[1]: Started sshd@6-147.75.202.179:22-147.75.109.163:60106.service - OpenSSH per-connection server daemon (147.75.109.163:60106). Apr 30 13:30:52.639684 systemd-logind[1794]: Removed session 8. Apr 30 13:30:52.669147 sshd[2044]: Accepted publickey for core from 147.75.109.163 port 60106 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:30:52.669862 sshd-session[2044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:30:52.673052 systemd-logind[1794]: New session 9 of user core. Apr 30 13:30:52.678952 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 13:30:52.742344 sudo[2048]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 13:30:52.742490 sudo[2048]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:30:52.754352 sudo[2048]: pam_unix(sudo:session): session closed for user root Apr 30 13:30:52.755262 sshd[2047]: Connection closed by 147.75.109.163 port 60106 Apr 30 13:30:52.755445 sshd-session[2044]: pam_unix(sshd:session): session closed for user core Apr 30 13:30:52.765556 systemd[1]: sshd@6-147.75.202.179:22-147.75.109.163:60106.service: Deactivated successfully. Apr 30 13:30:52.766530 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 13:30:52.767133 systemd-logind[1794]: Session 9 logged out. Waiting for processes to exit. Apr 30 13:30:52.768272 systemd[1]: Started sshd@7-147.75.202.179:22-147.75.109.163:60122.service - OpenSSH per-connection server daemon (147.75.109.163:60122). Apr 30 13:30:52.769029 systemd-logind[1794]: Removed session 9. 
Apr 30 13:30:52.807051 sshd[2053]: Accepted publickey for core from 147.75.109.163 port 60122 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:30:52.807989 sshd-session[2053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:30:52.811806 systemd-logind[1794]: New session 10 of user core. Apr 30 13:30:52.837207 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 13:30:52.898632 sudo[2058]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 13:30:52.898783 sudo[2058]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:30:52.900882 sudo[2058]: pam_unix(sudo:session): session closed for user root Apr 30 13:30:52.903533 sudo[2057]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 13:30:52.903681 sudo[2057]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:30:52.927184 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 13:30:52.953071 augenrules[2080]: No rules Apr 30 13:30:52.953413 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 13:30:52.953526 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 13:30:52.954113 sudo[2057]: pam_unix(sudo:session): session closed for user root Apr 30 13:30:52.955010 sshd[2056]: Connection closed by 147.75.109.163 port 60122 Apr 30 13:30:52.955158 sshd-session[2053]: pam_unix(sshd:session): session closed for user core Apr 30 13:30:52.957542 systemd[1]: sshd@7-147.75.202.179:22-147.75.109.163:60122.service: Deactivated successfully. Apr 30 13:30:52.958273 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 13:30:52.958664 systemd-logind[1794]: Session 10 logged out. Waiting for processes to exit. Apr 30 13:30:52.959526 systemd[1]: Started sshd@8-147.75.202.179:22-147.75.109.163:60138.service - OpenSSH per-connection server daemon (147.75.109.163:60138). Apr 30 13:30:52.960072 systemd-logind[1794]: Removed session 10. Apr 30 13:30:52.993176 sshd[2088]: Accepted publickey for core from 147.75.109.163 port 60138 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:30:52.993951 sshd-session[2088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:30:52.997298 systemd-logind[1794]: New session 11 of user core. Apr 30 13:30:53.013155 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 13:30:53.067472 sudo[2092]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 13:30:53.067619 sudo[2092]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:30:53.351078 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 13:30:53.351133 (dockerd)[2118]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 13:30:53.628264 dockerd[2118]: time="2025-04-30T13:30:53.628232340Z" level=info msg="Starting up" Apr 30 13:30:53.694771 dockerd[2118]: time="2025-04-30T13:30:53.694696338Z" level=info msg="Loading containers: start." Apr 30 13:30:53.817725 kernel: Initializing XFRM netlink socket Apr 30 13:30:53.833070 systemd-timesyncd[1726]: Network configuration changed, trying to establish connection. 
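Session 10 removes the default audit rule files and restarts audit-rules.service, after which augenrules correctly reports "No rules". The equivalent manual steps after editing files under /etc/audit/rules.d/ would be:

    augenrules --load    # recompile /etc/audit/rules.d/*.rules into the loaded ruleset
    auditctl -l          # list active rules; prints "No rules" when the set is empty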
Apr 30 13:30:53.878356 systemd-networkd[1724]: docker0: Link UP Apr 30 13:30:53.916673 dockerd[2118]: time="2025-04-30T13:30:53.916627923Z" level=info msg="Loading containers: done." Apr 30 13:30:53.923861 dockerd[2118]: time="2025-04-30T13:30:53.923814479Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 13:30:53.923861 dockerd[2118]: time="2025-04-30T13:30:53.923855754Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Apr 30 13:30:53.923951 dockerd[2118]: time="2025-04-30T13:30:53.923906274Z" level=info msg="Daemon has completed initialization" Apr 30 13:30:53.937871 dockerd[2118]: time="2025-04-30T13:30:53.937807978Z" level=info msg="API listen on /run/docker.sock" Apr 30 13:30:53.937896 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 13:30:53.702528 systemd-resolved[1725]: Clock change detected. Flushing caches. Apr 30 13:30:53.712494 systemd-journald[1357]: Time jumped backwards, rotating. Apr 30 13:30:53.702606 systemd-timesyncd[1726]: Contacted time server [2604:180:f3::26d]:123 (2.flatcar.pool.ntp.org). Apr 30 13:30:53.702640 systemd-timesyncd[1726]: Initial clock synchronization to Wed 2025-04-30 13:30:53.702460 UTC. Apr 30 13:30:54.210162 containerd[1804]: time="2025-04-30T13:30:54.210003166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 13:30:54.807092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1371588717.mount: Deactivated successfully. Apr 30 13:30:56.484988 containerd[1804]: time="2025-04-30T13:30:56.484933185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:30:56.485189 containerd[1804]: time="2025-04-30T13:30:56.485072182Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" Apr 30 13:30:56.485549 containerd[1804]: time="2025-04-30T13:30:56.485509247Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:30:56.487727 containerd[1804]: time="2025-04-30T13:30:56.487712328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:30:56.488846 containerd[1804]: time="2025-04-30T13:30:56.488824443Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.278714475s" Apr 30 13:30:56.488846 containerd[1804]: time="2025-04-30T13:30:56.488840853Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" Apr 30 13:30:56.489134 containerd[1804]: time="2025-04-30T13:30:56.489121301Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Apr 30 13:30:58.398661 containerd[1804]: 
time="2025-04-30T13:30:58.398609250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:30:58.398852 containerd[1804]: time="2025-04-30T13:30:58.398792861Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" Apr 30 13:30:58.399287 containerd[1804]: time="2025-04-30T13:30:58.399242602Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:30:58.400884 containerd[1804]: time="2025-04-30T13:30:58.400844228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:30:58.401491 containerd[1804]: time="2025-04-30T13:30:58.401450430Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 1.912311768s" Apr 30 13:30:58.401491 containerd[1804]: time="2025-04-30T13:30:58.401471496Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" Apr 30 13:30:58.401731 containerd[1804]: time="2025-04-30T13:30:58.401691781Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Apr 30 13:30:59.962678 containerd[1804]: time="2025-04-30T13:30:59.962625768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:30:59.962885 containerd[1804]: time="2025-04-30T13:30:59.962782021Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" Apr 30 13:30:59.963272 containerd[1804]: time="2025-04-30T13:30:59.963261008Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:30:59.965104 containerd[1804]: time="2025-04-30T13:30:59.965063258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:30:59.965599 containerd[1804]: time="2025-04-30T13:30:59.965561289Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.563855273s" Apr 30 13:30:59.965599 containerd[1804]: time="2025-04-30T13:30:59.965574571Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" Apr 30 13:30:59.965864 containerd[1804]: 
time="2025-04-30T13:30:59.965828889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Apr 30 13:31:00.976552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017642935.mount: Deactivated successfully. Apr 30 13:31:00.977325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 13:31:00.996309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:31:01.214747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:31:01.216853 (kubelet)[2406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 13:31:01.236407 kubelet[2406]: E0430 13:31:01.236292 2406 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 13:31:01.237486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 13:31:01.237579 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 13:31:01.237781 systemd[1]: kubelet.service: Consumed 127ms CPU time, 112.1M memory peak. Apr 30 13:31:01.589478 containerd[1804]: time="2025-04-30T13:31:01.589388796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:01.589653 containerd[1804]: time="2025-04-30T13:31:01.589524060Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" Apr 30 13:31:01.589975 containerd[1804]: time="2025-04-30T13:31:01.589935850Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:01.590859 containerd[1804]: time="2025-04-30T13:31:01.590815450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:01.591322 containerd[1804]: time="2025-04-30T13:31:01.591276188Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.625429278s" Apr 30 13:31:01.591322 containerd[1804]: time="2025-04-30T13:31:01.591296015Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" Apr 30 13:31:01.591645 containerd[1804]: time="2025-04-30T13:31:01.591599623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Apr 30 13:31:02.098746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount675754288.mount: Deactivated successfully. 
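The kubelet is still crash-looping on the missing config file, but the runtime side is healthy: containerd keeps pulling the control-plane images into its k8s.io namespace, and the tmpmount units are the short-lived mounts it creates under /var/lib/containerd/tmpmounts while unpacking layers. The cached images can be inspected directly against containerd, for example:

    # Images pulled through the CRI plugin live in containerd's k8s.io namespace.
    ctr --namespace k8s.io images ls | grep registry.k8s.io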
Apr 30 13:31:02.598390 containerd[1804]: time="2025-04-30T13:31:02.598365967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:02.598632 containerd[1804]: time="2025-04-30T13:31:02.598610082Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Apr 30 13:31:02.599049 containerd[1804]: time="2025-04-30T13:31:02.599011785Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:02.601480 containerd[1804]: time="2025-04-30T13:31:02.601438258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:02.602179 containerd[1804]: time="2025-04-30T13:31:02.602136147Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.010521242s" Apr 30 13:31:02.602179 containerd[1804]: time="2025-04-30T13:31:02.602151967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Apr 30 13:31:02.602583 containerd[1804]: time="2025-04-30T13:31:02.602528940Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 13:31:03.043310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074983246.mount: Deactivated successfully. 
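Note that the pause:3.10 image pulled here belongs to the kubeadm image set, while containerd's CRI plugin has its own sandbox_image setting, which is why a pause:3.8 image is fetched again later when the control-plane pod sandboxes are created (the kubelet also flags this with its --pod-infra-container-image warning further down). The effective value can be read from the merged containerd configuration, e.g.:

    containerd config dump | grep sandbox_image
    # expected to show something like: sandbox_image = "registry.k8s.io/pause:3.8"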
Apr 30 13:31:03.044577 containerd[1804]: time="2025-04-30T13:31:03.044529005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:03.044694 containerd[1804]: time="2025-04-30T13:31:03.044673765Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Apr 30 13:31:03.045060 containerd[1804]: time="2025-04-30T13:31:03.045020981Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:03.046263 containerd[1804]: time="2025-04-30T13:31:03.046250333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:03.046804 containerd[1804]: time="2025-04-30T13:31:03.046790897Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 444.232662ms" Apr 30 13:31:03.046835 containerd[1804]: time="2025-04-30T13:31:03.046805793Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 30 13:31:03.047207 containerd[1804]: time="2025-04-30T13:31:03.047149015Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Apr 30 13:31:03.520597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733539608.mount: Deactivated successfully. Apr 30 13:31:04.611680 containerd[1804]: time="2025-04-30T13:31:04.611654683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:04.611891 containerd[1804]: time="2025-04-30T13:31:04.611874030Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Apr 30 13:31:04.612304 containerd[1804]: time="2025-04-30T13:31:04.612292861Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:04.614064 containerd[1804]: time="2025-04-30T13:31:04.614048660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:04.614867 containerd[1804]: time="2025-04-30T13:31:04.614819612Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.567654352s" Apr 30 13:31:04.614867 containerd[1804]: time="2025-04-30T13:31:04.614841120Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Apr 30 13:31:06.597461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 13:31:06.597573 systemd[1]: kubelet.service: Consumed 127ms CPU time, 112.1M memory peak. Apr 30 13:31:06.611352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:31:06.627818 systemd[1]: Reload requested from client PID 2592 ('systemctl') (unit session-11.scope)... Apr 30 13:31:06.627825 systemd[1]: Reloading... Apr 30 13:31:06.677078 zram_generator::config[2638]: No configuration found. Apr 30 13:31:06.745928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 13:31:06.828282 systemd[1]: Reloading finished in 200 ms. Apr 30 13:31:06.866498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:31:06.868266 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:31:06.868556 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 13:31:06.868661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:31:06.868681 systemd[1]: kubelet.service: Consumed 50ms CPU time, 91.8M memory peak. Apr 30 13:31:06.869609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:31:07.082480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:31:07.084501 (kubelet)[2708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 13:31:07.104834 kubelet[2708]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:31:07.104834 kubelet[2708]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 13:31:07.104834 kubelet[2708]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
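This restart has a configuration to load (the bootstrap has evidently written /var/lib/kubelet/config.yaml in the meantime), but the kubelet warns that --container-runtime-endpoint and --volume-plugin-dir are legacy flag spellings that belong in the config file, while --pod-infra-container-image is going away entirely because the sandbox image is taken from CRI. As a sketch, the runtime-endpoint flag maps onto a KubeletConfiguration field:

    # Sketch: the config-file equivalent of --container-runtime-endpoint.
    grep containerRuntimeEndpoint /var/lib/kubelet/config.yaml
    # containerRuntimeEndpoint: unix:///run/containerd/containerd.sock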
Apr 30 13:31:07.105074 kubelet[2708]: I0430 13:31:07.104849 2708 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 13:31:07.329235 kubelet[2708]: I0430 13:31:07.329199 2708 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 13:31:07.329235 kubelet[2708]: I0430 13:31:07.329212 2708 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 13:31:07.329394 kubelet[2708]: I0430 13:31:07.329359 2708 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 13:31:07.349685 kubelet[2708]: E0430 13:31:07.349671 2708 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.202.179:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.202.179:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:31:07.355263 kubelet[2708]: I0430 13:31:07.355226 2708 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 13:31:07.361834 kubelet[2708]: E0430 13:31:07.361777 2708 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 13:31:07.361834 kubelet[2708]: I0430 13:31:07.361796 2708 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 13:31:07.370187 kubelet[2708]: I0430 13:31:07.370150 2708 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 13:31:07.370303 kubelet[2708]: I0430 13:31:07.370258 2708 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 13:31:07.370397 kubelet[2708]: I0430 13:31:07.370274 2708 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-a-aaf56335e8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 13:31:07.370397 kubelet[2708]: I0430 13:31:07.370372 2708 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 13:31:07.370397 kubelet[2708]: I0430 13:31:07.370379 2708 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 13:31:07.370498 kubelet[2708]: I0430 13:31:07.370444 2708 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:31:07.374185 kubelet[2708]: I0430 13:31:07.374149 2708 kubelet.go:446] "Attempting to sync node with API server" Apr 30 13:31:07.374185 kubelet[2708]: I0430 13:31:07.374161 2708 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 13:31:07.374185 kubelet[2708]: I0430 13:31:07.374171 2708 kubelet.go:352] "Adding apiserver pod source" Apr 30 13:31:07.374185 kubelet[2708]: I0430 13:31:07.374177 2708 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 13:31:07.376986 kubelet[2708]: I0430 13:31:07.376959 2708 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 13:31:07.377097 kubelet[2708]: W0430 13:31:07.377020 2708 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.75.202.179:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:31:07.377097 kubelet[2708]: W0430 13:31:07.377037 2708 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://147.75.202.179:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-aaf56335e8&limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:31:07.377097 kubelet[2708]: E0430 13:31:07.377077 2708 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.202.179:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.1.1-a-aaf56335e8&limit=500&resourceVersion=0\": dial tcp 147.75.202.179:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:31:07.377097 kubelet[2708]: E0430 13:31:07.377077 2708 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.202.179:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.202.179:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:31:07.377738 kubelet[2708]: I0430 13:31:07.377728 2708 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 13:31:07.378247 kubelet[2708]: W0430 13:31:07.378237 2708 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 30 13:31:07.379886 kubelet[2708]: I0430 13:31:07.379878 2708 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 13:31:07.379925 kubelet[2708]: I0430 13:31:07.379895 2708 server.go:1287] "Started kubelet" Apr 30 13:31:07.380008 kubelet[2708]: I0430 13:31:07.379975 2708 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 13:31:07.380077 kubelet[2708]: I0430 13:31:07.380042 2708 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 13:31:07.380236 kubelet[2708]: I0430 13:31:07.380226 2708 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 13:31:07.380886 kubelet[2708]: I0430 13:31:07.380849 2708 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 13:31:07.380886 kubelet[2708]: I0430 13:31:07.380860 2708 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 13:31:07.380935 kubelet[2708]: E0430 13:31:07.380915 2708 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-aaf56335e8\" not found" Apr 30 13:31:07.380935 kubelet[2708]: I0430 13:31:07.380923 2708 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 13:31:07.380979 kubelet[2708]: I0430 13:31:07.380950 2708 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 13:31:07.380979 kubelet[2708]: I0430 13:31:07.380968 2708 server.go:490] "Adding debug handlers to kubelet server" Apr 30 13:31:07.381039 kubelet[2708]: I0430 13:31:07.380995 2708 reconciler.go:26] "Reconciler: start to sync state" Apr 30 13:31:07.381116 kubelet[2708]: E0430 13:31:07.381103 2708 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 13:31:07.381160 kubelet[2708]: W0430 13:31:07.381132 2708 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.202.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:31:07.381181 kubelet[2708]: E0430 13:31:07.381162 2708 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.202.179:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.202.179:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:31:07.381205 kubelet[2708]: I0430 13:31:07.381188 2708 factory.go:221] Registration of the systemd container factory successfully Apr 30 13:31:07.381255 kubelet[2708]: I0430 13:31:07.381244 2708 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 13:31:07.381694 kubelet[2708]: I0430 13:31:07.381686 2708 factory.go:221] Registration of the containerd container factory successfully Apr 30 13:31:07.396878 kubelet[2708]: E0430 13:31:07.396850 2708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-aaf56335e8?timeout=10s\": dial tcp 147.75.202.179:6443: connect: connection refused" interval="200ms" Apr 30 13:31:07.399585 kubelet[2708]: E0430 13:31:07.397839 2708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.202.179:6443/api/v1/namespaces/default/events\": dial tcp 147.75.202.179:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-a-aaf56335e8.183b1bd18df3e12d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-a-aaf56335e8,UID:ci-4230.1.1-a-aaf56335e8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-a-aaf56335e8,},FirstTimestamp:2025-04-30 13:31:07.379884333 +0000 UTC m=+0.293760074,LastTimestamp:2025-04-30 13:31:07.379884333 +0000 UTC m=+0.293760074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-a-aaf56335e8,}" Apr 30 13:31:07.403192 kubelet[2708]: I0430 13:31:07.403179 2708 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 13:31:07.403192 kubelet[2708]: I0430 13:31:07.403190 2708 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 13:31:07.403260 kubelet[2708]: I0430 13:31:07.403201 2708 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:31:07.403932 kubelet[2708]: I0430 13:31:07.403911 2708 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Apr 30 13:31:07.404146 kubelet[2708]: I0430 13:31:07.404105 2708 policy_none.go:49] "None policy: Start" Apr 30 13:31:07.404146 kubelet[2708]: I0430 13:31:07.404115 2708 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 13:31:07.404146 kubelet[2708]: I0430 13:31:07.404123 2708 state_mem.go:35] "Initializing new in-memory state store" Apr 30 13:31:07.404630 kubelet[2708]: I0430 13:31:07.404619 2708 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 13:31:07.404658 kubelet[2708]: I0430 13:31:07.404634 2708 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 13:31:07.404658 kubelet[2708]: I0430 13:31:07.404648 2708 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 30 13:31:07.404658 kubelet[2708]: I0430 13:31:07.404654 2708 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 13:31:07.404724 kubelet[2708]: E0430 13:31:07.404685 2708 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 13:31:07.404976 kubelet[2708]: W0430 13:31:07.404954 2708 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.202.179:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:31:07.405031 kubelet[2708]: E0430 13:31:07.404991 2708 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.202.179:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.202.179:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:31:07.407799 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 13:31:07.428169 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 13:31:07.431667 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 13:31:07.445494 kubelet[2708]: I0430 13:31:07.445427 2708 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 13:31:07.445781 kubelet[2708]: I0430 13:31:07.445721 2708 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 13:31:07.445781 kubelet[2708]: I0430 13:31:07.445742 2708 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 13:31:07.446086 kubelet[2708]: I0430 13:31:07.446042 2708 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 13:31:07.447149 kubelet[2708]: E0430 13:31:07.447098 2708 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 30 13:31:07.447273 kubelet[2708]: E0430 13:31:07.447196 2708 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.1.1-a-aaf56335e8\" not found" Apr 30 13:31:07.517347 systemd[1]: Created slice kubepods-burstable-podedd7915d68803e35c0f5be3c2b670712.slice - libcontainer container kubepods-burstable-podedd7915d68803e35c0f5be3c2b670712.slice. 
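With the sync loop running, the kubelet creates a kubepods-burstable-pod<UID>.slice cgroup for each static pod it admits from /etc/kubernetes/manifests (edd7915d... is the kube-apiserver pod whose volumes are attached just below). The repeated connection-refused errors against https://147.75.202.179:6443 all have the same cause: the kubelet is trying to register the node and post events before the kube-apiserver it is about to launch as a static pod is actually listening, and they should stop once that pod is up. Assuming kubeadm's default manifest names, a quick way to see what it is launching and when the API server becomes reachable:

    ls /etc/kubernetes/manifests/
    # typically: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    # Returns "ok" once the kube-apiserver static pod is up and serving on 6443:
    curl -sk https://147.75.202.179:6443/healthz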
Apr 30 13:31:07.538581 kubelet[2708]: E0430 13:31:07.538516 2708 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-a-aaf56335e8\" not found" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.541095 systemd[1]: Created slice kubepods-burstable-podd05bb42627e76c2490110134b4dbccc8.slice - libcontainer container kubepods-burstable-podd05bb42627e76c2490110134b4dbccc8.slice. Apr 30 13:31:07.548098 kubelet[2708]: I0430 13:31:07.548050 2708 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.548364 kubelet[2708]: E0430 13:31:07.548319 2708 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.75.202.179:6443/api/v1/nodes\": dial tcp 147.75.202.179:6443: connect: connection refused" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.561001 kubelet[2708]: E0430 13:31:07.560919 2708 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-a-aaf56335e8\" not found" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.568522 systemd[1]: Created slice kubepods-burstable-pod0b1cef219c8cb458eabd0856a42eaff9.slice - libcontainer container kubepods-burstable-pod0b1cef219c8cb458eabd0856a42eaff9.slice. Apr 30 13:31:07.572458 kubelet[2708]: E0430 13:31:07.572362 2708 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.1.1-a-aaf56335e8\" not found" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.582840 kubelet[2708]: I0430 13:31:07.582673 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b1cef219c8cb458eabd0856a42eaff9-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-a-aaf56335e8\" (UID: \"0b1cef219c8cb458eabd0856a42eaff9\") " pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.582840 kubelet[2708]: I0430 13:31:07.582751 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d05bb42627e76c2490110134b4dbccc8-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" (UID: \"d05bb42627e76c2490110134b4dbccc8\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.582840 kubelet[2708]: I0430 13:31:07.582813 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d05bb42627e76c2490110134b4dbccc8-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" (UID: \"d05bb42627e76c2490110134b4dbccc8\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.583288 kubelet[2708]: I0430 13:31:07.582865 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d05bb42627e76c2490110134b4dbccc8-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" (UID: \"d05bb42627e76c2490110134b4dbccc8\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.583288 kubelet[2708]: I0430 13:31:07.582920 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d05bb42627e76c2490110134b4dbccc8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" (UID: \"d05bb42627e76c2490110134b4dbccc8\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.583288 kubelet[2708]: I0430 13:31:07.583039 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edd7915d68803e35c0f5be3c2b670712-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-a-aaf56335e8\" (UID: \"edd7915d68803e35c0f5be3c2b670712\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.583288 kubelet[2708]: I0430 13:31:07.583135 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edd7915d68803e35c0f5be3c2b670712-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-a-aaf56335e8\" (UID: \"edd7915d68803e35c0f5be3c2b670712\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.583288 kubelet[2708]: I0430 13:31:07.583212 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edd7915d68803e35c0f5be3c2b670712-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-a-aaf56335e8\" (UID: \"edd7915d68803e35c0f5be3c2b670712\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.583669 kubelet[2708]: I0430 13:31:07.583270 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d05bb42627e76c2490110134b4dbccc8-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" (UID: \"d05bb42627e76c2490110134b4dbccc8\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.597803 kubelet[2708]: E0430 13:31:07.597719 2708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-aaf56335e8?timeout=10s\": dial tcp 147.75.202.179:6443: connect: connection refused" interval="400ms" Apr 30 13:31:07.664485 kubelet[2708]: E0430 13:31:07.664268 2708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.202.179:6443/api/v1/namespaces/default/events\": dial tcp 147.75.202.179:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.1.1-a-aaf56335e8.183b1bd18df3e12d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.1.1-a-aaf56335e8,UID:ci-4230.1.1-a-aaf56335e8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.1.1-a-aaf56335e8,},FirstTimestamp:2025-04-30 13:31:07.379884333 +0000 UTC m=+0.293760074,LastTimestamp:2025-04-30 13:31:07.379884333 +0000 UTC m=+0.293760074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.1.1-a-aaf56335e8,}" Apr 30 13:31:07.752204 kubelet[2708]: I0430 13:31:07.752106 2708 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.752957 kubelet[2708]: E0430 13:31:07.752839 2708 kubelet_node_status.go:108] "Unable to register node with API server" err="Post 
\"https://147.75.202.179:6443/api/v1/nodes\": dial tcp 147.75.202.179:6443: connect: connection refused" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:07.841118 containerd[1804]: time="2025-04-30T13:31:07.840864107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-a-aaf56335e8,Uid:edd7915d68803e35c0f5be3c2b670712,Namespace:kube-system,Attempt:0,}" Apr 30 13:31:07.861806 containerd[1804]: time="2025-04-30T13:31:07.861744089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-a-aaf56335e8,Uid:d05bb42627e76c2490110134b4dbccc8,Namespace:kube-system,Attempt:0,}" Apr 30 13:31:07.874403 containerd[1804]: time="2025-04-30T13:31:07.874295006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-a-aaf56335e8,Uid:0b1cef219c8cb458eabd0856a42eaff9,Namespace:kube-system,Attempt:0,}" Apr 30 13:31:07.999203 kubelet[2708]: E0430 13:31:07.999139 2708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.202.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.1.1-a-aaf56335e8?timeout=10s\": dial tcp 147.75.202.179:6443: connect: connection refused" interval="800ms" Apr 30 13:31:08.154913 kubelet[2708]: I0430 13:31:08.154895 2708 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:08.155125 kubelet[2708]: E0430 13:31:08.155111 2708 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://147.75.202.179:6443/api/v1/nodes\": dial tcp 147.75.202.179:6443: connect: connection refused" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:08.289435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3594232318.mount: Deactivated successfully. 
Apr 30 13:31:08.291018 containerd[1804]: time="2025-04-30T13:31:08.290995318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:31:08.292048 containerd[1804]: time="2025-04-30T13:31:08.292004427Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 30 13:31:08.292303 containerd[1804]: time="2025-04-30T13:31:08.292289563Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:31:08.292710 containerd[1804]: time="2025-04-30T13:31:08.292670883Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:31:08.293061 containerd[1804]: time="2025-04-30T13:31:08.293018738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 13:31:08.293355 containerd[1804]: time="2025-04-30T13:31:08.293342669Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:31:08.293481 containerd[1804]: time="2025-04-30T13:31:08.293464246Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 13:31:08.294699 containerd[1804]: time="2025-04-30T13:31:08.294688877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:31:08.295394 containerd[1804]: time="2025-04-30T13:31:08.295352284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 454.228467ms" Apr 30 13:31:08.295968 containerd[1804]: time="2025-04-30T13:31:08.295954778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 434.151313ms" Apr 30 13:31:08.297129 containerd[1804]: time="2025-04-30T13:31:08.297088838Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.613605ms" Apr 30 13:31:08.393195 containerd[1804]: time="2025-04-30T13:31:08.392972422Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:31:08.393195 containerd[1804]: time="2025-04-30T13:31:08.393185824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:31:08.393195 containerd[1804]: time="2025-04-30T13:31:08.393193524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:08.393324 containerd[1804]: time="2025-04-30T13:31:08.393236482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:08.393986 containerd[1804]: time="2025-04-30T13:31:08.393952316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:31:08.394025 containerd[1804]: time="2025-04-30T13:31:08.393980892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:31:08.394025 containerd[1804]: time="2025-04-30T13:31:08.393993838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:08.394226 containerd[1804]: time="2025-04-30T13:31:08.394029548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:31:08.394248 containerd[1804]: time="2025-04-30T13:31:08.394226422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:31:08.394248 containerd[1804]: time="2025-04-30T13:31:08.394234651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:08.394280 containerd[1804]: time="2025-04-30T13:31:08.394244062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:08.394294 containerd[1804]: time="2025-04-30T13:31:08.394275687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:08.416305 systemd[1]: Started cri-containerd-2b452e308d26998ce1149742e7e6e1d6b1376442bdcc6d4d0174d359636f25c9.scope - libcontainer container 2b452e308d26998ce1149742e7e6e1d6b1376442bdcc6d4d0174d359636f25c9. Apr 30 13:31:08.417073 systemd[1]: Started cri-containerd-48d03e826cf8319371e7cab54a95bdd17b51554889f69d4bb2b22f43402bd577.scope - libcontainer container 48d03e826cf8319371e7cab54a95bdd17b51554889f69d4bb2b22f43402bd577. Apr 30 13:31:08.417772 systemd[1]: Started cri-containerd-be5c3b48926cc4f81865850a2f0123346cb172095b8c720a3c01617ce33bc56b.scope - libcontainer container be5c3b48926cc4f81865850a2f0123346cb172095b8c720a3c01617ce33bc56b. 
Apr 30 13:31:08.443836 containerd[1804]: time="2025-04-30T13:31:08.443811986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.1.1-a-aaf56335e8,Uid:d05bb42627e76c2490110134b4dbccc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b452e308d26998ce1149742e7e6e1d6b1376442bdcc6d4d0174d359636f25c9\"" Apr 30 13:31:08.445002 containerd[1804]: time="2025-04-30T13:31:08.444985680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.1.1-a-aaf56335e8,Uid:0b1cef219c8cb458eabd0856a42eaff9,Namespace:kube-system,Attempt:0,} returns sandbox id \"be5c3b48926cc4f81865850a2f0123346cb172095b8c720a3c01617ce33bc56b\"" Apr 30 13:31:08.445539 containerd[1804]: time="2025-04-30T13:31:08.445521727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.1.1-a-aaf56335e8,Uid:edd7915d68803e35c0f5be3c2b670712,Namespace:kube-system,Attempt:0,} returns sandbox id \"48d03e826cf8319371e7cab54a95bdd17b51554889f69d4bb2b22f43402bd577\"" Apr 30 13:31:08.445588 containerd[1804]: time="2025-04-30T13:31:08.445547110Z" level=info msg="CreateContainer within sandbox \"2b452e308d26998ce1149742e7e6e1d6b1376442bdcc6d4d0174d359636f25c9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 13:31:08.445974 containerd[1804]: time="2025-04-30T13:31:08.445957756Z" level=info msg="CreateContainer within sandbox \"be5c3b48926cc4f81865850a2f0123346cb172095b8c720a3c01617ce33bc56b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 13:31:08.446516 containerd[1804]: time="2025-04-30T13:31:08.446500619Z" level=info msg="CreateContainer within sandbox \"48d03e826cf8319371e7cab54a95bdd17b51554889f69d4bb2b22f43402bd577\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 13:31:08.453467 containerd[1804]: time="2025-04-30T13:31:08.453422066Z" level=info msg="CreateContainer within sandbox \"2b452e308d26998ce1149742e7e6e1d6b1376442bdcc6d4d0174d359636f25c9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5d180f685dc52a614b1923ced7475c8a60b4be7083f1a3dcbd168773451005ce\"" Apr 30 13:31:08.453704 containerd[1804]: time="2025-04-30T13:31:08.453691942Z" level=info msg="StartContainer for \"5d180f685dc52a614b1923ced7475c8a60b4be7083f1a3dcbd168773451005ce\"" Apr 30 13:31:08.454152 containerd[1804]: time="2025-04-30T13:31:08.454139560Z" level=info msg="CreateContainer within sandbox \"48d03e826cf8319371e7cab54a95bdd17b51554889f69d4bb2b22f43402bd577\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"292d72684b4e6a29f0a6174648d532b9b980be8b6eb549c5ccc2562eb526131f\"" Apr 30 13:31:08.454311 containerd[1804]: time="2025-04-30T13:31:08.454299770Z" level=info msg="StartContainer for \"292d72684b4e6a29f0a6174648d532b9b980be8b6eb549c5ccc2562eb526131f\"" Apr 30 13:31:08.454554 containerd[1804]: time="2025-04-30T13:31:08.454515854Z" level=info msg="CreateContainer within sandbox \"be5c3b48926cc4f81865850a2f0123346cb172095b8c720a3c01617ce33bc56b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ad6967589eee5bf43acf27702f341699e7a9224e30661f6dd49956dc6071e08a\"" Apr 30 13:31:08.454705 containerd[1804]: time="2025-04-30T13:31:08.454668560Z" level=info msg="StartContainer for \"ad6967589eee5bf43acf27702f341699e7a9224e30661f6dd49956dc6071e08a\"" Apr 30 13:31:08.464085 kubelet[2708]: W0430 13:31:08.464048 2708 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://147.75.202.179:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.202.179:6443: connect: connection refused Apr 30 13:31:08.464147 kubelet[2708]: E0430 13:31:08.464095 2708 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.202.179:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.202.179:6443: connect: connection refused" logger="UnhandledError" Apr 30 13:31:08.481168 systemd[1]: Started cri-containerd-292d72684b4e6a29f0a6174648d532b9b980be8b6eb549c5ccc2562eb526131f.scope - libcontainer container 292d72684b4e6a29f0a6174648d532b9b980be8b6eb549c5ccc2562eb526131f. Apr 30 13:31:08.481829 systemd[1]: Started cri-containerd-5d180f685dc52a614b1923ced7475c8a60b4be7083f1a3dcbd168773451005ce.scope - libcontainer container 5d180f685dc52a614b1923ced7475c8a60b4be7083f1a3dcbd168773451005ce. Apr 30 13:31:08.482418 systemd[1]: Started cri-containerd-ad6967589eee5bf43acf27702f341699e7a9224e30661f6dd49956dc6071e08a.scope - libcontainer container ad6967589eee5bf43acf27702f341699e7a9224e30661f6dd49956dc6071e08a. Apr 30 13:31:08.504917 containerd[1804]: time="2025-04-30T13:31:08.504893108Z" level=info msg="StartContainer for \"5d180f685dc52a614b1923ced7475c8a60b4be7083f1a3dcbd168773451005ce\" returns successfully" Apr 30 13:31:08.505020 containerd[1804]: time="2025-04-30T13:31:08.504949584Z" level=info msg="StartContainer for \"292d72684b4e6a29f0a6174648d532b9b980be8b6eb549c5ccc2562eb526131f\" returns successfully" Apr 30 13:31:08.506255 containerd[1804]: time="2025-04-30T13:31:08.506236409Z" level=info msg="StartContainer for \"ad6967589eee5bf43acf27702f341699e7a9224e30661f6dd49956dc6071e08a\" returns successfully" Apr 30 13:31:08.959695 kubelet[2708]: I0430 13:31:08.959137 2708 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.016167 kubelet[2708]: E0430 13:31:09.016144 2708 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.1.1-a-aaf56335e8\" not found" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.124067 kubelet[2708]: I0430 13:31:09.124048 2708 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.181737 kubelet[2708]: I0430 13:31:09.181679 2708 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.185027 kubelet[2708]: E0430 13:31:09.185006 2708 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.1-a-aaf56335e8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.185027 kubelet[2708]: I0430 13:31:09.185028 2708 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.185884 kubelet[2708]: E0430 13:31:09.185871 2708 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.185909 kubelet[2708]: I0430 13:31:09.185885 2708 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.186626 kubelet[2708]: E0430 13:31:09.186615 2708 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.1.1-a-aaf56335e8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.375267 kubelet[2708]: I0430 13:31:09.375171 2708 apiserver.go:52] "Watching apiserver" Apr 30 13:31:09.381294 kubelet[2708]: I0430 13:31:09.381230 2708 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 13:31:09.411748 kubelet[2708]: I0430 13:31:09.411665 2708 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.414482 kubelet[2708]: I0430 13:31:09.414447 2708 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.416336 kubelet[2708]: E0430 13:31:09.416243 2708 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.417009 kubelet[2708]: I0430 13:31:09.416973 2708 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.418249 kubelet[2708]: E0430 13:31:09.418182 2708 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.1-a-aaf56335e8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:09.420559 kubelet[2708]: E0430 13:31:09.420515 2708 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.1.1-a-aaf56335e8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:10.419414 kubelet[2708]: I0430 13:31:10.419319 2708 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:10.420350 kubelet[2708]: I0430 13:31:10.419558 2708 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:10.436925 kubelet[2708]: W0430 13:31:10.436870 2708 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:31:10.437227 kubelet[2708]: W0430 13:31:10.436985 2708 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:31:11.535851 systemd[1]: Reload requested from client PID 3025 ('systemctl') (unit session-11.scope)... Apr 30 13:31:11.535860 systemd[1]: Reloading... Apr 30 13:31:11.578078 zram_generator::config[3071]: No configuration found. Apr 30 13:31:11.652715 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 13:31:11.743872 systemd[1]: Reloading finished in 207 ms. Apr 30 13:31:11.763735 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 13:31:11.771338 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 13:31:11.771784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:31:11.771888 systemd[1]: kubelet.service: Consumed 763ms CPU time, 139.7M memory peak. Apr 30 13:31:11.791543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:31:12.017626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:31:12.019758 (kubelet)[3135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 13:31:12.041956 kubelet[3135]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:31:12.041956 kubelet[3135]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 13:31:12.041956 kubelet[3135]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:31:12.042194 kubelet[3135]: I0430 13:31:12.041963 3135 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 13:31:12.045620 kubelet[3135]: I0430 13:31:12.045581 3135 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 13:31:12.045620 kubelet[3135]: I0430 13:31:12.045591 3135 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 13:31:12.046011 kubelet[3135]: I0430 13:31:12.045969 3135 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 13:31:12.046949 kubelet[3135]: I0430 13:31:12.046910 3135 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 13:31:12.048225 kubelet[3135]: I0430 13:31:12.048189 3135 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 13:31:12.049789 kubelet[3135]: E0430 13:31:12.049748 3135 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 13:31:12.049789 kubelet[3135]: I0430 13:31:12.049762 3135 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 13:31:12.057567 kubelet[3135]: I0430 13:31:12.057531 3135 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 13:31:12.057684 kubelet[3135]: I0430 13:31:12.057639 3135 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 13:31:12.057774 kubelet[3135]: I0430 13:31:12.057653 3135 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.1.1-a-aaf56335e8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 13:31:12.057774 kubelet[3135]: I0430 13:31:12.057755 3135 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 13:31:12.057774 kubelet[3135]: I0430 13:31:12.057762 3135 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 13:31:12.057862 kubelet[3135]: I0430 13:31:12.057790 3135 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:31:12.057912 kubelet[3135]: I0430 13:31:12.057907 3135 kubelet.go:446] "Attempting to sync node with API server" Apr 30 13:31:12.057930 kubelet[3135]: I0430 13:31:12.057915 3135 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 13:31:12.057930 kubelet[3135]: I0430 13:31:12.057924 3135 kubelet.go:352] "Adding apiserver pod source" Apr 30 13:31:12.057930 kubelet[3135]: I0430 13:31:12.057930 3135 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 13:31:12.059245 kubelet[3135]: I0430 13:31:12.058835 3135 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 13:31:12.059626 kubelet[3135]: I0430 13:31:12.059615 3135 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 13:31:12.059915 kubelet[3135]: I0430 13:31:12.059883 3135 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 13:31:12.059915 kubelet[3135]: I0430 13:31:12.059900 3135 server.go:1287] "Started kubelet" Apr 30 13:31:12.059978 kubelet[3135]: I0430 13:31:12.059934 3135 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 13:31:12.060008 kubelet[3135]: I0430 13:31:12.059963 3135 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 13:31:12.060150 kubelet[3135]: I0430 13:31:12.060140 3135 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 13:31:12.060696 kubelet[3135]: I0430 13:31:12.060688 3135 server.go:490] "Adding debug handlers to kubelet server" Apr 30 13:31:12.060916 kubelet[3135]: I0430 13:31:12.060908 3135 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 13:31:12.060916 kubelet[3135]: E0430 13:31:12.060908 3135 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 13:31:12.060984 kubelet[3135]: I0430 13:31:12.060912 3135 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 13:31:12.060984 kubelet[3135]: I0430 13:31:12.060934 3135 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 13:31:12.060984 kubelet[3135]: I0430 13:31:12.060952 3135 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 13:31:12.060984 kubelet[3135]: E0430 13:31:12.060964 3135 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230.1.1-a-aaf56335e8\" not found" Apr 30 13:31:12.061101 kubelet[3135]: I0430 13:31:12.061068 3135 reconciler.go:26] "Reconciler: start to sync state" Apr 30 13:31:12.061317 kubelet[3135]: I0430 13:31:12.061303 3135 factory.go:221] Registration of the systemd container factory successfully Apr 30 13:31:12.061404 kubelet[3135]: I0430 13:31:12.061381 3135 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 13:31:12.062378 kubelet[3135]: I0430 13:31:12.062361 3135 factory.go:221] Registration of the containerd container factory successfully Apr 30 13:31:12.066512 kubelet[3135]: I0430 13:31:12.066491 3135 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 13:31:12.067110 kubelet[3135]: I0430 13:31:12.067098 3135 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 13:31:12.067156 kubelet[3135]: I0430 13:31:12.067116 3135 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 13:31:12.067261 kubelet[3135]: I0430 13:31:12.067128 3135 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 30 13:31:12.067295 kubelet[3135]: I0430 13:31:12.067263 3135 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 13:31:12.067325 kubelet[3135]: E0430 13:31:12.067303 3135 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 13:31:12.076942 kubelet[3135]: I0430 13:31:12.076927 3135 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 13:31:12.076942 kubelet[3135]: I0430 13:31:12.076937 3135 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 13:31:12.076942 kubelet[3135]: I0430 13:31:12.076948 3135 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:31:12.077053 kubelet[3135]: I0430 13:31:12.077043 3135 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 13:31:12.077069 kubelet[3135]: I0430 13:31:12.077049 3135 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 13:31:12.077069 kubelet[3135]: I0430 13:31:12.077061 3135 policy_none.go:49] "None policy: Start" Apr 30 13:31:12.077069 kubelet[3135]: I0430 13:31:12.077069 3135 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 13:31:12.077111 kubelet[3135]: I0430 13:31:12.077075 3135 state_mem.go:35] "Initializing new in-memory state store" Apr 30 13:31:12.077149 kubelet[3135]: I0430 13:31:12.077134 3135 state_mem.go:75] "Updated machine memory state" Apr 30 13:31:12.078976 kubelet[3135]: I0430 13:31:12.078938 3135 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 13:31:12.079090 kubelet[3135]: I0430 13:31:12.079033 3135 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 13:31:12.079090 kubelet[3135]: I0430 13:31:12.079039 3135 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 13:31:12.079138 kubelet[3135]: I0430 13:31:12.079123 3135 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 13:31:12.079534 kubelet[3135]: E0430 13:31:12.079482 3135 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 30 13:31:12.169491 kubelet[3135]: I0430 13:31:12.169390 3135 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.169491 kubelet[3135]: I0430 13:31:12.169485 3135 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.169865 kubelet[3135]: I0430 13:31:12.169569 3135 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.177186 kubelet[3135]: W0430 13:31:12.177134 3135 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:31:12.177355 kubelet[3135]: W0430 13:31:12.177284 3135 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:31:12.177509 kubelet[3135]: E0430 13:31:12.177288 3135 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.1-a-aaf56335e8\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.177626 kubelet[3135]: W0430 13:31:12.177546 3135 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:31:12.177765 kubelet[3135]: E0430 13:31:12.177720 3135 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.1.1-a-aaf56335e8\" already exists" pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.186388 kubelet[3135]: I0430 13:31:12.186304 3135 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.212749 kubelet[3135]: I0430 13:31:12.208846 3135 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.212749 kubelet[3135]: I0430 13:31:12.209115 3135 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.362602 kubelet[3135]: I0430 13:31:12.362355 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d05bb42627e76c2490110134b4dbccc8-ca-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" (UID: \"d05bb42627e76c2490110134b4dbccc8\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.362602 kubelet[3135]: I0430 13:31:12.362505 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d05bb42627e76c2490110134b4dbccc8-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" (UID: \"d05bb42627e76c2490110134b4dbccc8\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.363120 kubelet[3135]: I0430 13:31:12.362601 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d05bb42627e76c2490110134b4dbccc8-k8s-certs\") pod \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" (UID: \"d05bb42627e76c2490110134b4dbccc8\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.363120 kubelet[3135]: I0430 
13:31:12.362699 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d05bb42627e76c2490110134b4dbccc8-kubeconfig\") pod \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" (UID: \"d05bb42627e76c2490110134b4dbccc8\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.363120 kubelet[3135]: I0430 13:31:12.362802 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edd7915d68803e35c0f5be3c2b670712-k8s-certs\") pod \"kube-apiserver-ci-4230.1.1-a-aaf56335e8\" (UID: \"edd7915d68803e35c0f5be3c2b670712\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.363120 kubelet[3135]: I0430 13:31:12.362897 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edd7915d68803e35c0f5be3c2b670712-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.1.1-a-aaf56335e8\" (UID: \"edd7915d68803e35c0f5be3c2b670712\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.363120 kubelet[3135]: I0430 13:31:12.362997 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b1cef219c8cb458eabd0856a42eaff9-kubeconfig\") pod \"kube-scheduler-ci-4230.1.1-a-aaf56335e8\" (UID: \"0b1cef219c8cb458eabd0856a42eaff9\") " pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.363685 kubelet[3135]: I0430 13:31:12.363116 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edd7915d68803e35c0f5be3c2b670712-ca-certs\") pod \"kube-apiserver-ci-4230.1.1-a-aaf56335e8\" (UID: \"edd7915d68803e35c0f5be3c2b670712\") " pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.363685 kubelet[3135]: I0430 13:31:12.363213 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d05bb42627e76c2490110134b4dbccc8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" (UID: \"d05bb42627e76c2490110134b4dbccc8\") " pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:12.562544 sudo[3181]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 13:31:12.563352 sudo[3181]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 13:31:12.940756 sudo[3181]: pam_unix(sudo:session): session closed for user root Apr 30 13:31:13.058620 kubelet[3135]: I0430 13:31:13.058576 3135 apiserver.go:52] "Watching apiserver" Apr 30 13:31:13.061937 kubelet[3135]: I0430 13:31:13.061900 3135 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 13:31:13.070737 kubelet[3135]: I0430 13:31:13.070710 3135 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:13.070842 kubelet[3135]: I0430 13:31:13.070833 3135 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:13.070890 kubelet[3135]: I0430 13:31:13.070882 3135 
kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:13.073649 kubelet[3135]: W0430 13:31:13.073638 3135 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:31:13.073698 kubelet[3135]: E0430 13:31:13.073687 3135 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.1.1-a-aaf56335e8\" already exists" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:13.073886 kubelet[3135]: W0430 13:31:13.073879 3135 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:31:13.073910 kubelet[3135]: W0430 13:31:13.073893 3135 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Apr 30 13:31:13.073910 kubelet[3135]: E0430 13:31:13.073907 3135 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.1.1-a-aaf56335e8\" already exists" pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:13.073951 kubelet[3135]: E0430 13:31:13.073932 3135 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.1.1-a-aaf56335e8\" already exists" pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" Apr 30 13:31:13.080465 kubelet[3135]: I0430 13:31:13.080441 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.1.1-a-aaf56335e8" podStartSLOduration=3.080432698 podStartE2EDuration="3.080432698s" podCreationTimestamp="2025-04-30 13:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:31:13.080380282 +0000 UTC m=+1.058790512" watchObservedRunningTime="2025-04-30 13:31:13.080432698 +0000 UTC m=+1.058842926" Apr 30 13:31:13.087926 kubelet[3135]: I0430 13:31:13.087900 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.1.1-a-aaf56335e8" podStartSLOduration=3.087891102 podStartE2EDuration="3.087891102s" podCreationTimestamp="2025-04-30 13:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:31:13.08403193 +0000 UTC m=+1.062442162" watchObservedRunningTime="2025-04-30 13:31:13.087891102 +0000 UTC m=+1.066301333" Apr 30 13:31:13.088029 kubelet[3135]: I0430 13:31:13.087955 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.1.1-a-aaf56335e8" podStartSLOduration=1.087950538 podStartE2EDuration="1.087950538s" podCreationTimestamp="2025-04-30 13:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:31:13.087930706 +0000 UTC m=+1.066340937" watchObservedRunningTime="2025-04-30 13:31:13.087950538 +0000 UTC m=+1.066360768" Apr 30 13:31:14.206511 sudo[2092]: pam_unix(sudo:session): session closed for user root Apr 30 13:31:14.207454 sshd[2091]: Connection closed by 147.75.109.163 port 60138 Apr 30 13:31:14.207661 sshd-session[2088]: pam_unix(sshd:session): session closed for user core Apr 30 13:31:14.209711 
systemd[1]: sshd@8-147.75.202.179:22-147.75.109.163:60138.service: Deactivated successfully. Apr 30 13:31:14.210954 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 13:31:14.211081 systemd[1]: session-11.scope: Consumed 3.446s CPU time, 268.8M memory peak. Apr 30 13:31:14.212420 systemd-logind[1794]: Session 11 logged out. Waiting for processes to exit. Apr 30 13:31:14.213291 systemd-logind[1794]: Removed session 11. Apr 30 13:31:17.253490 kubelet[3135]: I0430 13:31:17.253410 3135 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 13:31:17.255060 kubelet[3135]: I0430 13:31:17.254602 3135 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 13:31:17.255213 containerd[1804]: time="2025-04-30T13:31:17.254133648Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 13:31:18.299888 systemd[1]: Created slice kubepods-besteffort-pod2e2029ef_08d6_4793_bd40_0f286fb2c307.slice - libcontainer container kubepods-besteffort-pod2e2029ef_08d6_4793_bd40_0f286fb2c307.slice. Apr 30 13:31:18.326750 systemd[1]: Created slice kubepods-burstable-pod245cdc3f_5ba4_4e08_b454_20b6222d4d5b.slice - libcontainer container kubepods-burstable-pod245cdc3f_5ba4_4e08_b454_20b6222d4d5b.slice. Apr 30 13:31:18.405754 kubelet[3135]: I0430 13:31:18.405676 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f8nm\" (UniqueName: \"kubernetes.io/projected/2e2029ef-08d6-4793-bd40-0f286fb2c307-kube-api-access-9f8nm\") pod \"kube-proxy-55g5l\" (UID: \"2e2029ef-08d6-4793-bd40-0f286fb2c307\") " pod="kube-system/kube-proxy-55g5l" Apr 30 13:31:18.406722 kubelet[3135]: I0430 13:31:18.405785 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-cgroup\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.406722 kubelet[3135]: I0430 13:31:18.405861 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e2029ef-08d6-4793-bd40-0f286fb2c307-lib-modules\") pod \"kube-proxy-55g5l\" (UID: \"2e2029ef-08d6-4793-bd40-0f286fb2c307\") " pod="kube-system/kube-proxy-55g5l" Apr 30 13:31:18.406722 kubelet[3135]: I0430 13:31:18.405929 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cni-path\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.406722 kubelet[3135]: I0430 13:31:18.406007 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-etc-cni-netd\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.406722 kubelet[3135]: I0430 13:31:18.406109 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-clustermesh-secrets\") pod \"cilium-58vw8\" (UID: 
\"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.406722 kubelet[3135]: I0430 13:31:18.406163 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-config-path\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.407360 kubelet[3135]: I0430 13:31:18.406217 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-run\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.407360 kubelet[3135]: I0430 13:31:18.406267 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-host-proc-sys-net\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.407360 kubelet[3135]: I0430 13:31:18.406312 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zs6l\" (UniqueName: \"kubernetes.io/projected/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-kube-api-access-5zs6l\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.407360 kubelet[3135]: I0430 13:31:18.406365 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-xtables-lock\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.407360 kubelet[3135]: I0430 13:31:18.406523 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-bpf-maps\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.407360 kubelet[3135]: I0430 13:31:18.406663 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-hostproc\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.407902 kubelet[3135]: I0430 13:31:18.406757 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-host-proc-sys-kernel\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.407902 kubelet[3135]: I0430 13:31:18.406862 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-hubble-tls\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.407902 kubelet[3135]: I0430 13:31:18.406971 3135 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e2029ef-08d6-4793-bd40-0f286fb2c307-kube-proxy\") pod \"kube-proxy-55g5l\" (UID: \"2e2029ef-08d6-4793-bd40-0f286fb2c307\") " pod="kube-system/kube-proxy-55g5l" Apr 30 13:31:18.407902 kubelet[3135]: I0430 13:31:18.407078 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e2029ef-08d6-4793-bd40-0f286fb2c307-xtables-lock\") pod \"kube-proxy-55g5l\" (UID: \"2e2029ef-08d6-4793-bd40-0f286fb2c307\") " pod="kube-system/kube-proxy-55g5l" Apr 30 13:31:18.407902 kubelet[3135]: I0430 13:31:18.407164 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-lib-modules\") pod \"cilium-58vw8\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " pod="kube-system/cilium-58vw8" Apr 30 13:31:18.415358 systemd[1]: Created slice kubepods-besteffort-pod163254bd_bdd1_42dd_bd6e_355683c30e48.slice - libcontainer container kubepods-besteffort-pod163254bd_bdd1_42dd_bd6e_355683c30e48.slice. Apr 30 13:31:18.508159 kubelet[3135]: I0430 13:31:18.508005 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/163254bd-bdd1-42dd-bd6e-355683c30e48-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qpx42\" (UID: \"163254bd-bdd1-42dd-bd6e-355683c30e48\") " pod="kube-system/cilium-operator-6c4d7847fc-qpx42" Apr 30 13:31:18.508467 kubelet[3135]: I0430 13:31:18.508256 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hp2s\" (UniqueName: \"kubernetes.io/projected/163254bd-bdd1-42dd-bd6e-355683c30e48-kube-api-access-2hp2s\") pod \"cilium-operator-6c4d7847fc-qpx42\" (UID: \"163254bd-bdd1-42dd-bd6e-355683c30e48\") " pod="kube-system/cilium-operator-6c4d7847fc-qpx42" Apr 30 13:31:18.626716 containerd[1804]: time="2025-04-30T13:31:18.626603456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55g5l,Uid:2e2029ef-08d6-4793-bd40-0f286fb2c307,Namespace:kube-system,Attempt:0,}" Apr 30 13:31:18.629051 containerd[1804]: time="2025-04-30T13:31:18.629037699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-58vw8,Uid:245cdc3f-5ba4-4e08-b454-20b6222d4d5b,Namespace:kube-system,Attempt:0,}" Apr 30 13:31:18.638457 containerd[1804]: time="2025-04-30T13:31:18.638414288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:31:18.638457 containerd[1804]: time="2025-04-30T13:31:18.638444389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:31:18.638457 containerd[1804]: time="2025-04-30T13:31:18.638451285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:18.638563 containerd[1804]: time="2025-04-30T13:31:18.638490209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:18.639660 containerd[1804]: time="2025-04-30T13:31:18.639431753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:31:18.639730 containerd[1804]: time="2025-04-30T13:31:18.639668452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:31:18.639730 containerd[1804]: time="2025-04-30T13:31:18.639677394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:18.639829 containerd[1804]: time="2025-04-30T13:31:18.639764190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:18.663274 systemd[1]: Started cri-containerd-1a2abfb1922227c19f6256567fec7f51d090415d8e72714f63f6ba7719d34a61.scope - libcontainer container 1a2abfb1922227c19f6256567fec7f51d090415d8e72714f63f6ba7719d34a61. Apr 30 13:31:18.664991 systemd[1]: Started cri-containerd-87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4.scope - libcontainer container 87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4. Apr 30 13:31:18.674918 containerd[1804]: time="2025-04-30T13:31:18.674891399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55g5l,Uid:2e2029ef-08d6-4793-bd40-0f286fb2c307,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a2abfb1922227c19f6256567fec7f51d090415d8e72714f63f6ba7719d34a61\"" Apr 30 13:31:18.675937 containerd[1804]: time="2025-04-30T13:31:18.675919952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-58vw8,Uid:245cdc3f-5ba4-4e08-b454-20b6222d4d5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\"" Apr 30 13:31:18.676447 containerd[1804]: time="2025-04-30T13:31:18.676430810Z" level=info msg="CreateContainer within sandbox \"1a2abfb1922227c19f6256567fec7f51d090415d8e72714f63f6ba7719d34a61\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 13:31:18.676703 containerd[1804]: time="2025-04-30T13:31:18.676688848Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 13:31:18.681899 containerd[1804]: time="2025-04-30T13:31:18.681882703Z" level=info msg="CreateContainer within sandbox \"1a2abfb1922227c19f6256567fec7f51d090415d8e72714f63f6ba7719d34a61\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a5d8e4fb7e1955c0a11cc75ad88023ac974bae772b6170556d73a1f79ecdae37\"" Apr 30 13:31:18.682243 containerd[1804]: time="2025-04-30T13:31:18.682193179Z" level=info msg="StartContainer for \"a5d8e4fb7e1955c0a11cc75ad88023ac974bae772b6170556d73a1f79ecdae37\"" Apr 30 13:31:18.710105 systemd[1]: Started cri-containerd-a5d8e4fb7e1955c0a11cc75ad88023ac974bae772b6170556d73a1f79ecdae37.scope - libcontainer container a5d8e4fb7e1955c0a11cc75ad88023ac974bae772b6170556d73a1f79ecdae37. 
Apr 30 13:31:18.719158 containerd[1804]: time="2025-04-30T13:31:18.719101162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qpx42,Uid:163254bd-bdd1-42dd-bd6e-355683c30e48,Namespace:kube-system,Attempt:0,}" Apr 30 13:31:18.724911 containerd[1804]: time="2025-04-30T13:31:18.724888269Z" level=info msg="StartContainer for \"a5d8e4fb7e1955c0a11cc75ad88023ac974bae772b6170556d73a1f79ecdae37\" returns successfully" Apr 30 13:31:18.728615 containerd[1804]: time="2025-04-30T13:31:18.728550479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:31:18.728615 containerd[1804]: time="2025-04-30T13:31:18.728605125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:31:18.728898 containerd[1804]: time="2025-04-30T13:31:18.728624283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:18.728957 containerd[1804]: time="2025-04-30T13:31:18.728938422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:18.748315 systemd[1]: Started cri-containerd-3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8.scope - libcontainer container 3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8. Apr 30 13:31:18.769977 containerd[1804]: time="2025-04-30T13:31:18.769956768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qpx42,Uid:163254bd-bdd1-42dd-bd6e-355683c30e48,Namespace:kube-system,Attempt:0,} returns sandbox id \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\"" Apr 30 13:31:21.480461 update_engine[1799]: I20250430 13:31:21.480281 1799 update_attempter.cc:509] Updating boot flags... Apr 30 13:31:21.512051 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (3570) Apr 30 13:31:21.538025 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (3572) Apr 30 13:31:21.675580 kubelet[3135]: I0430 13:31:21.675474 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-55g5l" podStartSLOduration=3.675435893 podStartE2EDuration="3.675435893s" podCreationTimestamp="2025-04-30 13:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:31:19.108942859 +0000 UTC m=+7.087353149" watchObservedRunningTime="2025-04-30 13:31:21.675435893 +0000 UTC m=+9.653846192" Apr 30 13:31:26.131795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728868965.mount: Deactivated successfully. 
Apr 30 13:31:26.934879 containerd[1804]: time="2025-04-30T13:31:26.934822826Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:26.935089 containerd[1804]: time="2025-04-30T13:31:26.935010418Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 30 13:31:26.935371 containerd[1804]: time="2025-04-30T13:31:26.935332238Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:26.936281 containerd[1804]: time="2025-04-30T13:31:26.936231220Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.259521433s" Apr 30 13:31:26.936281 containerd[1804]: time="2025-04-30T13:31:26.936250918Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 30 13:31:26.936750 containerd[1804]: time="2025-04-30T13:31:26.936736567Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 13:31:26.937282 containerd[1804]: time="2025-04-30T13:31:26.937242924Z" level=info msg="CreateContainer within sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 13:31:26.941865 containerd[1804]: time="2025-04-30T13:31:26.941846550Z" level=info msg="CreateContainer within sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0\"" Apr 30 13:31:26.942088 containerd[1804]: time="2025-04-30T13:31:26.942074380Z" level=info msg="StartContainer for \"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0\"" Apr 30 13:31:26.957279 systemd[1]: Started cri-containerd-d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0.scope - libcontainer container d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0. Apr 30 13:31:26.957971 systemd[1]: Started sshd@9-147.75.202.179:22-83.97.24.41:51210.service - OpenSSH per-connection server daemon (83.97.24.41:51210). Apr 30 13:31:26.968448 containerd[1804]: time="2025-04-30T13:31:26.968399066Z" level=info msg="StartContainer for \"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0\" returns successfully" Apr 30 13:31:26.973461 systemd[1]: cri-containerd-d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0.scope: Deactivated successfully. Apr 30 13:31:26.982249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0-rootfs.mount: Deactivated successfully. 
Apr 30 13:31:27.944550 sshd[3626]: Invalid user mukund from 83.97.24.41 port 51210 Apr 30 13:31:28.123065 sshd[3626]: Received disconnect from 83.97.24.41 port 51210:11: Bye Bye [preauth] Apr 30 13:31:28.123065 sshd[3626]: Disconnected from invalid user mukund 83.97.24.41 port 51210 [preauth] Apr 30 13:31:28.124389 systemd[1]: sshd@9-147.75.202.179:22-83.97.24.41:51210.service: Deactivated successfully. Apr 30 13:31:28.135767 containerd[1804]: time="2025-04-30T13:31:28.135716998Z" level=info msg="shim disconnected" id=d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0 namespace=k8s.io Apr 30 13:31:28.135767 containerd[1804]: time="2025-04-30T13:31:28.135762925Z" level=warning msg="cleaning up after shim disconnected" id=d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0 namespace=k8s.io Apr 30 13:31:28.135767 containerd[1804]: time="2025-04-30T13:31:28.135768288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:31:29.123046 containerd[1804]: time="2025-04-30T13:31:29.122946366Z" level=info msg="CreateContainer within sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 13:31:29.131696 containerd[1804]: time="2025-04-30T13:31:29.131656047Z" level=info msg="CreateContainer within sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9\"" Apr 30 13:31:29.131962 containerd[1804]: time="2025-04-30T13:31:29.131944375Z" level=info msg="StartContainer for \"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9\"" Apr 30 13:31:29.162329 systemd[1]: Started cri-containerd-3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9.scope - libcontainer container 3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9. Apr 30 13:31:29.175318 containerd[1804]: time="2025-04-30T13:31:29.175277769Z" level=info msg="StartContainer for \"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9\" returns successfully" Apr 30 13:31:29.183099 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 13:31:29.183271 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:31:29.183424 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:31:29.195525 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:31:29.196827 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 30 13:31:29.197161 systemd[1]: cri-containerd-3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9.scope: Deactivated successfully. Apr 30 13:31:29.201803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 13:31:29.204113 containerd[1804]: time="2025-04-30T13:31:29.204082561Z" level=info msg="shim disconnected" id=3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9 namespace=k8s.io Apr 30 13:31:29.204183 containerd[1804]: time="2025-04-30T13:31:29.204114089Z" level=warning msg="cleaning up after shim disconnected" id=3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9 namespace=k8s.io Apr 30 13:31:29.204183 containerd[1804]: time="2025-04-30T13:31:29.204119051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:31:30.129590 containerd[1804]: time="2025-04-30T13:31:30.129493608Z" level=info msg="CreateContainer within sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 13:31:30.135813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9-rootfs.mount: Deactivated successfully. Apr 30 13:31:30.142190 containerd[1804]: time="2025-04-30T13:31:30.142169390Z" level=info msg="CreateContainer within sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72\"" Apr 30 13:31:30.142519 containerd[1804]: time="2025-04-30T13:31:30.142504584Z" level=info msg="StartContainer for \"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72\"" Apr 30 13:31:30.170189 systemd[1]: Started cri-containerd-f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72.scope - libcontainer container f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72. Apr 30 13:31:30.184257 containerd[1804]: time="2025-04-30T13:31:30.184234598Z" level=info msg="StartContainer for \"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72\" returns successfully" Apr 30 13:31:30.185497 systemd[1]: cri-containerd-f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72.scope: Deactivated successfully. Apr 30 13:31:30.207413 containerd[1804]: time="2025-04-30T13:31:30.207383255Z" level=info msg="shim disconnected" id=f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72 namespace=k8s.io Apr 30 13:31:30.207413 containerd[1804]: time="2025-04-30T13:31:30.207411954Z" level=warning msg="cleaning up after shim disconnected" id=f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72 namespace=k8s.io Apr 30 13:31:30.207512 containerd[1804]: time="2025-04-30T13:31:30.207419120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:31:31.135853 containerd[1804]: time="2025-04-30T13:31:31.135819467Z" level=info msg="CreateContainer within sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 13:31:31.135974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72-rootfs.mount: Deactivated successfully. Apr 30 13:31:31.143900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788099219.mount: Deactivated successfully. 
Apr 30 13:31:31.144271 containerd[1804]: time="2025-04-30T13:31:31.144241287Z" level=info msg="CreateContainer within sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654\"" Apr 30 13:31:31.145000 containerd[1804]: time="2025-04-30T13:31:31.144980518Z" level=info msg="StartContainer for \"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654\"" Apr 30 13:31:31.176285 systemd[1]: Started cri-containerd-a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654.scope - libcontainer container a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654. Apr 30 13:31:31.188805 systemd[1]: cri-containerd-a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654.scope: Deactivated successfully. Apr 30 13:31:31.197207 containerd[1804]: time="2025-04-30T13:31:31.197153461Z" level=info msg="StartContainer for \"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654\" returns successfully" Apr 30 13:31:31.230196 containerd[1804]: time="2025-04-30T13:31:31.230081686Z" level=info msg="shim disconnected" id=a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654 namespace=k8s.io Apr 30 13:31:31.230196 containerd[1804]: time="2025-04-30T13:31:31.230189357Z" level=warning msg="cleaning up after shim disconnected" id=a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654 namespace=k8s.io Apr 30 13:31:31.230610 containerd[1804]: time="2025-04-30T13:31:31.230214088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:31:32.135702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654-rootfs.mount: Deactivated successfully. Apr 30 13:31:32.153971 containerd[1804]: time="2025-04-30T13:31:32.153855422Z" level=info msg="CreateContainer within sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 13:31:32.166318 containerd[1804]: time="2025-04-30T13:31:32.166294506Z" level=info msg="CreateContainer within sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\"" Apr 30 13:31:32.166665 containerd[1804]: time="2025-04-30T13:31:32.166651295Z" level=info msg="StartContainer for \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\"" Apr 30 13:31:32.199317 systemd[1]: Started cri-containerd-beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78.scope - libcontainer container beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78. Apr 30 13:31:32.214599 containerd[1804]: time="2025-04-30T13:31:32.214547229Z" level=info msg="StartContainer for \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\" returns successfully" Apr 30 13:31:32.320582 kubelet[3135]: I0430 13:31:32.320558 3135 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 13:31:32.333320 systemd[1]: Created slice kubepods-burstable-poddf9f0f5a_b334_4857_aacc_aa3e6deda2e8.slice - libcontainer container kubepods-burstable-poddf9f0f5a_b334_4857_aacc_aa3e6deda2e8.slice. 
Apr 30 13:31:32.335985 systemd[1]: Created slice kubepods-burstable-pod1ed0d88c_a591_4e30_92b7_ba3af0f3abb1.slice - libcontainer container kubepods-burstable-pod1ed0d88c_a591_4e30_92b7_ba3af0f3abb1.slice. Apr 30 13:31:32.409291 kubelet[3135]: I0430 13:31:32.409242 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48bvr\" (UniqueName: \"kubernetes.io/projected/1ed0d88c-a591-4e30-92b7-ba3af0f3abb1-kube-api-access-48bvr\") pod \"coredns-668d6bf9bc-glc2w\" (UID: \"1ed0d88c-a591-4e30-92b7-ba3af0f3abb1\") " pod="kube-system/coredns-668d6bf9bc-glc2w" Apr 30 13:31:32.409291 kubelet[3135]: I0430 13:31:32.409270 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ed0d88c-a591-4e30-92b7-ba3af0f3abb1-config-volume\") pod \"coredns-668d6bf9bc-glc2w\" (UID: \"1ed0d88c-a591-4e30-92b7-ba3af0f3abb1\") " pod="kube-system/coredns-668d6bf9bc-glc2w" Apr 30 13:31:32.409291 kubelet[3135]: I0430 13:31:32.409284 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df9f0f5a-b334-4857-aacc-aa3e6deda2e8-config-volume\") pod \"coredns-668d6bf9bc-2phl8\" (UID: \"df9f0f5a-b334-4857-aacc-aa3e6deda2e8\") " pod="kube-system/coredns-668d6bf9bc-2phl8" Apr 30 13:31:32.409387 kubelet[3135]: I0430 13:31:32.409301 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nbpz\" (UniqueName: \"kubernetes.io/projected/df9f0f5a-b334-4857-aacc-aa3e6deda2e8-kube-api-access-7nbpz\") pod \"coredns-668d6bf9bc-2phl8\" (UID: \"df9f0f5a-b334-4857-aacc-aa3e6deda2e8\") " pod="kube-system/coredns-668d6bf9bc-2phl8" Apr 30 13:31:32.636639 containerd[1804]: time="2025-04-30T13:31:32.636512197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2phl8,Uid:df9f0f5a-b334-4857-aacc-aa3e6deda2e8,Namespace:kube-system,Attempt:0,}" Apr 30 13:31:32.638100 containerd[1804]: time="2025-04-30T13:31:32.638054915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glc2w,Uid:1ed0d88c-a591-4e30-92b7-ba3af0f3abb1,Namespace:kube-system,Attempt:0,}" Apr 30 13:31:33.152544 kubelet[3135]: I0430 13:31:33.152511 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-58vw8" podStartSLOduration=6.892239544 podStartE2EDuration="15.152497376s" podCreationTimestamp="2025-04-30 13:31:18 +0000 UTC" firstStartedPulling="2025-04-30 13:31:18.6764287 +0000 UTC m=+6.654838932" lastFinishedPulling="2025-04-30 13:31:26.936686533 +0000 UTC m=+14.915096764" observedRunningTime="2025-04-30 13:31:33.152230523 +0000 UTC m=+21.130640758" watchObservedRunningTime="2025-04-30 13:31:33.152497376 +0000 UTC m=+21.130907605" Apr 30 13:31:33.775803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338299132.mount: Deactivated successfully. 
Apr 30 13:31:33.967717 containerd[1804]: time="2025-04-30T13:31:33.967663937Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:33.967927 containerd[1804]: time="2025-04-30T13:31:33.967802535Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 30 13:31:33.968245 containerd[1804]: time="2025-04-30T13:31:33.968205315Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:31:33.968906 containerd[1804]: time="2025-04-30T13:31:33.968884683Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.032131767s" Apr 30 13:31:33.968906 containerd[1804]: time="2025-04-30T13:31:33.968902277Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 30 13:31:33.969864 containerd[1804]: time="2025-04-30T13:31:33.969852065Z" level=info msg="CreateContainer within sandbox \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 13:31:33.974029 containerd[1804]: time="2025-04-30T13:31:33.973983791Z" level=info msg="CreateContainer within sandbox \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\"" Apr 30 13:31:33.974209 containerd[1804]: time="2025-04-30T13:31:33.974193919Z" level=info msg="StartContainer for \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\"" Apr 30 13:31:33.993171 systemd[1]: Started cri-containerd-ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44.scope - libcontainer container ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44. 
Apr 30 13:31:34.004613 containerd[1804]: time="2025-04-30T13:31:34.004589911Z" level=info msg="StartContainer for \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\" returns successfully" Apr 30 13:31:34.166973 kubelet[3135]: I0430 13:31:34.166859 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qpx42" podStartSLOduration=0.967993503 podStartE2EDuration="16.166820023s" podCreationTimestamp="2025-04-30 13:31:18 +0000 UTC" firstStartedPulling="2025-04-30 13:31:18.770479707 +0000 UTC m=+6.748889937" lastFinishedPulling="2025-04-30 13:31:33.969306225 +0000 UTC m=+21.947716457" observedRunningTime="2025-04-30 13:31:34.166683633 +0000 UTC m=+22.145093933" watchObservedRunningTime="2025-04-30 13:31:34.166820023 +0000 UTC m=+22.145230308" Apr 30 13:31:38.032203 systemd-networkd[1724]: cilium_host: Link UP Apr 30 13:31:38.032309 systemd-networkd[1724]: cilium_net: Link UP Apr 30 13:31:38.032472 systemd-networkd[1724]: cilium_net: Gained carrier Apr 30 13:31:38.032596 systemd-networkd[1724]: cilium_host: Gained carrier Apr 30 13:31:38.078538 systemd-networkd[1724]: cilium_vxlan: Link UP Apr 30 13:31:38.078542 systemd-networkd[1724]: cilium_vxlan: Gained carrier Apr 30 13:31:38.215026 kernel: NET: Registered PF_ALG protocol family Apr 30 13:31:38.252153 systemd-networkd[1724]: cilium_host: Gained IPv6LL Apr 30 13:31:38.276102 systemd-networkd[1724]: cilium_net: Gained IPv6LL Apr 30 13:31:38.657720 systemd-networkd[1724]: lxc_health: Link UP Apr 30 13:31:38.658160 systemd-networkd[1724]: lxc_health: Gained carrier Apr 30 13:31:39.223054 kernel: eth0: renamed from tmp04507 Apr 30 13:31:39.234074 kernel: eth0: renamed from tmpfd8ce Apr 30 13:31:39.244696 systemd-networkd[1724]: lxcb772ac8a1ef6: Link UP Apr 30 13:31:39.244909 systemd-networkd[1724]: lxccb6184c71254: Link UP Apr 30 13:31:39.245241 systemd-networkd[1724]: lxcb772ac8a1ef6: Gained carrier Apr 30 13:31:39.245342 systemd-networkd[1724]: lxccb6184c71254: Gained carrier Apr 30 13:31:39.995186 systemd-networkd[1724]: cilium_vxlan: Gained IPv6LL Apr 30 13:31:40.187191 systemd-networkd[1724]: lxc_health: Gained IPv6LL Apr 30 13:31:40.507173 systemd-networkd[1724]: lxcb772ac8a1ef6: Gained IPv6LL Apr 30 13:31:41.211202 systemd-networkd[1724]: lxccb6184c71254: Gained IPv6LL Apr 30 13:31:41.534437 containerd[1804]: time="2025-04-30T13:31:41.534347497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:31:41.534437 containerd[1804]: time="2025-04-30T13:31:41.534376574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:31:41.534437 containerd[1804]: time="2025-04-30T13:31:41.534383565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:41.534437 containerd[1804]: time="2025-04-30T13:31:41.534346095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:31:41.534437 containerd[1804]: time="2025-04-30T13:31:41.534376718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:31:41.534437 containerd[1804]: time="2025-04-30T13:31:41.534383764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:41.534437 containerd[1804]: time="2025-04-30T13:31:41.534423845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:41.534437 containerd[1804]: time="2025-04-30T13:31:41.534423596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:31:41.560465 systemd[1]: Started cri-containerd-04507739ce24ccfddd2f5f43f3c4488927c6507e66b1a0b89dd90b2dd1b8fd22.scope - libcontainer container 04507739ce24ccfddd2f5f43f3c4488927c6507e66b1a0b89dd90b2dd1b8fd22. Apr 30 13:31:41.563837 systemd[1]: Started cri-containerd-fd8ce525db50bc9c6026dbc9f89b3eff5d4bda5c3b7aa0bed6ba50afad31b2d9.scope - libcontainer container fd8ce525db50bc9c6026dbc9f89b3eff5d4bda5c3b7aa0bed6ba50afad31b2d9. Apr 30 13:31:41.628399 containerd[1804]: time="2025-04-30T13:31:41.628370188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2phl8,Uid:df9f0f5a-b334-4857-aacc-aa3e6deda2e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"04507739ce24ccfddd2f5f43f3c4488927c6507e66b1a0b89dd90b2dd1b8fd22\"" Apr 30 13:31:41.629820 containerd[1804]: time="2025-04-30T13:31:41.629800029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-glc2w,Uid:1ed0d88c-a591-4e30-92b7-ba3af0f3abb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd8ce525db50bc9c6026dbc9f89b3eff5d4bda5c3b7aa0bed6ba50afad31b2d9\"" Apr 30 13:31:41.629878 containerd[1804]: time="2025-04-30T13:31:41.629818782Z" level=info msg="CreateContainer within sandbox \"04507739ce24ccfddd2f5f43f3c4488927c6507e66b1a0b89dd90b2dd1b8fd22\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 13:31:41.630960 containerd[1804]: time="2025-04-30T13:31:41.630943393Z" level=info msg="CreateContainer within sandbox \"fd8ce525db50bc9c6026dbc9f89b3eff5d4bda5c3b7aa0bed6ba50afad31b2d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 13:31:41.635536 containerd[1804]: time="2025-04-30T13:31:41.635511248Z" level=info msg="CreateContainer within sandbox \"04507739ce24ccfddd2f5f43f3c4488927c6507e66b1a0b89dd90b2dd1b8fd22\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05f936320c4dd0a88bf57238c7124cf34d6eaba5109f7b121e7f96a8eb18f32d\"" Apr 30 13:31:41.635793 containerd[1804]: time="2025-04-30T13:31:41.635778846Z" level=info msg="StartContainer for \"05f936320c4dd0a88bf57238c7124cf34d6eaba5109f7b121e7f96a8eb18f32d\"" Apr 30 13:31:41.636417 containerd[1804]: time="2025-04-30T13:31:41.636402740Z" level=info msg="CreateContainer within sandbox \"fd8ce525db50bc9c6026dbc9f89b3eff5d4bda5c3b7aa0bed6ba50afad31b2d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc5883c2ad41bbaf380c8db77f9e716c647126bd3e19393b066e15e757a71a2a\"" Apr 30 13:31:41.636599 containerd[1804]: time="2025-04-30T13:31:41.636586357Z" level=info msg="StartContainer for \"fc5883c2ad41bbaf380c8db77f9e716c647126bd3e19393b066e15e757a71a2a\"" Apr 30 13:31:41.668386 systemd[1]: Started cri-containerd-05f936320c4dd0a88bf57238c7124cf34d6eaba5109f7b121e7f96a8eb18f32d.scope - libcontainer container 05f936320c4dd0a88bf57238c7124cf34d6eaba5109f7b121e7f96a8eb18f32d. Apr 30 13:31:41.671362 systemd[1]: Started cri-containerd-fc5883c2ad41bbaf380c8db77f9e716c647126bd3e19393b066e15e757a71a2a.scope - libcontainer container fc5883c2ad41bbaf380c8db77f9e716c647126bd3e19393b066e15e757a71a2a. 
Apr 30 13:31:41.710320 containerd[1804]: time="2025-04-30T13:31:41.710278674Z" level=info msg="StartContainer for \"05f936320c4dd0a88bf57238c7124cf34d6eaba5109f7b121e7f96a8eb18f32d\" returns successfully" Apr 30 13:31:41.711924 containerd[1804]: time="2025-04-30T13:31:41.711908842Z" level=info msg="StartContainer for \"fc5883c2ad41bbaf380c8db77f9e716c647126bd3e19393b066e15e757a71a2a\" returns successfully" Apr 30 13:31:42.185825 kubelet[3135]: I0430 13:31:42.185731 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-glc2w" podStartSLOduration=24.185689723 podStartE2EDuration="24.185689723s" podCreationTimestamp="2025-04-30 13:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:31:42.185254485 +0000 UTC m=+30.163664785" watchObservedRunningTime="2025-04-30 13:31:42.185689723 +0000 UTC m=+30.164100006" Apr 30 13:31:42.203101 kubelet[3135]: I0430 13:31:42.202958 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2phl8" podStartSLOduration=24.202905798 podStartE2EDuration="24.202905798s" podCreationTimestamp="2025-04-30 13:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:31:42.201960129 +0000 UTC m=+30.180370435" watchObservedRunningTime="2025-04-30 13:31:42.202905798 +0000 UTC m=+30.181316068" Apr 30 13:31:46.422061 kubelet[3135]: I0430 13:31:46.421922 3135 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 30 13:32:46.723223 systemd[1]: Started sshd@10-147.75.202.179:22-103.215.80.141:59860.service - OpenSSH per-connection server daemon (103.215.80.141:59860). Apr 30 13:32:47.386854 sshd[4727]: Invalid user islam from 103.215.80.141 port 59860 Apr 30 13:32:47.550796 sshd[4727]: Received disconnect from 103.215.80.141 port 59860:11: Bye Bye [preauth] Apr 30 13:32:47.550796 sshd[4727]: Disconnected from invalid user islam 103.215.80.141 port 59860 [preauth] Apr 30 13:32:47.551651 systemd[1]: sshd@10-147.75.202.179:22-103.215.80.141:59860.service: Deactivated successfully. Apr 30 13:34:02.170204 systemd[1]: Started sshd@11-147.75.202.179:22-116.204.182.224:45990.service - OpenSSH per-connection server daemon (116.204.182.224:45990). Apr 30 13:34:03.340829 sshd[4740]: Invalid user centos from 116.204.182.224 port 45990 Apr 30 13:34:03.552160 sshd[4740]: Received disconnect from 116.204.182.224 port 45990:11: Bye Bye [preauth] Apr 30 13:34:03.552160 sshd[4740]: Disconnected from invalid user centos 116.204.182.224 port 45990 [preauth] Apr 30 13:34:03.552912 systemd[1]: sshd@11-147.75.202.179:22-116.204.182.224:45990.service: Deactivated successfully. Apr 30 13:34:49.146111 systemd[1]: Started sshd@12-147.75.202.179:22-88.214.48.11:55470.service - OpenSSH per-connection server daemon (88.214.48.11:55470). Apr 30 13:34:50.622477 sshd[4751]: Invalid user nexus from 88.214.48.11 port 55470 Apr 30 13:34:51.313178 sshd[4751]: Connection closed by invalid user nexus 88.214.48.11 port 55470 [preauth] Apr 30 13:34:51.317579 systemd[1]: sshd@12-147.75.202.179:22-88.214.48.11:55470.service: Deactivated successfully. Apr 30 13:35:30.813806 systemd[1]: Started sshd@13-147.75.202.179:22-14.128.54.101:52476.service - OpenSSH per-connection server daemon (14.128.54.101:52476). 
Apr 30 13:35:31.813355 sshd[4760]: Received disconnect from 14.128.54.101 port 52476:11: Bye Bye [preauth] Apr 30 13:35:31.813355 sshd[4760]: Disconnected from authenticating user root 14.128.54.101 port 52476 [preauth] Apr 30 13:35:31.814168 systemd[1]: sshd@13-147.75.202.179:22-14.128.54.101:52476.service: Deactivated successfully. Apr 30 13:36:00.154194 systemd[1]: Started sshd@14-147.75.202.179:22-83.97.24.41:60328.service - OpenSSH per-connection server daemon (83.97.24.41:60328). Apr 30 13:36:01.307569 sshd[4772]: Received disconnect from 83.97.24.41 port 60328:11: Bye Bye [preauth] Apr 30 13:36:01.307569 sshd[4772]: Disconnected from authenticating user root 83.97.24.41 port 60328 [preauth] Apr 30 13:36:01.310824 systemd[1]: sshd@14-147.75.202.179:22-83.97.24.41:60328.service: Deactivated successfully. Apr 30 13:37:16.510590 update_engine[1799]: I20250430 13:37:16.510449 1799 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 30 13:37:16.510590 update_engine[1799]: I20250430 13:37:16.510555 1799 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 30 13:37:16.511777 update_engine[1799]: I20250430 13:37:16.510946 1799 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 30 13:37:16.512122 update_engine[1799]: I20250430 13:37:16.512002 1799 omaha_request_params.cc:62] Current group set to beta Apr 30 13:37:16.512301 update_engine[1799]: I20250430 13:37:16.512261 1799 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 30 13:37:16.512301 update_engine[1799]: I20250430 13:37:16.512292 1799 update_attempter.cc:643] Scheduling an action processor start. Apr 30 13:37:16.512505 update_engine[1799]: I20250430 13:37:16.512331 1799 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 13:37:16.512505 update_engine[1799]: I20250430 13:37:16.512403 1799 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 30 13:37:16.512670 update_engine[1799]: I20250430 13:37:16.512565 1799 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 13:37:16.512670 update_engine[1799]: I20250430 13:37:16.512596 1799 omaha_request_action.cc:272] Request: Apr 30 13:37:16.512670 update_engine[1799]: Apr 30 13:37:16.512670 update_engine[1799]: Apr 30 13:37:16.512670 update_engine[1799]: Apr 30 13:37:16.512670 update_engine[1799]: Apr 30 13:37:16.512670 update_engine[1799]: Apr 30 13:37:16.512670 update_engine[1799]: Apr 30 13:37:16.512670 update_engine[1799]: Apr 30 13:37:16.512670 update_engine[1799]: Apr 30 13:37:16.512670 update_engine[1799]: I20250430 13:37:16.512614 1799 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:37:16.513605 locksmithd[1834]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 30 13:37:16.515410 update_engine[1799]: I20250430 13:37:16.515369 1799 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:37:16.515603 update_engine[1799]: I20250430 13:37:16.515563 1799 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 30 13:37:16.516088 update_engine[1799]: E20250430 13:37:16.516029 1799 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:37:16.516121 update_engine[1799]: I20250430 13:37:16.516101 1799 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 30 13:37:16.867791 systemd[1]: Started sshd@15-147.75.202.179:22-147.75.109.163:47238.service - OpenSSH per-connection server daemon (147.75.109.163:47238). Apr 30 13:37:16.900080 sshd[4785]: Accepted publickey for core from 147.75.109.163 port 47238 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:16.900874 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:16.903955 systemd-logind[1794]: New session 12 of user core. Apr 30 13:37:16.923361 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 13:37:17.031695 sshd[4787]: Connection closed by 147.75.109.163 port 47238 Apr 30 13:37:17.031874 sshd-session[4785]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:17.033532 systemd[1]: sshd@15-147.75.202.179:22-147.75.109.163:47238.service: Deactivated successfully. Apr 30 13:37:17.034504 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 13:37:17.035240 systemd-logind[1794]: Session 12 logged out. Waiting for processes to exit. Apr 30 13:37:17.035841 systemd-logind[1794]: Removed session 12. Apr 30 13:37:22.065199 systemd[1]: Started sshd@16-147.75.202.179:22-147.75.109.163:47250.service - OpenSSH per-connection server daemon (147.75.109.163:47250). Apr 30 13:37:22.095284 sshd[4816]: Accepted publickey for core from 147.75.109.163 port 47250 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:22.096124 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:22.099461 systemd-logind[1794]: New session 13 of user core. Apr 30 13:37:22.115227 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 13:37:22.208289 sshd[4818]: Connection closed by 147.75.109.163 port 47250 Apr 30 13:37:22.208509 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:22.210826 systemd[1]: sshd@16-147.75.202.179:22-147.75.109.163:47250.service: Deactivated successfully. Apr 30 13:37:22.211906 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 13:37:22.212456 systemd-logind[1794]: Session 13 logged out. Waiting for processes to exit. Apr 30 13:37:22.213113 systemd-logind[1794]: Removed session 13. Apr 30 13:37:26.481314 update_engine[1799]: I20250430 13:37:26.481141 1799 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:37:26.482171 update_engine[1799]: I20250430 13:37:26.481695 1799 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:37:26.482390 update_engine[1799]: I20250430 13:37:26.482290 1799 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 13:37:26.485420 update_engine[1799]: E20250430 13:37:26.485309 1799 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:37:26.485605 update_engine[1799]: I20250430 13:37:26.485479 1799 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 30 13:37:27.240349 systemd[1]: Started sshd@17-147.75.202.179:22-147.75.109.163:54236.service - OpenSSH per-connection server daemon (147.75.109.163:54236). 
Apr 30 13:37:27.270122 sshd[4844]: Accepted publickey for core from 147.75.109.163 port 54236 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:27.270904 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:27.273865 systemd-logind[1794]: New session 14 of user core. Apr 30 13:37:27.284293 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 13:37:27.370185 sshd[4846]: Connection closed by 147.75.109.163 port 54236 Apr 30 13:37:27.370356 sshd-session[4844]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:27.372231 systemd[1]: sshd@17-147.75.202.179:22-147.75.109.163:54236.service: Deactivated successfully. Apr 30 13:37:27.373096 systemd[1]: session-14.scope: Deactivated successfully. Apr 30 13:37:27.373523 systemd-logind[1794]: Session 14 logged out. Waiting for processes to exit. Apr 30 13:37:27.374002 systemd-logind[1794]: Removed session 14. Apr 30 13:37:32.406323 systemd[1]: Started sshd@18-147.75.202.179:22-147.75.109.163:54246.service - OpenSSH per-connection server daemon (147.75.109.163:54246). Apr 30 13:37:32.437198 sshd[4873]: Accepted publickey for core from 147.75.109.163 port 54246 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:32.440195 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:32.451310 systemd-logind[1794]: New session 15 of user core. Apr 30 13:37:32.463540 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 13:37:32.560711 sshd[4875]: Connection closed by 147.75.109.163 port 54246 Apr 30 13:37:32.560897 sshd-session[4873]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:32.562629 systemd[1]: sshd@18-147.75.202.179:22-147.75.109.163:54246.service: Deactivated successfully. Apr 30 13:37:32.563563 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 13:37:32.564273 systemd-logind[1794]: Session 15 logged out. Waiting for processes to exit. Apr 30 13:37:32.564844 systemd-logind[1794]: Removed session 15. Apr 30 13:37:36.482197 update_engine[1799]: I20250430 13:37:36.482000 1799 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:37:36.483142 update_engine[1799]: I20250430 13:37:36.482595 1799 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:37:36.483349 update_engine[1799]: I20250430 13:37:36.483250 1799 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 13:37:36.483858 update_engine[1799]: E20250430 13:37:36.483744 1799 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:37:36.484080 update_engine[1799]: I20250430 13:37:36.483880 1799 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 30 13:37:37.578756 systemd[1]: Started sshd@19-147.75.202.179:22-147.75.109.163:38104.service - OpenSSH per-connection server daemon (147.75.109.163:38104). Apr 30 13:37:37.612102 sshd[4901]: Accepted publickey for core from 147.75.109.163 port 38104 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:37.612893 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:37.616280 systemd-logind[1794]: New session 16 of user core. Apr 30 13:37:37.625175 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 30 13:37:37.715179 sshd[4903]: Connection closed by 147.75.109.163 port 38104 Apr 30 13:37:37.715373 sshd-session[4901]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:37.733690 systemd[1]: sshd@19-147.75.202.179:22-147.75.109.163:38104.service: Deactivated successfully. Apr 30 13:37:37.734800 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 13:37:37.735744 systemd-logind[1794]: Session 16 logged out. Waiting for processes to exit. Apr 30 13:37:37.736536 systemd[1]: Started sshd@20-147.75.202.179:22-147.75.109.163:38106.service - OpenSSH per-connection server daemon (147.75.109.163:38106). Apr 30 13:37:37.737146 systemd-logind[1794]: Removed session 16. Apr 30 13:37:37.776420 sshd[4928]: Accepted publickey for core from 147.75.109.163 port 38106 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:37.777465 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:37.781722 systemd-logind[1794]: New session 17 of user core. Apr 30 13:37:37.800364 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 13:37:38.000436 sshd[4933]: Connection closed by 147.75.109.163 port 38106 Apr 30 13:37:38.000729 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:38.012773 systemd[1]: sshd@20-147.75.202.179:22-147.75.109.163:38106.service: Deactivated successfully. Apr 30 13:37:38.014441 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 13:37:38.015572 systemd-logind[1794]: Session 17 logged out. Waiting for processes to exit. Apr 30 13:37:38.016711 systemd[1]: Started sshd@21-147.75.202.179:22-147.75.109.163:38120.service - OpenSSH per-connection server daemon (147.75.109.163:38120). Apr 30 13:37:38.017667 systemd-logind[1794]: Removed session 17. Apr 30 13:37:38.056476 sshd[4955]: Accepted publickey for core from 147.75.109.163 port 38120 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:38.057243 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:38.060590 systemd-logind[1794]: New session 18 of user core. Apr 30 13:37:38.087286 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 13:37:38.237183 sshd[4958]: Connection closed by 147.75.109.163 port 38120 Apr 30 13:37:38.237386 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:38.239050 systemd[1]: sshd@21-147.75.202.179:22-147.75.109.163:38120.service: Deactivated successfully. Apr 30 13:37:38.240037 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 13:37:38.240816 systemd-logind[1794]: Session 18 logged out. Waiting for processes to exit. Apr 30 13:37:38.241582 systemd-logind[1794]: Removed session 18. Apr 30 13:37:43.278347 systemd[1]: Started sshd@22-147.75.202.179:22-147.75.109.163:38134.service - OpenSSH per-connection server daemon (147.75.109.163:38134). Apr 30 13:37:43.308446 sshd[4984]: Accepted publickey for core from 147.75.109.163 port 38134 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:43.309205 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:43.312471 systemd-logind[1794]: New session 19 of user core. Apr 30 13:37:43.325244 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 30 13:37:43.412222 sshd[4986]: Connection closed by 147.75.109.163 port 38134 Apr 30 13:37:43.412406 sshd-session[4984]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:43.413976 systemd[1]: sshd@22-147.75.202.179:22-147.75.109.163:38134.service: Deactivated successfully. Apr 30 13:37:43.414902 systemd[1]: session-19.scope: Deactivated successfully. Apr 30 13:37:43.415670 systemd-logind[1794]: Session 19 logged out. Waiting for processes to exit. Apr 30 13:37:43.416371 systemd-logind[1794]: Removed session 19. Apr 30 13:37:46.480616 update_engine[1799]: I20250430 13:37:46.480441 1799 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:37:46.481470 update_engine[1799]: I20250430 13:37:46.480995 1799 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:37:46.481746 update_engine[1799]: I20250430 13:37:46.481633 1799 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 13:37:46.482150 update_engine[1799]: E20250430 13:37:46.482053 1799 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:37:46.482334 update_engine[1799]: I20250430 13:37:46.482159 1799 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 13:37:46.482334 update_engine[1799]: I20250430 13:37:46.482187 1799 omaha_request_action.cc:617] Omaha request response: Apr 30 13:37:46.482603 update_engine[1799]: E20250430 13:37:46.482352 1799 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 30 13:37:46.482603 update_engine[1799]: I20250430 13:37:46.482395 1799 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 30 13:37:46.482603 update_engine[1799]: I20250430 13:37:46.482414 1799 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 13:37:46.482603 update_engine[1799]: I20250430 13:37:46.482429 1799 update_attempter.cc:306] Processing Done. Apr 30 13:37:46.482603 update_engine[1799]: E20250430 13:37:46.482461 1799 update_attempter.cc:619] Update failed. Apr 30 13:37:46.482603 update_engine[1799]: I20250430 13:37:46.482478 1799 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 30 13:37:46.482603 update_engine[1799]: I20250430 13:37:46.482496 1799 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 30 13:37:46.482603 update_engine[1799]: I20250430 13:37:46.482511 1799 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 30 13:37:46.483273 update_engine[1799]: I20250430 13:37:46.482669 1799 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 30 13:37:46.483273 update_engine[1799]: I20250430 13:37:46.482729 1799 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 30 13:37:46.483273 update_engine[1799]: I20250430 13:37:46.482748 1799 omaha_request_action.cc:272] Request: Apr 30 13:37:46.483273 update_engine[1799]: Apr 30 13:37:46.483273 update_engine[1799]: Apr 30 13:37:46.483273 update_engine[1799]: Apr 30 13:37:46.483273 update_engine[1799]: Apr 30 13:37:46.483273 update_engine[1799]: Apr 30 13:37:46.483273 update_engine[1799]: Apr 30 13:37:46.483273 update_engine[1799]: I20250430 13:37:46.482765 1799 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 30 13:37:46.484052 update_engine[1799]: I20250430 13:37:46.483282 1799 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 30 13:37:46.484052 update_engine[1799]: I20250430 13:37:46.483744 1799 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 30 13:37:46.484246 locksmithd[1834]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 30 13:37:46.486048 update_engine[1799]: E20250430 13:37:46.485945 1799 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 30 13:37:46.486208 update_engine[1799]: I20250430 13:37:46.486110 1799 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 30 13:37:46.486208 update_engine[1799]: I20250430 13:37:46.486142 1799 omaha_request_action.cc:617] Omaha request response: Apr 30 13:37:46.486208 update_engine[1799]: I20250430 13:37:46.486160 1799 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 13:37:46.486208 update_engine[1799]: I20250430 13:37:46.486176 1799 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 30 13:37:46.486208 update_engine[1799]: I20250430 13:37:46.486191 1799 update_attempter.cc:306] Processing Done. Apr 30 13:37:46.486664 update_engine[1799]: I20250430 13:37:46.486210 1799 update_attempter.cc:310] Error event sent. Apr 30 13:37:46.486664 update_engine[1799]: I20250430 13:37:46.486236 1799 update_check_scheduler.cc:74] Next update check in 45m34s Apr 30 13:37:46.486966 locksmithd[1834]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 30 13:37:48.431298 systemd[1]: Started sshd@23-147.75.202.179:22-147.75.109.163:43610.service - OpenSSH per-connection server daemon (147.75.109.163:43610). Apr 30 13:37:48.466150 sshd[5010]: Accepted publickey for core from 147.75.109.163 port 43610 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:48.466964 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:48.470218 systemd-logind[1794]: New session 20 of user core. Apr 30 13:37:48.483220 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 30 13:37:48.574818 sshd[5012]: Connection closed by 147.75.109.163 port 43610 Apr 30 13:37:48.575033 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:48.605570 systemd[1]: sshd@23-147.75.202.179:22-147.75.109.163:43610.service: Deactivated successfully. Apr 30 13:37:48.607104 systemd[1]: session-20.scope: Deactivated successfully. 
Apr 30 13:37:48.608399 systemd-logind[1794]: Session 20 logged out. Waiting for processes to exit. Apr 30 13:37:48.609565 systemd[1]: Started sshd@24-147.75.202.179:22-147.75.109.163:43626.service - OpenSSH per-connection server daemon (147.75.109.163:43626). Apr 30 13:37:48.610583 systemd-logind[1794]: Removed session 20. Apr 30 13:37:48.666614 sshd[5036]: Accepted publickey for core from 147.75.109.163 port 43626 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:48.669591 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:48.681367 systemd-logind[1794]: New session 21 of user core. Apr 30 13:37:48.695388 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 30 13:37:48.915504 sshd[5040]: Connection closed by 147.75.109.163 port 43626 Apr 30 13:37:48.915715 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:48.929580 systemd[1]: sshd@24-147.75.202.179:22-147.75.109.163:43626.service: Deactivated successfully. Apr 30 13:37:48.930669 systemd[1]: session-21.scope: Deactivated successfully. Apr 30 13:37:48.931599 systemd-logind[1794]: Session 21 logged out. Waiting for processes to exit. Apr 30 13:37:48.932398 systemd[1]: Started sshd@25-147.75.202.179:22-147.75.109.163:43634.service - OpenSSH per-connection server daemon (147.75.109.163:43634). Apr 30 13:37:48.932979 systemd-logind[1794]: Removed session 21. Apr 30 13:37:48.969044 sshd[5064]: Accepted publickey for core from 147.75.109.163 port 43634 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:48.969886 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:48.973357 systemd-logind[1794]: New session 22 of user core. Apr 30 13:37:48.991476 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 30 13:37:49.881290 sshd[5068]: Connection closed by 147.75.109.163 port 43634 Apr 30 13:37:49.881640 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:49.898876 systemd[1]: sshd@25-147.75.202.179:22-147.75.109.163:43634.service: Deactivated successfully. Apr 30 13:37:49.900454 systemd[1]: session-22.scope: Deactivated successfully. Apr 30 13:37:49.901742 systemd-logind[1794]: Session 22 logged out. Waiting for processes to exit. Apr 30 13:37:49.903005 systemd[1]: Started sshd@26-147.75.202.179:22-147.75.109.163:43642.service - OpenSSH per-connection server daemon (147.75.109.163:43642). Apr 30 13:37:49.903892 systemd-logind[1794]: Removed session 22. Apr 30 13:37:49.949081 sshd[5097]: Accepted publickey for core from 147.75.109.163 port 43642 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:49.952278 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:49.964251 systemd-logind[1794]: New session 23 of user core. Apr 30 13:37:49.983457 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 30 13:37:50.187396 sshd[5103]: Connection closed by 147.75.109.163 port 43642 Apr 30 13:37:50.187545 sshd-session[5097]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:50.221302 systemd[1]: sshd@26-147.75.202.179:22-147.75.109.163:43642.service: Deactivated successfully. Apr 30 13:37:50.225180 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 13:37:50.228344 systemd-logind[1794]: Session 23 logged out. Waiting for processes to exit. 
Apr 30 13:37:50.246758 systemd[1]: Started sshd@27-147.75.202.179:22-147.75.109.163:43644.service - OpenSSH per-connection server daemon (147.75.109.163:43644). Apr 30 13:37:50.249443 systemd-logind[1794]: Removed session 23. Apr 30 13:37:50.307071 sshd[5126]: Accepted publickey for core from 147.75.109.163 port 43644 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:50.311003 sshd-session[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:50.322924 systemd-logind[1794]: New session 24 of user core. Apr 30 13:37:50.339437 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 13:37:50.473883 sshd[5131]: Connection closed by 147.75.109.163 port 43644 Apr 30 13:37:50.474089 sshd-session[5126]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:50.475817 systemd[1]: sshd@27-147.75.202.179:22-147.75.109.163:43644.service: Deactivated successfully. Apr 30 13:37:50.476734 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 13:37:50.477486 systemd-logind[1794]: Session 24 logged out. Waiting for processes to exit. Apr 30 13:37:50.478012 systemd-logind[1794]: Removed session 24. Apr 30 13:37:54.838204 systemd[1]: Started sshd@28-147.75.202.179:22-103.215.80.141:44646.service - OpenSSH per-connection server daemon (103.215.80.141:44646). Apr 30 13:37:55.508346 systemd[1]: Started sshd@29-147.75.202.179:22-147.75.109.163:43650.service - OpenSSH per-connection server daemon (147.75.109.163:43650). Apr 30 13:37:55.508749 sshd[5159]: Invalid user kuba from 103.215.80.141 port 44646 Apr 30 13:37:55.538374 sshd[5162]: Accepted publickey for core from 147.75.109.163 port 43650 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:37:55.539137 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:37:55.541987 systemd-logind[1794]: New session 25 of user core. Apr 30 13:37:55.557493 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 13:37:55.656101 sshd[5164]: Connection closed by 147.75.109.163 port 43650 Apr 30 13:37:55.656317 sshd-session[5162]: pam_unix(sshd:session): session closed for user core Apr 30 13:37:55.658205 systemd[1]: sshd@29-147.75.202.179:22-147.75.109.163:43650.service: Deactivated successfully. Apr 30 13:37:55.659326 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 13:37:55.660199 systemd-logind[1794]: Session 25 logged out. Waiting for processes to exit. Apr 30 13:37:55.660897 systemd-logind[1794]: Removed session 25. Apr 30 13:37:55.670314 sshd[5159]: Received disconnect from 103.215.80.141 port 44646:11: Bye Bye [preauth] Apr 30 13:37:55.670314 sshd[5159]: Disconnected from invalid user kuba 103.215.80.141 port 44646 [preauth] Apr 30 13:37:55.671491 systemd[1]: sshd@28-147.75.202.179:22-103.215.80.141:44646.service: Deactivated successfully. Apr 30 13:38:00.697716 systemd[1]: Started sshd@30-147.75.202.179:22-147.75.109.163:42358.service - OpenSSH per-connection server daemon (147.75.109.163:42358). Apr 30 13:38:00.762335 sshd[5192]: Accepted publickey for core from 147.75.109.163 port 42358 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:38:00.765612 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:38:00.776683 systemd-logind[1794]: New session 26 of user core. Apr 30 13:38:00.792453 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 30 13:38:00.881164 sshd[5194]: Connection closed by 147.75.109.163 port 42358 Apr 30 13:38:00.881354 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Apr 30 13:38:00.882940 systemd[1]: sshd@30-147.75.202.179:22-147.75.109.163:42358.service: Deactivated successfully. Apr 30 13:38:00.883872 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 13:38:00.884612 systemd-logind[1794]: Session 26 logged out. Waiting for processes to exit. Apr 30 13:38:00.885244 systemd-logind[1794]: Removed session 26. Apr 30 13:38:05.906771 systemd[1]: Started sshd@31-147.75.202.179:22-147.75.109.163:42366.service - OpenSSH per-connection server daemon (147.75.109.163:42366). Apr 30 13:38:05.939978 sshd[5218]: Accepted publickey for core from 147.75.109.163 port 42366 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:38:05.943705 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:38:05.956088 systemd-logind[1794]: New session 27 of user core. Apr 30 13:38:05.973432 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 30 13:38:06.069341 sshd[5220]: Connection closed by 147.75.109.163 port 42366 Apr 30 13:38:06.069581 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Apr 30 13:38:06.089619 systemd[1]: sshd@31-147.75.202.179:22-147.75.109.163:42366.service: Deactivated successfully. Apr 30 13:38:06.090624 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 13:38:06.091504 systemd-logind[1794]: Session 27 logged out. Waiting for processes to exit. Apr 30 13:38:06.092381 systemd[1]: Started sshd@32-147.75.202.179:22-147.75.109.163:42374.service - OpenSSH per-connection server daemon (147.75.109.163:42374). Apr 30 13:38:06.092962 systemd-logind[1794]: Removed session 27. Apr 30 13:38:06.127953 sshd[5243]: Accepted publickey for core from 147.75.109.163 port 42374 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:38:06.128840 sshd-session[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:38:06.132469 systemd-logind[1794]: New session 28 of user core. Apr 30 13:38:06.154548 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 30 13:38:07.525695 containerd[1804]: time="2025-04-30T13:38:07.525611128Z" level=info msg="StopContainer for \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\" with timeout 30 (s)" Apr 30 13:38:07.526733 containerd[1804]: time="2025-04-30T13:38:07.526363910Z" level=info msg="Stop container \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\" with signal terminated" Apr 30 13:38:07.550182 systemd[1]: cri-containerd-ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44.scope: Deactivated successfully. 
Apr 30 13:38:07.565625 containerd[1804]: time="2025-04-30T13:38:07.565584108Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 13:38:07.571583 containerd[1804]: time="2025-04-30T13:38:07.571568324Z" level=info msg="StopContainer for \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\" with timeout 2 (s)" Apr 30 13:38:07.571849 containerd[1804]: time="2025-04-30T13:38:07.571690674Z" level=info msg="Stop container \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\" with signal terminated" Apr 30 13:38:07.571952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44-rootfs.mount: Deactivated successfully. Apr 30 13:38:07.574882 systemd-networkd[1724]: lxc_health: Link DOWN Apr 30 13:38:07.574884 systemd-networkd[1724]: lxc_health: Lost carrier Apr 30 13:38:07.599796 containerd[1804]: time="2025-04-30T13:38:07.599728802Z" level=info msg="shim disconnected" id=ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44 namespace=k8s.io Apr 30 13:38:07.599796 containerd[1804]: time="2025-04-30T13:38:07.599760002Z" level=warning msg="cleaning up after shim disconnected" id=ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44 namespace=k8s.io Apr 30 13:38:07.599796 containerd[1804]: time="2025-04-30T13:38:07.599767756Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:38:07.608071 containerd[1804]: time="2025-04-30T13:38:07.608021114Z" level=info msg="StopContainer for \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\" returns successfully" Apr 30 13:38:07.608495 containerd[1804]: time="2025-04-30T13:38:07.608442151Z" level=info msg="StopPodSandbox for \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\"" Apr 30 13:38:07.608539 containerd[1804]: time="2025-04-30T13:38:07.608476456Z" level=info msg="Container to stop \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:38:07.610074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8-shm.mount: Deactivated successfully. Apr 30 13:38:07.612616 systemd[1]: cri-containerd-3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8.scope: Deactivated successfully. Apr 30 13:38:07.620085 systemd[1]: cri-containerd-beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78.scope: Deactivated successfully. Apr 30 13:38:07.620371 systemd[1]: cri-containerd-beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78.scope: Consumed 6.535s CPU time, 162.9M memory peak, 144K read from disk, 13.3M written to disk. Apr 30 13:38:07.626705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8-rootfs.mount: Deactivated successfully. 
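The cni reload failure above is containerd reacting to the removal of /etc/cni/net.d/05-cilium.conf while the Cilium agent is shut down: with nothing loadable left in that directory the CRI plugin stays uninitialized until a new config appears (the same condition later surfaces as kubelet's "Container runtime network not ready" message). A small sketch, assuming the directory named in the error, that checks whether any loadable CNI configuration remains:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    const dir = "/etc/cni/net.d" // directory named in the containerd error above
    entries, err := os.ReadDir(dir)
    if err != nil {
        fmt.Println("cannot read CNI config dir:", err)
        return
    }
    found := 0
    for _, e := range entries {
        switch filepath.Ext(e.Name()) {
        case ".conf", ".conflist", ".json": // extensions containerd's CNI loader accepts
            fmt.Println("config present:", e.Name())
            found++
        }
    }
    if found == 0 {
        fmt.Println("no network config found in", dir, "- runtime will report NetworkPluginNotReady")
    }
}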
Apr 30 13:38:07.639601 containerd[1804]: time="2025-04-30T13:38:07.639561964Z" level=info msg="shim disconnected" id=3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8 namespace=k8s.io Apr 30 13:38:07.639601 containerd[1804]: time="2025-04-30T13:38:07.639600106Z" level=warning msg="cleaning up after shim disconnected" id=3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8 namespace=k8s.io Apr 30 13:38:07.639697 containerd[1804]: time="2025-04-30T13:38:07.639606115Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:38:07.645989 containerd[1804]: time="2025-04-30T13:38:07.645941499Z" level=info msg="TearDown network for sandbox \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\" successfully" Apr 30 13:38:07.645989 containerd[1804]: time="2025-04-30T13:38:07.645957157Z" level=info msg="StopPodSandbox for \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\" returns successfully" Apr 30 13:38:07.653326 containerd[1804]: time="2025-04-30T13:38:07.653298288Z" level=info msg="shim disconnected" id=beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78 namespace=k8s.io Apr 30 13:38:07.653326 containerd[1804]: time="2025-04-30T13:38:07.653324176Z" level=warning msg="cleaning up after shim disconnected" id=beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78 namespace=k8s.io Apr 30 13:38:07.653326 containerd[1804]: time="2025-04-30T13:38:07.653329167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:38:07.660843 containerd[1804]: time="2025-04-30T13:38:07.660795387Z" level=info msg="StopContainer for \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\" returns successfully" Apr 30 13:38:07.661138 containerd[1804]: time="2025-04-30T13:38:07.661098721Z" level=info msg="StopPodSandbox for \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\"" Apr 30 13:38:07.661138 containerd[1804]: time="2025-04-30T13:38:07.661118856Z" level=info msg="Container to stop \"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:38:07.661199 containerd[1804]: time="2025-04-30T13:38:07.661141349Z" level=info msg="Container to stop \"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:38:07.661199 containerd[1804]: time="2025-04-30T13:38:07.661147069Z" level=info msg="Container to stop \"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:38:07.661199 containerd[1804]: time="2025-04-30T13:38:07.661152298Z" level=info msg="Container to stop \"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:38:07.661199 containerd[1804]: time="2025-04-30T13:38:07.661157301Z" level=info msg="Container to stop \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 13:38:07.664340 systemd[1]: cri-containerd-87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4.scope: Deactivated successfully. 
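Each "Container to stop" line above names one already-exited container belonging to the 87b26d46… sandbox being torn down. A sketch of pulling those IDs out of such a message with a regular expression; the input string is abbreviated here rather than copied in full:

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // Abbreviated copy of the messages containerd emits before StopPodSandbox.
    line := `Container to stop "d3e484c8..." must be in running or unknown state ` +
        `Container to stop "beb612c2..." must be in running or unknown state`

    // Real container IDs are 64 hex characters; the samples above are shortened.
    // The optional backslash also matches the escaped quotes (\") in the journal text.
    re := regexp.MustCompile(`Container to stop \\?"([0-9a-f]{8,64})`)
    for _, m := range re.FindAllStringSubmatch(line, -1) {
        fmt.Println("exited container:", m[1])
    }
}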
Apr 30 13:38:07.674808 containerd[1804]: time="2025-04-30T13:38:07.674745032Z" level=info msg="shim disconnected" id=87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4 namespace=k8s.io Apr 30 13:38:07.674808 containerd[1804]: time="2025-04-30T13:38:07.674775718Z" level=warning msg="cleaning up after shim disconnected" id=87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4 namespace=k8s.io Apr 30 13:38:07.674808 containerd[1804]: time="2025-04-30T13:38:07.674780948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:38:07.681062 containerd[1804]: time="2025-04-30T13:38:07.681029047Z" level=info msg="TearDown network for sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" successfully" Apr 30 13:38:07.681062 containerd[1804]: time="2025-04-30T13:38:07.681047434Z" level=info msg="StopPodSandbox for \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" returns successfully" Apr 30 13:38:07.819447 kubelet[3135]: I0430 13:38:07.819208 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-host-proc-sys-kernel\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.819447 kubelet[3135]: I0430 13:38:07.819305 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-lib-modules\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.819447 kubelet[3135]: I0430 13:38:07.819362 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-bpf-maps\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.819447 kubelet[3135]: I0430 13:38:07.819424 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hp2s\" (UniqueName: \"kubernetes.io/projected/163254bd-bdd1-42dd-bd6e-355683c30e48-kube-api-access-2hp2s\") pod \"163254bd-bdd1-42dd-bd6e-355683c30e48\" (UID: \"163254bd-bdd1-42dd-bd6e-355683c30e48\") " Apr 30 13:38:07.819447 kubelet[3135]: I0430 13:38:07.819410 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 13:38:07.821400 kubelet[3135]: I0430 13:38:07.819471 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cni-path\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.821400 kubelet[3135]: I0430 13:38:07.819494 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 13:38:07.821400 kubelet[3135]: I0430 13:38:07.819523 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-clustermesh-secrets\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.821400 kubelet[3135]: I0430 13:38:07.819599 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 13:38:07.821400 kubelet[3135]: I0430 13:38:07.819627 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cni-path" (OuterVolumeSpecName: "cni-path") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 13:38:07.821896 kubelet[3135]: I0430 13:38:07.819666 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zs6l\" (UniqueName: \"kubernetes.io/projected/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-kube-api-access-5zs6l\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.821896 kubelet[3135]: I0430 13:38:07.819837 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-host-proc-sys-net\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.821896 kubelet[3135]: I0430 13:38:07.819949 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/163254bd-bdd1-42dd-bd6e-355683c30e48-cilium-config-path\") pod \"163254bd-bdd1-42dd-bd6e-355683c30e48\" (UID: \"163254bd-bdd1-42dd-bd6e-355683c30e48\") " Apr 30 13:38:07.821896 kubelet[3135]: I0430 13:38:07.819969 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 13:38:07.821896 kubelet[3135]: I0430 13:38:07.820072 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-cgroup\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.821896 kubelet[3135]: I0430 13:38:07.820199 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-config-path\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.822502 kubelet[3135]: I0430 13:38:07.820178 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 13:38:07.822502 kubelet[3135]: I0430 13:38:07.820290 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-hostproc\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.822502 kubelet[3135]: I0430 13:38:07.820376 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-run\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.822502 kubelet[3135]: I0430 13:38:07.820392 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-hostproc" (OuterVolumeSpecName: "hostproc") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 13:38:07.822502 kubelet[3135]: I0430 13:38:07.820465 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-hubble-tls\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.823087 kubelet[3135]: I0430 13:38:07.820519 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 13:38:07.823087 kubelet[3135]: I0430 13:38:07.820551 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-etc-cni-netd\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.823087 kubelet[3135]: I0430 13:38:07.820630 3135 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-xtables-lock\") pod \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\" (UID: \"245cdc3f-5ba4-4e08-b454-20b6222d4d5b\") " Apr 30 13:38:07.823087 kubelet[3135]: I0430 13:38:07.820658 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 13:38:07.823087 kubelet[3135]: I0430 13:38:07.820769 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 30 13:38:07.823087 kubelet[3135]: I0430 13:38:07.820812 3135 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-hostproc\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.823653 kubelet[3135]: I0430 13:38:07.820877 3135 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-run\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.823653 kubelet[3135]: I0430 13:38:07.820937 3135 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-host-proc-sys-kernel\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.823653 kubelet[3135]: I0430 13:38:07.820989 3135 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-lib-modules\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.823653 kubelet[3135]: I0430 13:38:07.821060 3135 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-bpf-maps\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.823653 kubelet[3135]: I0430 13:38:07.821106 3135 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cni-path\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.823653 kubelet[3135]: I0430 13:38:07.821150 3135 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-cgroup\") on node 
\"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.823653 kubelet[3135]: I0430 13:38:07.821198 3135 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-host-proc-sys-net\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.825876 kubelet[3135]: I0430 13:38:07.825795 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163254bd-bdd1-42dd-bd6e-355683c30e48-kube-api-access-2hp2s" (OuterVolumeSpecName: "kube-api-access-2hp2s") pod "163254bd-bdd1-42dd-bd6e-355683c30e48" (UID: "163254bd-bdd1-42dd-bd6e-355683c30e48"). InnerVolumeSpecName "kube-api-access-2hp2s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 13:38:07.826134 kubelet[3135]: I0430 13:38:07.825903 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-kube-api-access-5zs6l" (OuterVolumeSpecName: "kube-api-access-5zs6l") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "kube-api-access-5zs6l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 13:38:07.826734 kubelet[3135]: I0430 13:38:07.826630 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 30 13:38:07.826902 kubelet[3135]: I0430 13:38:07.826783 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 30 13:38:07.827295 kubelet[3135]: I0430 13:38:07.827054 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "245cdc3f-5ba4-4e08-b454-20b6222d4d5b" (UID: "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 13:38:07.827295 kubelet[3135]: I0430 13:38:07.827189 3135 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/163254bd-bdd1-42dd-bd6e-355683c30e48-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "163254bd-bdd1-42dd-bd6e-355683c30e48" (UID: "163254bd-bdd1-42dd-bd6e-355683c30e48"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 30 13:38:07.922409 kubelet[3135]: I0430 13:38:07.922298 3135 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-clustermesh-secrets\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.922409 kubelet[3135]: I0430 13:38:07.922373 3135 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5zs6l\" (UniqueName: \"kubernetes.io/projected/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-kube-api-access-5zs6l\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.922409 kubelet[3135]: I0430 13:38:07.922408 3135 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2hp2s\" (UniqueName: \"kubernetes.io/projected/163254bd-bdd1-42dd-bd6e-355683c30e48-kube-api-access-2hp2s\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.922856 kubelet[3135]: I0430 13:38:07.922439 3135 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/163254bd-bdd1-42dd-bd6e-355683c30e48-cilium-config-path\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.922856 kubelet[3135]: I0430 13:38:07.922467 3135 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-cilium-config-path\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.922856 kubelet[3135]: I0430 13:38:07.922495 3135 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-hubble-tls\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.922856 kubelet[3135]: I0430 13:38:07.922522 3135 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-etc-cni-netd\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:07.922856 kubelet[3135]: I0430 13:38:07.922549 3135 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/245cdc3f-5ba4-4e08-b454-20b6222d4d5b-xtables-lock\") on node \"ci-4230.1.1-a-aaf56335e8\" DevicePath \"\"" Apr 30 13:38:08.080035 systemd[1]: Removed slice kubepods-burstable-pod245cdc3f_5ba4_4e08_b454_20b6222d4d5b.slice - libcontainer container kubepods-burstable-pod245cdc3f_5ba4_4e08_b454_20b6222d4d5b.slice. Apr 30 13:38:08.080143 systemd[1]: kubepods-burstable-pod245cdc3f_5ba4_4e08_b454_20b6222d4d5b.slice: Consumed 6.578s CPU time, 163.5M memory peak, 144K read from disk, 13.3M written to disk. Apr 30 13:38:08.080913 systemd[1]: Removed slice kubepods-besteffort-pod163254bd_bdd1_42dd_bd6e_355683c30e48.slice - libcontainer container kubepods-besteffort-pod163254bd_bdd1_42dd_bd6e_355683c30e48.slice. 
Apr 30 13:38:08.197784 kubelet[3135]: I0430 13:38:08.197661 3135 scope.go:117] "RemoveContainer" containerID="ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44" Apr 30 13:38:08.200550 containerd[1804]: time="2025-04-30T13:38:08.200467738Z" level=info msg="RemoveContainer for \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\"" Apr 30 13:38:08.202847 containerd[1804]: time="2025-04-30T13:38:08.202819113Z" level=info msg="RemoveContainer for \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\" returns successfully" Apr 30 13:38:08.202944 kubelet[3135]: I0430 13:38:08.202934 3135 scope.go:117] "RemoveContainer" containerID="ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44" Apr 30 13:38:08.203037 containerd[1804]: time="2025-04-30T13:38:08.203022013Z" level=error msg="ContainerStatus for \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\": not found" Apr 30 13:38:08.203081 kubelet[3135]: E0430 13:38:08.203070 3135 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\": not found" containerID="ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44" Apr 30 13:38:08.203121 kubelet[3135]: I0430 13:38:08.203085 3135 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44"} err="failed to get container status \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffc45a7848d47361dfa3f713a23860c2da3a57a500f89a12b2e4fddc03883e44\": not found" Apr 30 13:38:08.203155 kubelet[3135]: I0430 13:38:08.203122 3135 scope.go:117] "RemoveContainer" containerID="beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78" Apr 30 13:38:08.203557 containerd[1804]: time="2025-04-30T13:38:08.203545335Z" level=info msg="RemoveContainer for \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\"" Apr 30 13:38:08.205237 containerd[1804]: time="2025-04-30T13:38:08.205221743Z" level=info msg="RemoveContainer for \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\" returns successfully" Apr 30 13:38:08.205362 kubelet[3135]: I0430 13:38:08.205321 3135 scope.go:117] "RemoveContainer" containerID="a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654" Apr 30 13:38:08.205775 containerd[1804]: time="2025-04-30T13:38:08.205763872Z" level=info msg="RemoveContainer for \"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654\"" Apr 30 13:38:08.206937 containerd[1804]: time="2025-04-30T13:38:08.206925303Z" level=info msg="RemoveContainer for \"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654\" returns successfully" Apr 30 13:38:08.207020 kubelet[3135]: I0430 13:38:08.207004 3135 scope.go:117] "RemoveContainer" containerID="f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72" Apr 30 13:38:08.207455 containerd[1804]: time="2025-04-30T13:38:08.207443511Z" level=info msg="RemoveContainer for \"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72\"" Apr 30 13:38:08.208494 containerd[1804]: time="2025-04-30T13:38:08.208460722Z" level=info 
msg="RemoveContainer for \"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72\" returns successfully" Apr 30 13:38:08.208525 kubelet[3135]: I0430 13:38:08.208514 3135 scope.go:117] "RemoveContainer" containerID="3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9" Apr 30 13:38:08.209080 containerd[1804]: time="2025-04-30T13:38:08.209068396Z" level=info msg="RemoveContainer for \"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9\"" Apr 30 13:38:08.210126 containerd[1804]: time="2025-04-30T13:38:08.210116311Z" level=info msg="RemoveContainer for \"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9\" returns successfully" Apr 30 13:38:08.210181 kubelet[3135]: I0430 13:38:08.210174 3135 scope.go:117] "RemoveContainer" containerID="d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0" Apr 30 13:38:08.210802 containerd[1804]: time="2025-04-30T13:38:08.210787550Z" level=info msg="RemoveContainer for \"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0\"" Apr 30 13:38:08.211853 containerd[1804]: time="2025-04-30T13:38:08.211840086Z" level=info msg="RemoveContainer for \"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0\" returns successfully" Apr 30 13:38:08.211928 kubelet[3135]: I0430 13:38:08.211916 3135 scope.go:117] "RemoveContainer" containerID="beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78" Apr 30 13:38:08.212052 containerd[1804]: time="2025-04-30T13:38:08.212031293Z" level=error msg="ContainerStatus for \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\": not found" Apr 30 13:38:08.212111 kubelet[3135]: E0430 13:38:08.212101 3135 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\": not found" containerID="beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78" Apr 30 13:38:08.212136 kubelet[3135]: I0430 13:38:08.212116 3135 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78"} err="failed to get container status \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\": rpc error: code = NotFound desc = an error occurred when try to find container \"beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78\": not found" Apr 30 13:38:08.212136 kubelet[3135]: I0430 13:38:08.212128 3135 scope.go:117] "RemoveContainer" containerID="a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654" Apr 30 13:38:08.212211 containerd[1804]: time="2025-04-30T13:38:08.212198821Z" level=error msg="ContainerStatus for \"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654\": not found" Apr 30 13:38:08.212285 kubelet[3135]: E0430 13:38:08.212276 3135 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654\": not found" containerID="a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654" 
Apr 30 13:38:08.212305 kubelet[3135]: I0430 13:38:08.212289 3135 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654"} err="failed to get container status \"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6dbd4dd84444f292a27de07b51bf6591b24ebf897ff9daaf5f37c7a4bc10654\": not found" Apr 30 13:38:08.212305 kubelet[3135]: I0430 13:38:08.212299 3135 scope.go:117] "RemoveContainer" containerID="f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72" Apr 30 13:38:08.212388 containerd[1804]: time="2025-04-30T13:38:08.212376303Z" level=error msg="ContainerStatus for \"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72\": not found" Apr 30 13:38:08.212456 kubelet[3135]: E0430 13:38:08.212449 3135 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72\": not found" containerID="f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72" Apr 30 13:38:08.212474 kubelet[3135]: I0430 13:38:08.212460 3135 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72"} err="failed to get container status \"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72\": rpc error: code = NotFound desc = an error occurred when try to find container \"f042e2924f984171fd10710fc7aa016a952a7be4738b02604b46d9be726fca72\": not found" Apr 30 13:38:08.212474 kubelet[3135]: I0430 13:38:08.212468 3135 scope.go:117] "RemoveContainer" containerID="3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9" Apr 30 13:38:08.212548 containerd[1804]: time="2025-04-30T13:38:08.212535750Z" level=error msg="ContainerStatus for \"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9\": not found" Apr 30 13:38:08.212593 kubelet[3135]: E0430 13:38:08.212587 3135 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9\": not found" containerID="3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9" Apr 30 13:38:08.212614 kubelet[3135]: I0430 13:38:08.212596 3135 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9"} err="failed to get container status \"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bfa4c3068ce4e5d68617bb8b3af52e4c647ecdb93c5c0ea633c72e9ef75dcc9\": not found" Apr 30 13:38:08.212614 kubelet[3135]: I0430 13:38:08.212603 3135 scope.go:117] "RemoveContainer" containerID="d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0" Apr 30 13:38:08.212665 containerd[1804]: 
time="2025-04-30T13:38:08.212654909Z" level=error msg="ContainerStatus for \"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0\": not found" Apr 30 13:38:08.212710 kubelet[3135]: E0430 13:38:08.212703 3135 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0\": not found" containerID="d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0" Apr 30 13:38:08.212728 kubelet[3135]: I0430 13:38:08.212714 3135 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0"} err="failed to get container status \"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3e484c84645f631d914f397aa386c7c20fce61c69362de116b1dce76237dfa0\": not found" Apr 30 13:38:08.547794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-beb612c265991efd3fd63039164b112c10494c3eec5419c2f830d75127206f78-rootfs.mount: Deactivated successfully. Apr 30 13:38:08.547853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4-rootfs.mount: Deactivated successfully. Apr 30 13:38:08.547891 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4-shm.mount: Deactivated successfully. Apr 30 13:38:08.547972 systemd[1]: var-lib-kubelet-pods-163254bd\x2dbdd1\x2d42dd\x2dbd6e\x2d355683c30e48-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2hp2s.mount: Deactivated successfully. Apr 30 13:38:08.548015 systemd[1]: var-lib-kubelet-pods-245cdc3f\x2d5ba4\x2d4e08\x2db454\x2d20b6222d4d5b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5zs6l.mount: Deactivated successfully. Apr 30 13:38:08.548105 systemd[1]: var-lib-kubelet-pods-245cdc3f\x2d5ba4\x2d4e08\x2db454\x2d20b6222d4d5b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 13:38:08.548143 systemd[1]: var-lib-kubelet-pods-245cdc3f\x2d5ba4\x2d4e08\x2db454\x2d20b6222d4d5b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 13:38:09.474662 sshd[5246]: Connection closed by 147.75.109.163 port 42374 Apr 30 13:38:09.475962 sshd-session[5243]: pam_unix(sshd:session): session closed for user core Apr 30 13:38:09.501073 systemd[1]: sshd@32-147.75.202.179:22-147.75.109.163:42374.service: Deactivated successfully. Apr 30 13:38:09.504995 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 13:38:09.507255 systemd-logind[1794]: Session 28 logged out. Waiting for processes to exit. Apr 30 13:38:09.530896 systemd[1]: Started sshd@33-147.75.202.179:22-147.75.109.163:54668.service - OpenSSH per-connection server daemon (147.75.109.163:54668). Apr 30 13:38:09.534118 systemd-logind[1794]: Removed session 28. 
Apr 30 13:38:09.565689 sshd[5418]: Accepted publickey for core from 147.75.109.163 port 54668 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:38:09.568646 sshd-session[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:38:09.581647 systemd-logind[1794]: New session 29 of user core. Apr 30 13:38:09.594411 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 30 13:38:09.984090 sshd[5421]: Connection closed by 147.75.109.163 port 54668 Apr 30 13:38:09.984862 sshd-session[5418]: pam_unix(sshd:session): session closed for user core Apr 30 13:38:10.003556 kubelet[3135]: I0430 13:38:10.003479 3135 memory_manager.go:355] "RemoveStaleState removing state" podUID="245cdc3f-5ba4-4e08-b454-20b6222d4d5b" containerName="cilium-agent" Apr 30 13:38:10.003556 kubelet[3135]: I0430 13:38:10.003555 3135 memory_manager.go:355] "RemoveStaleState removing state" podUID="163254bd-bdd1-42dd-bd6e-355683c30e48" containerName="cilium-operator" Apr 30 13:38:10.011934 systemd[1]: sshd@33-147.75.202.179:22-147.75.109.163:54668.service: Deactivated successfully. Apr 30 13:38:10.016218 systemd[1]: session-29.scope: Deactivated successfully. Apr 30 13:38:10.018232 systemd-logind[1794]: Session 29 logged out. Waiting for processes to exit. Apr 30 13:38:10.020540 systemd-logind[1794]: Removed session 29. Apr 30 13:38:10.039674 systemd[1]: Started sshd@34-147.75.202.179:22-147.75.109.163:54680.service - OpenSSH per-connection server daemon (147.75.109.163:54680). Apr 30 13:38:10.045499 systemd[1]: Created slice kubepods-burstable-pod80c8eb88_109e_408d_a47d_a434deec2b83.slice - libcontainer container kubepods-burstable-pod80c8eb88_109e_408d_a47d_a434deec2b83.slice. Apr 30 13:38:10.070096 kubelet[3135]: I0430 13:38:10.070069 3135 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163254bd-bdd1-42dd-bd6e-355683c30e48" path="/var/lib/kubelet/pods/163254bd-bdd1-42dd-bd6e-355683c30e48/volumes" Apr 30 13:38:10.070438 kubelet[3135]: I0430 13:38:10.070424 3135 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="245cdc3f-5ba4-4e08-b454-20b6222d4d5b" path="/var/lib/kubelet/pods/245cdc3f-5ba4-4e08-b454-20b6222d4d5b/volumes" Apr 30 13:38:10.088196 sshd[5444]: Accepted publickey for core from 147.75.109.163 port 54680 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:38:10.091537 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:38:10.102816 systemd-logind[1794]: New session 30 of user core. Apr 30 13:38:10.120427 systemd[1]: Started session-30.scope - Session 30 of User core. 
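With both pods gone, kubelet also reports that their orphaned volume directories under /var/lib/kubelet/pods/<uid>/volumes were cleaned up. A sketch, reusing the UID printed above, of checking whether such a directory still holds per-plugin mount directories before it can be reclaimed:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    // UID taken from the "Cleaned up orphaned pod volumes dir" entries above.
    podUID := "245cdc3f-5ba4-4e08-b454-20b6222d4d5b"
    dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")

    entries, err := os.ReadDir(dir)
    if os.IsNotExist(err) {
        fmt.Println("volumes dir already reclaimed:", dir)
        return
    }
    if err != nil {
        fmt.Println("cannot inspect", dir, ":", err)
        return
    }
    // Each entry is a plugin directory (kubernetes.io~projected, kubernetes.io~secret, ...).
    for _, e := range entries {
        fmt.Println("still mounted plugin dir:", e.Name())
    }
}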
Apr 30 13:38:10.139338 kubelet[3135]: I0430 13:38:10.139274 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80c8eb88-109e-408d-a47d-a434deec2b83-cilium-run\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.139512 kubelet[3135]: I0430 13:38:10.139358 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80c8eb88-109e-408d-a47d-a434deec2b83-xtables-lock\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.139512 kubelet[3135]: I0430 13:38:10.139481 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/80c8eb88-109e-408d-a47d-a434deec2b83-cilium-ipsec-secrets\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.139772 kubelet[3135]: I0430 13:38:10.139534 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6rm4\" (UniqueName: \"kubernetes.io/projected/80c8eb88-109e-408d-a47d-a434deec2b83-kube-api-access-k6rm4\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.139772 kubelet[3135]: I0430 13:38:10.139665 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80c8eb88-109e-408d-a47d-a434deec2b83-hubble-tls\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.139989 kubelet[3135]: I0430 13:38:10.139783 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80c8eb88-109e-408d-a47d-a434deec2b83-cni-path\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.139989 kubelet[3135]: I0430 13:38:10.139849 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80c8eb88-109e-408d-a47d-a434deec2b83-host-proc-sys-kernel\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.139989 kubelet[3135]: I0430 13:38:10.139916 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80c8eb88-109e-408d-a47d-a434deec2b83-cilium-cgroup\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.139989 kubelet[3135]: I0430 13:38:10.139977 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80c8eb88-109e-408d-a47d-a434deec2b83-etc-cni-netd\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.140445 kubelet[3135]: I0430 13:38:10.140103 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/80c8eb88-109e-408d-a47d-a434deec2b83-hostproc\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.140445 kubelet[3135]: I0430 13:38:10.140177 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80c8eb88-109e-408d-a47d-a434deec2b83-cilium-config-path\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.140445 kubelet[3135]: I0430 13:38:10.140226 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80c8eb88-109e-408d-a47d-a434deec2b83-host-proc-sys-net\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.140445 kubelet[3135]: I0430 13:38:10.140274 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80c8eb88-109e-408d-a47d-a434deec2b83-bpf-maps\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.140445 kubelet[3135]: I0430 13:38:10.140315 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80c8eb88-109e-408d-a47d-a434deec2b83-lib-modules\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.140445 kubelet[3135]: I0430 13:38:10.140353 3135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80c8eb88-109e-408d-a47d-a434deec2b83-clustermesh-secrets\") pod \"cilium-74958\" (UID: \"80c8eb88-109e-408d-a47d-a434deec2b83\") " pod="kube-system/cilium-74958" Apr 30 13:38:10.180836 sshd[5447]: Connection closed by 147.75.109.163 port 54680 Apr 30 13:38:10.181655 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Apr 30 13:38:10.204653 systemd[1]: sshd@34-147.75.202.179:22-147.75.109.163:54680.service: Deactivated successfully. Apr 30 13:38:10.208677 systemd[1]: session-30.scope: Deactivated successfully. Apr 30 13:38:10.210878 systemd-logind[1794]: Session 30 logged out. Waiting for processes to exit. Apr 30 13:38:10.230818 systemd[1]: Started sshd@35-147.75.202.179:22-147.75.109.163:54696.service - OpenSSH per-connection server daemon (147.75.109.163:54696). Apr 30 13:38:10.233417 systemd-logind[1794]: Removed session 30. Apr 30 13:38:10.293962 sshd[5453]: Accepted publickey for core from 147.75.109.163 port 54696 ssh2: RSA SHA256:seTT0A3BCJ07Wpm/bsogaVpkx5ykDeg93RjVoABI290 Apr 30 13:38:10.295370 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:38:10.299703 systemd-logind[1794]: New session 31 of user core. Apr 30 13:38:10.321405 systemd[1]: Started session-31.scope - Session 31 of User core. 
Apr 30 13:38:10.351320 containerd[1804]: time="2025-04-30T13:38:10.351209545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-74958,Uid:80c8eb88-109e-408d-a47d-a434deec2b83,Namespace:kube-system,Attempt:0,}" Apr 30 13:38:10.361237 containerd[1804]: time="2025-04-30T13:38:10.361158850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:38:10.361237 containerd[1804]: time="2025-04-30T13:38:10.361193948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:38:10.361237 containerd[1804]: time="2025-04-30T13:38:10.361217356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:38:10.361462 containerd[1804]: time="2025-04-30T13:38:10.361403487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:38:10.382261 systemd[1]: Started cri-containerd-7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de.scope - libcontainer container 7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de. Apr 30 13:38:10.398349 containerd[1804]: time="2025-04-30T13:38:10.398294212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-74958,Uid:80c8eb88-109e-408d-a47d-a434deec2b83,Namespace:kube-system,Attempt:0,} returns sandbox id \"7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de\"" Apr 30 13:38:10.400104 containerd[1804]: time="2025-04-30T13:38:10.400082594Z" level=info msg="CreateContainer within sandbox \"7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 13:38:10.408591 containerd[1804]: time="2025-04-30T13:38:10.408572309Z" level=info msg="CreateContainer within sandbox \"7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65fedfbe3f15927b0fe387f8f96bd9de578c86c26d100c8571bc760f6bf578d1\"" Apr 30 13:38:10.408847 containerd[1804]: time="2025-04-30T13:38:10.408816120Z" level=info msg="StartContainer for \"65fedfbe3f15927b0fe387f8f96bd9de578c86c26d100c8571bc760f6bf578d1\"" Apr 30 13:38:10.436165 systemd[1]: Started cri-containerd-65fedfbe3f15927b0fe387f8f96bd9de578c86c26d100c8571bc760f6bf578d1.scope - libcontainer container 65fedfbe3f15927b0fe387f8f96bd9de578c86c26d100c8571bc760f6bf578d1. Apr 30 13:38:10.448685 containerd[1804]: time="2025-04-30T13:38:10.448657311Z" level=info msg="StartContainer for \"65fedfbe3f15927b0fe387f8f96bd9de578c86c26d100c8571bc760f6bf578d1\" returns successfully" Apr 30 13:38:10.453767 systemd[1]: cri-containerd-65fedfbe3f15927b0fe387f8f96bd9de578c86c26d100c8571bc760f6bf578d1.scope: Deactivated successfully. 
Apr 30 13:38:10.471350 containerd[1804]: time="2025-04-30T13:38:10.471317165Z" level=info msg="shim disconnected" id=65fedfbe3f15927b0fe387f8f96bd9de578c86c26d100c8571bc760f6bf578d1 namespace=k8s.io Apr 30 13:38:10.471350 containerd[1804]: time="2025-04-30T13:38:10.471349303Z" level=warning msg="cleaning up after shim disconnected" id=65fedfbe3f15927b0fe387f8f96bd9de578c86c26d100c8571bc760f6bf578d1 namespace=k8s.io Apr 30 13:38:10.471350 containerd[1804]: time="2025-04-30T13:38:10.471354182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:38:11.221795 containerd[1804]: time="2025-04-30T13:38:11.221696849Z" level=info msg="CreateContainer within sandbox \"7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 13:38:11.228861 containerd[1804]: time="2025-04-30T13:38:11.228823408Z" level=info msg="CreateContainer within sandbox \"7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dff80ae4d60451dc1d150921be2ca5a29d00acb9a66adfe4dcb02aa745b77552\"" Apr 30 13:38:11.229137 containerd[1804]: time="2025-04-30T13:38:11.229099903Z" level=info msg="StartContainer for \"dff80ae4d60451dc1d150921be2ca5a29d00acb9a66adfe4dcb02aa745b77552\"" Apr 30 13:38:11.255295 systemd[1]: Started cri-containerd-dff80ae4d60451dc1d150921be2ca5a29d00acb9a66adfe4dcb02aa745b77552.scope - libcontainer container dff80ae4d60451dc1d150921be2ca5a29d00acb9a66adfe4dcb02aa745b77552. Apr 30 13:38:11.271027 containerd[1804]: time="2025-04-30T13:38:11.270990603Z" level=info msg="StartContainer for \"dff80ae4d60451dc1d150921be2ca5a29d00acb9a66adfe4dcb02aa745b77552\" returns successfully" Apr 30 13:38:11.276354 systemd[1]: cri-containerd-dff80ae4d60451dc1d150921be2ca5a29d00acb9a66adfe4dcb02aa745b77552.scope: Deactivated successfully. Apr 30 13:38:11.294846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dff80ae4d60451dc1d150921be2ca5a29d00acb9a66adfe4dcb02aa745b77552-rootfs.mount: Deactivated successfully. 
Apr 30 13:38:11.313335 containerd[1804]: time="2025-04-30T13:38:11.313303764Z" level=info msg="shim disconnected" id=dff80ae4d60451dc1d150921be2ca5a29d00acb9a66adfe4dcb02aa745b77552 namespace=k8s.io Apr 30 13:38:11.313389 containerd[1804]: time="2025-04-30T13:38:11.313335714Z" level=warning msg="cleaning up after shim disconnected" id=dff80ae4d60451dc1d150921be2ca5a29d00acb9a66adfe4dcb02aa745b77552 namespace=k8s.io Apr 30 13:38:11.313389 containerd[1804]: time="2025-04-30T13:38:11.313342515Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:38:12.072844 containerd[1804]: time="2025-04-30T13:38:12.072820713Z" level=info msg="StopPodSandbox for \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\"" Apr 30 13:38:12.073079 containerd[1804]: time="2025-04-30T13:38:12.072882413Z" level=info msg="TearDown network for sandbox \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\" successfully" Apr 30 13:38:12.073079 containerd[1804]: time="2025-04-30T13:38:12.072893147Z" level=info msg="StopPodSandbox for \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\" returns successfully" Apr 30 13:38:12.073132 containerd[1804]: time="2025-04-30T13:38:12.073098349Z" level=info msg="RemovePodSandbox for \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\"" Apr 30 13:38:12.073132 containerd[1804]: time="2025-04-30T13:38:12.073112140Z" level=info msg="Forcibly stopping sandbox \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\"" Apr 30 13:38:12.073168 containerd[1804]: time="2025-04-30T13:38:12.073138226Z" level=info msg="TearDown network for sandbox \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\" successfully" Apr 30 13:38:12.074347 containerd[1804]: time="2025-04-30T13:38:12.074335320Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:38:12.074373 containerd[1804]: time="2025-04-30T13:38:12.074353822Z" level=info msg="RemovePodSandbox \"3877a95eae83fd3c118dc4c63273b7dea4eb1312d9e13ecb9c04aca1e1355de8\" returns successfully" Apr 30 13:38:12.074605 containerd[1804]: time="2025-04-30T13:38:12.074580368Z" level=info msg="StopPodSandbox for \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\"" Apr 30 13:38:12.074650 containerd[1804]: time="2025-04-30T13:38:12.074643366Z" level=info msg="TearDown network for sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" successfully" Apr 30 13:38:12.074702 containerd[1804]: time="2025-04-30T13:38:12.074650299Z" level=info msg="StopPodSandbox for \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" returns successfully" Apr 30 13:38:12.074893 containerd[1804]: time="2025-04-30T13:38:12.074883795Z" level=info msg="RemovePodSandbox for \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\"" Apr 30 13:38:12.074919 containerd[1804]: time="2025-04-30T13:38:12.074896742Z" level=info msg="Forcibly stopping sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\"" Apr 30 13:38:12.074955 containerd[1804]: time="2025-04-30T13:38:12.074921319Z" level=info msg="TearDown network for sandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" successfully" Apr 30 13:38:12.076028 containerd[1804]: time="2025-04-30T13:38:12.075995483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 13:38:12.076054 containerd[1804]: time="2025-04-30T13:38:12.076033968Z" level=info msg="RemovePodSandbox \"87b26d4641510895c0cdf9432de2efc5288c4d41f81661544e8b062fa120b6e4\" returns successfully" Apr 30 13:38:12.212693 kubelet[3135]: E0430 13:38:12.212508 3135 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 13:38:12.229051 containerd[1804]: time="2025-04-30T13:38:12.228924555Z" level=info msg="CreateContainer within sandbox \"7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 13:38:12.239365 containerd[1804]: time="2025-04-30T13:38:12.239321242Z" level=info msg="CreateContainer within sandbox \"7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9234d36f0115bf3435f060ca753d23059a71bc428352529b5e384946145e8c65\"" Apr 30 13:38:12.239880 containerd[1804]: time="2025-04-30T13:38:12.239806076Z" level=info msg="StartContainer for \"9234d36f0115bf3435f060ca753d23059a71bc428352529b5e384946145e8c65\"" Apr 30 13:38:12.272256 systemd[1]: Started cri-containerd-9234d36f0115bf3435f060ca753d23059a71bc428352529b5e384946145e8c65.scope - libcontainer container 9234d36f0115bf3435f060ca753d23059a71bc428352529b5e384946145e8c65. Apr 30 13:38:12.287588 containerd[1804]: time="2025-04-30T13:38:12.287559018Z" level=info msg="StartContainer for \"9234d36f0115bf3435f060ca753d23059a71bc428352529b5e384946145e8c65\" returns successfully" Apr 30 13:38:12.288950 systemd[1]: cri-containerd-9234d36f0115bf3435f060ca753d23059a71bc428352529b5e384946145e8c65.scope: Deactivated successfully. 
Apr 30 13:38:12.302988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9234d36f0115bf3435f060ca753d23059a71bc428352529b5e384946145e8c65-rootfs.mount: Deactivated successfully.
Apr 30 13:38:12.319134 containerd[1804]: time="2025-04-30T13:38:12.318930048Z" level=info msg="shim disconnected" id=9234d36f0115bf3435f060ca753d23059a71bc428352529b5e384946145e8c65 namespace=k8s.io
Apr 30 13:38:12.319134 containerd[1804]: time="2025-04-30T13:38:12.319092737Z" level=warning msg="cleaning up after shim disconnected" id=9234d36f0115bf3435f060ca753d23059a71bc428352529b5e384946145e8c65 namespace=k8s.io
Apr 30 13:38:12.319134 containerd[1804]: time="2025-04-30T13:38:12.319117926Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 13:38:13.236367 containerd[1804]: time="2025-04-30T13:38:13.236283924Z" level=info msg="CreateContainer within sandbox \"7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 13:38:13.245221 containerd[1804]: time="2025-04-30T13:38:13.245202451Z" level=info msg="CreateContainer within sandbox \"7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f82cd9a8c36ecb10aa1e50df3003a8393e5c40d84d848ec3ef68e474dfe222e2\""
Apr 30 13:38:13.245570 containerd[1804]: time="2025-04-30T13:38:13.245526634Z" level=info msg="StartContainer for \"f82cd9a8c36ecb10aa1e50df3003a8393e5c40d84d848ec3ef68e474dfe222e2\""
Apr 30 13:38:13.265352 systemd[1]: Started cri-containerd-f82cd9a8c36ecb10aa1e50df3003a8393e5c40d84d848ec3ef68e474dfe222e2.scope - libcontainer container f82cd9a8c36ecb10aa1e50df3003a8393e5c40d84d848ec3ef68e474dfe222e2.
Apr 30 13:38:13.277377 systemd[1]: cri-containerd-f82cd9a8c36ecb10aa1e50df3003a8393e5c40d84d848ec3ef68e474dfe222e2.scope: Deactivated successfully.
Apr 30 13:38:13.286119 containerd[1804]: time="2025-04-30T13:38:13.286062061Z" level=info msg="StartContainer for \"f82cd9a8c36ecb10aa1e50df3003a8393e5c40d84d848ec3ef68e474dfe222e2\" returns successfully"
Apr 30 13:38:13.295643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f82cd9a8c36ecb10aa1e50df3003a8393e5c40d84d848ec3ef68e474dfe222e2-rootfs.mount: Deactivated successfully.
Apr 30 13:38:13.315503 containerd[1804]: time="2025-04-30T13:38:13.315453572Z" level=info msg="shim disconnected" id=f82cd9a8c36ecb10aa1e50df3003a8393e5c40d84d848ec3ef68e474dfe222e2 namespace=k8s.io
Apr 30 13:38:13.315503 containerd[1804]: time="2025-04-30T13:38:13.315485090Z" level=warning msg="cleaning up after shim disconnected" id=f82cd9a8c36ecb10aa1e50df3003a8393e5c40d84d848ec3ef68e474dfe222e2 namespace=k8s.io
Apr 30 13:38:13.315503 containerd[1804]: time="2025-04-30T13:38:13.315489903Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 13:38:14.241674 containerd[1804]: time="2025-04-30T13:38:14.241653639Z" level=info msg="CreateContainer within sandbox \"7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 13:38:14.248706 containerd[1804]: time="2025-04-30T13:38:14.248651040Z" level=info msg="CreateContainer within sandbox \"7675a312bfac56b2bcfc698b56d54b42c07378c1ee370d16a8fa17eba05e07de\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e0747326b1b7a1dc1d8d9b30b5d6e3ea194ef60538ed039d45de4465a3640d1d\""
Apr 30 13:38:14.248996 containerd[1804]: time="2025-04-30T13:38:14.248982130Z" level=info msg="StartContainer for \"e0747326b1b7a1dc1d8d9b30b5d6e3ea194ef60538ed039d45de4465a3640d1d\""
Apr 30 13:38:14.273166 systemd[1]: Started cri-containerd-e0747326b1b7a1dc1d8d9b30b5d6e3ea194ef60538ed039d45de4465a3640d1d.scope - libcontainer container e0747326b1b7a1dc1d8d9b30b5d6e3ea194ef60538ed039d45de4465a3640d1d.
Apr 30 13:38:14.287978 containerd[1804]: time="2025-04-30T13:38:14.287922800Z" level=info msg="StartContainer for \"e0747326b1b7a1dc1d8d9b30b5d6e3ea194ef60538ed039d45de4465a3640d1d\" returns successfully"
Apr 30 13:38:14.505087 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 30 13:38:15.269921 kubelet[3135]: I0430 13:38:15.269893 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-74958" podStartSLOduration=6.269881514 podStartE2EDuration="6.269881514s" podCreationTimestamp="2025-04-30 13:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 13:38:15.269241279 +0000 UTC m=+423.247651509" watchObservedRunningTime="2025-04-30 13:38:15.269881514 +0000 UTC m=+423.248291742"
Apr 30 13:38:17.695529 systemd-networkd[1724]: lxc_health: Link UP
Apr 30 13:38:17.695793 systemd-networkd[1724]: lxc_health: Gained carrier
Apr 30 13:38:19.483192 systemd-networkd[1724]: lxc_health: Gained IPv6LL
Apr 30 13:38:23.018163 sshd[5460]: Connection closed by 147.75.109.163 port 54696
Apr 30 13:38:23.018935 sshd-session[5453]: pam_unix(sshd:session): session closed for user core
Apr 30 13:38:23.025333 systemd[1]: sshd@35-147.75.202.179:22-147.75.109.163:54696.service: Deactivated successfully.
Apr 30 13:38:23.029197 systemd[1]: session-31.scope: Deactivated successfully.
Apr 30 13:38:23.032282 systemd-logind[1794]: Session 31 logged out. Waiting for processes to exit.
Apr 30 13:38:23.034802 systemd-logind[1794]: Removed session 31.
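[Editor's sketch] In the pod_startup_latency_tracker entry above, no image pulls were recorded (firstStartedPulling and lastFinishedPulling are the zero time), and the reported podStartSLOduration matches watchObservedRunningTime minus podCreationTimestamp: 13:38:15.269881514 - 13:38:09 = 6.269881514s. The minimal Go check below only redoes that arithmetic from the timestamps printed in the log; it is not the kubelet's implementation.

// pod_startup_slo.go - recompute the 6.269881514s figure from the two log timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2025-04-30 13:38:09 +0000 UTC" form used in the log line.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	created, err := time.Parse(layout, "2025-04-30 13:38:09 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-04-30 13:38:15.269881514 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// With no image-pull interval to subtract, the SLO duration is just the gap
	// between pod creation and the watch-observed running time.
	fmt.Printf("podStartSLOduration=%.9f\n", running.Sub(created).Seconds()) // 6.269881514
}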