Dec 13 16:06:50.563408 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 16:06:50.563421 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 16:06:50.563428 kernel: BIOS-provided physical RAM map:
Dec 13 16:06:50.563432 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Dec 13 16:06:50.563436 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Dec 13 16:06:50.563440 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Dec 13 16:06:50.563444 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Dec 13 16:06:50.563448 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Dec 13 16:06:50.563452 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable
Dec 13 16:06:50.563456 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS
Dec 13 16:06:50.563461 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved
Dec 13 16:06:50.563473 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable
Dec 13 16:06:50.563477 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Dec 13 16:06:50.563481 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Dec 13 16:06:50.563486 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Dec 13 16:06:50.563491 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Dec 13 16:06:50.563495 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Dec 13 16:06:50.563500 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Dec 13 16:06:50.563504 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 16:06:50.563508 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Dec 13 16:06:50.563512 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Dec 13 16:06:50.563516 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 16:06:50.563521 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Dec 13 16:06:50.563525 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Dec 13 16:06:50.563529 kernel: NX (Execute Disable) protection: active
Dec 13 16:06:50.563533 kernel: SMBIOS 3.2.1 present.
Dec 13 16:06:50.563539 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Dec 13 16:06:50.563543 kernel: tsc: Detected 3400.000 MHz processor
Dec 13 16:06:50.563547 kernel: tsc: Detected 3399.906 MHz TSC
Dec 13 16:06:50.563552 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 16:06:50.563556 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 16:06:50.563561 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Dec 13 16:06:50.563565 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 16:06:50.563570 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Dec 13 16:06:50.563574 kernel: Using GB pages for direct mapping
Dec 13 16:06:50.563578 kernel: ACPI: Early table checksum verification disabled
Dec 13 16:06:50.563583 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Dec 13 16:06:50.563588 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Dec 13 16:06:50.563592 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Dec 13 16:06:50.563597 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Dec 13 16:06:50.563603 kernel: ACPI: FACS 0x000000008C66CF80 000040
Dec 13 16:06:50.563608 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Dec 13 16:06:50.563613 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Dec 13 16:06:50.563618 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Dec 13 16:06:50.563623 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Dec 13 16:06:50.563628 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Dec 13 16:06:50.563632 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Dec 13 16:06:50.563637 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Dec 13 16:06:50.563642 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Dec 13 16:06:50.563647 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 16:06:50.563652 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Dec 13 16:06:50.563657 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Dec 13 16:06:50.563662 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 16:06:50.563666 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 16:06:50.563671 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Dec 13 16:06:50.563676 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Dec 13 16:06:50.563681 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 16:06:50.563685 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Dec 13 16:06:50.563691 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Dec 13 16:06:50.563696 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Dec 13 16:06:50.563700 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Dec 13 16:06:50.563705 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Dec 13 16:06:50.563710 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Dec 13 16:06:50.563715 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Dec 13 16:06:50.563719 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Dec 13 16:06:50.563724 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Dec 13 16:06:50.563729 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Dec 13 16:06:50.563734 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Dec 13 16:06:50.563739 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Dec 13 16:06:50.563744 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Dec 13 16:06:50.563749 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Dec 13 16:06:50.563753 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Dec 13 16:06:50.563758 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Dec 13 16:06:50.563763 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Dec 13 16:06:50.563767 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Dec 13 16:06:50.563773 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Dec 13 16:06:50.563778 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Dec 13 16:06:50.563783 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Dec 13 16:06:50.563787 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Dec 13 16:06:50.563792 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Dec 13 16:06:50.563797 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Dec 13 16:06:50.563801 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Dec 13 16:06:50.563806 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Dec 13 16:06:50.563811 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Dec 13 16:06:50.563816 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Dec 13 16:06:50.563821 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Dec 13 16:06:50.563826 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Dec 13 16:06:50.563830 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Dec 13 16:06:50.563835 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Dec 13 16:06:50.563840 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Dec 13 16:06:50.563844 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Dec 13 16:06:50.563849 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Dec 13 16:06:50.563854 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Dec 13 16:06:50.563859 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Dec 13 16:06:50.563864 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Dec 13 16:06:50.563869 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Dec 13 16:06:50.563873 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Dec 13 16:06:50.563878 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Dec 13 16:06:50.563883 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Dec 13 16:06:50.563888 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Dec 13 16:06:50.563893 kernel: No NUMA configuration found
Dec 13 16:06:50.563897 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Dec 13 16:06:50.563903 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Dec 13 16:06:50.563908 kernel: Zone ranges:
Dec 13 16:06:50.563913 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 16:06:50.563918 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 16:06:50.563922 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Dec 13 16:06:50.563927 kernel: Movable zone start for each node
Dec 13 16:06:50.563932 kernel: Early memory node ranges
Dec 13 16:06:50.563937 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Dec 13 16:06:50.563942 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Dec 13 16:06:50.563946 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff]
Dec 13 16:06:50.563952 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff]
Dec 13 16:06:50.563957 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Dec 13 16:06:50.563961 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Dec 13 16:06:50.563966 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Dec 13 16:06:50.563971 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Dec 13 16:06:50.563976 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 16:06:50.563984 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Dec 13 16:06:50.563989 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 13 16:06:50.563994 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Dec 13 16:06:50.564000 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Dec 13 16:06:50.564005 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Dec 13 16:06:50.564011 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Dec 13 16:06:50.564016 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Dec 13 16:06:50.564021 kernel: ACPI: PM-Timer IO Port: 0x1808
Dec 13 16:06:50.564026 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 16:06:50.564031 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 16:06:50.564036 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 16:06:50.564042 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 16:06:50.564047 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 16:06:50.564052 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 16:06:50.564057 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 16:06:50.564062 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 16:06:50.564067 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 16:06:50.564072 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 16:06:50.564077 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 16:06:50.564082 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 16:06:50.564088 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 16:06:50.564093 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 16:06:50.564098 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 16:06:50.564103 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 16:06:50.564108 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Dec 13 16:06:50.564113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 16:06:50.564118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 16:06:50.564123 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 16:06:50.564128 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 16:06:50.564134 kernel: TSC deadline timer available
Dec 13 16:06:50.564139 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Dec 13 16:06:50.564144 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Dec 13 16:06:50.564149 kernel: Booting paravirtualized kernel on bare hardware
Dec 13 16:06:50.564154 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 16:06:50.564159 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 16:06:50.564164 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 16:06:50.564169 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 16:06:50.564174 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 16:06:50.564180 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Dec 13 16:06:50.564185 kernel: Policy zone: Normal
Dec 13 16:06:50.564191 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 16:06:50.564196 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 16:06:50.564201 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Dec 13 16:06:50.564207 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 13 16:06:50.564212 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 16:06:50.564218 kernel: Memory: 32722604K/33452980K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 730116K reserved, 0K cma-reserved)
Dec 13 16:06:50.564223 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 16:06:50.564228 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 16:06:50.564233 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 16:06:50.564238 kernel: rcu: Hierarchical RCU implementation.
Dec 13 16:06:50.564243 kernel: rcu: RCU event tracing is enabled.
Dec 13 16:06:50.564249 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 16:06:50.564254 kernel: Rude variant of Tasks RCU enabled.
Dec 13 16:06:50.564259 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 16:06:50.564265 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 16:06:50.564270 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 16:06:50.564275 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Dec 13 16:06:50.564280 kernel: random: crng init done
Dec 13 16:06:50.564285 kernel: Console: colour dummy device 80x25
Dec 13 16:06:50.564290 kernel: printk: console [tty0] enabled
Dec 13 16:06:50.564295 kernel: printk: console [ttyS1] enabled
Dec 13 16:06:50.564300 kernel: ACPI: Core revision 20210730
Dec 13 16:06:50.564305 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Dec 13 16:06:50.564310 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 16:06:50.564316 kernel: DMAR: Host address width 39
Dec 13 16:06:50.564321 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Dec 13 16:06:50.564327 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Dec 13 16:06:50.564332 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Dec 13 16:06:50.564337 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Dec 13 16:06:50.564342 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Dec 13 16:06:50.564347 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec 13 16:06:50.564352 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Dec 13 16:06:50.564357 kernel: x2apic enabled
Dec 13 16:06:50.564363 kernel: Switched APIC routing to cluster x2apic.
Dec 13 16:06:50.564368 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Dec 13 16:06:50.564373 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Dec 13 16:06:50.564378 kernel: CPU0: Thermal monitoring enabled (TM1)
Dec 13 16:06:50.564383 kernel: process: using mwait in idle threads
Dec 13 16:06:50.564388 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 16:06:50.564393 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 16:06:50.564398 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 16:06:50.564403 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 16:06:50.564409 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 16:06:50.564414 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 16:06:50.564419 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 16:06:50.564424 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 16:06:50.564429 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 16:06:50.564434 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 16:06:50.564438 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 16:06:50.564443 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 16:06:50.564449 kernel: TAA: Mitigation: TSX disabled
Dec 13 16:06:50.564454 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 13 16:06:50.564459 kernel: SRBDS: Mitigation: Microcode
Dec 13 16:06:50.564468 kernel: GDS: Vulnerable: No microcode
Dec 13 16:06:50.564474 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 16:06:50.564479 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 16:06:50.564484 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 16:06:50.564489 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 16:06:50.564507 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 16:06:50.564512 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 16:06:50.564517 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 16:06:50.564522 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 16:06:50.564527 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Dec 13 16:06:50.564532 kernel: Freeing SMP alternatives memory: 32K
Dec 13 16:06:50.564537 kernel: pid_max: default: 32768 minimum: 301
Dec 13 16:06:50.564542 kernel: LSM: Security Framework initializing
Dec 13 16:06:50.564547 kernel: SELinux: Initializing.
Dec 13 16:06:50.564552 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 16:06:50.564557 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 16:06:50.564562 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Dec 13 16:06:50.564567 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 16:06:50.564572 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Dec 13 16:06:50.564577 kernel: ... version: 4
Dec 13 16:06:50.564581 kernel: ... bit width: 48
Dec 13 16:06:50.564586 kernel: ... generic registers: 4
Dec 13 16:06:50.564592 kernel: ... value mask: 0000ffffffffffff
Dec 13 16:06:50.564597 kernel: ... max period: 00007fffffffffff
Dec 13 16:06:50.564602 kernel: ... fixed-purpose events: 3
Dec 13 16:06:50.564607 kernel: ... event mask: 000000070000000f
Dec 13 16:06:50.564612 kernel: signal: max sigframe size: 2032
Dec 13 16:06:50.564617 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 16:06:50.564622 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Dec 13 16:06:50.564627 kernel: smp: Bringing up secondary CPUs ...
Dec 13 16:06:50.564632 kernel: x86: Booting SMP configuration:
Dec 13 16:06:50.564638 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Dec 13 16:06:50.564643 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 16:06:50.564648 kernel: #9 #10 #11 #12 #13 #14 #15
Dec 13 16:06:50.564653 kernel: smp: Brought up 1 node, 16 CPUs
Dec 13 16:06:50.564658 kernel: smpboot: Max logical packages: 1
Dec 13 16:06:50.564663 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Dec 13 16:06:50.564667 kernel: devtmpfs: initialized
Dec 13 16:06:50.564672 kernel: x86/mm: Memory block size: 128MB
Dec 13 16:06:50.564677 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes)
Dec 13 16:06:50.564683 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Dec 13 16:06:50.564688 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 16:06:50.564693 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 16:06:50.564698 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 16:06:50.564703 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 16:06:50.564708 kernel: audit: initializing netlink subsys (disabled)
Dec 13 16:06:50.564713 kernel: audit: type=2000 audit(1734106005.041:1): state=initialized audit_enabled=0 res=1
Dec 13 16:06:50.564718 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 16:06:50.564723 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 16:06:50.564729 kernel: cpuidle: using governor menu
Dec 13 16:06:50.564734 kernel: ACPI: bus type PCI registered
Dec 13 16:06:50.564739 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 16:06:50.564744 kernel: dca service started, version 1.12.1
Dec 13 16:06:50.564749 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 16:06:50.564753 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Dec 13 16:06:50.564758 kernel: PCI: Using configuration type 1 for base access
Dec 13 16:06:50.564763 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Dec 13 16:06:50.564768 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 16:06:50.564774 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 16:06:50.564779 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 16:06:50.564784 kernel: ACPI: Added _OSI(Module Device)
Dec 13 16:06:50.564789 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 16:06:50.564794 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 16:06:50.564799 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 16:06:50.564804 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 16:06:50.564809 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 16:06:50.564814 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 16:06:50.564819 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Dec 13 16:06:50.564824 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:06:50.564829 kernel: ACPI: SSDT 0xFFFF9F0800218B00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Dec 13 16:06:50.564834 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Dec 13 16:06:50.564839 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:06:50.564844 kernel: ACPI: SSDT 0xFFFF9F0801AE2000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Dec 13 16:06:50.564849 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:06:50.564854 kernel: ACPI: SSDT 0xFFFF9F0801A5A800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Dec 13 16:06:50.564859 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:06:50.564864 kernel: ACPI: SSDT 0xFFFF9F0801B4F800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Dec 13 16:06:50.564869 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:06:50.564874 kernel: ACPI: SSDT 0xFFFF9F080014A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Dec 13 16:06:50.564879 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 16:06:50.564884 kernel: ACPI: SSDT 0xFFFF9F0801AE1800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Dec 13 16:06:50.564889 kernel: ACPI: Interpreter enabled
Dec 13 16:06:50.564894 kernel: ACPI: PM: (supports S0 S5)
Dec 13 16:06:50.564899 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 16:06:50.564904 kernel: HEST: Enabling Firmware First mode for corrected errors.
Dec 13 16:06:50.564910 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Dec 13 16:06:50.564914 kernel: HEST: Table parsing has been initialized.
Dec 13 16:06:50.564919 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Dec 13 16:06:50.564924 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 16:06:50.564929 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Dec 13 16:06:50.564934 kernel: ACPI: PM: Power Resource [USBC]
Dec 13 16:06:50.564939 kernel: ACPI: PM: Power Resource [V0PR]
Dec 13 16:06:50.564944 kernel: ACPI: PM: Power Resource [V1PR]
Dec 13 16:06:50.564949 kernel: ACPI: PM: Power Resource [V2PR]
Dec 13 16:06:50.564954 kernel: ACPI: PM: Power Resource [WRST]
Dec 13 16:06:50.564960 kernel: ACPI: PM: Power Resource [FN00]
Dec 13 16:06:50.564964 kernel: ACPI: PM: Power Resource [FN01]
Dec 13 16:06:50.564969 kernel: ACPI: PM: Power Resource [FN02]
Dec 13 16:06:50.564974 kernel: ACPI: PM: Power Resource [FN03]
Dec 13 16:06:50.564979 kernel: ACPI: PM: Power Resource [FN04]
Dec 13 16:06:50.564984 kernel: ACPI: PM: Power Resource [PIN]
Dec 13 16:06:50.564989 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Dec 13 16:06:50.565054 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 16:06:50.565102 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Dec 13 16:06:50.565144 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Dec 13 16:06:50.565151 kernel: PCI host bridge to bus 0000:00
Dec 13 16:06:50.565197 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 16:06:50.565235 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 16:06:50.565272 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 16:06:50.565310 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Dec 13 16:06:50.565348 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Dec 13 16:06:50.565385 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Dec 13 16:06:50.565436 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Dec 13 16:06:50.565487 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Dec 13 16:06:50.565532 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec 13 16:06:50.565580 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Dec 13 16:06:50.565626 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Dec 13 16:06:50.565672 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Dec 13 16:06:50.565715 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Dec 13 16:06:50.565764 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Dec 13 16:06:50.565806 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Dec 13 16:06:50.565850 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec 13 16:06:50.565897 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Dec 13 16:06:50.565939 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Dec 13 16:06:50.565981 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Dec 13 16:06:50.566026 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Dec 13 16:06:50.566068 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 16:06:50.566115 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Dec 13 16:06:50.566158 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 16:06:50.566204 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Dec 13 16:06:50.566246 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Dec 13 16:06:50.566288 kernel: pci 0000:00:16.0: PME# supported from D3hot
Dec 13 16:06:50.566333 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Dec 13 16:06:50.566375 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Dec 13 16:06:50.566416 kernel: pci 0000:00:16.1: PME# supported from D3hot
Dec 13 16:06:50.566467 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Dec 13 16:06:50.566511 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Dec 13 16:06:50.566552 kernel: pci 0000:00:16.4: PME# supported from D3hot
Dec 13 16:06:50.566598 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Dec 13 16:06:50.566640 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Dec 13 16:06:50.566683 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Dec 13 16:06:50.566732 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Dec 13 16:06:50.566775 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Dec 13 16:06:50.566818 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Dec 13 16:06:50.566860 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Dec 13 16:06:50.566902 kernel: pci 0000:00:17.0: PME# supported from D3hot
Dec 13 16:06:50.566948 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Dec 13 16:06:50.566991 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec 13 16:06:50.567037 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Dec 13 16:06:50.567082 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Dec 13 16:06:50.567130 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Dec 13 16:06:50.567173 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Dec 13 16:06:50.567220 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Dec 13 16:06:50.567263 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec 13 16:06:50.567313 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Dec 13 16:06:50.567356 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Dec 13 16:06:50.567404 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Dec 13 16:06:50.567447 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 16:06:50.567497 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Dec 13 16:06:50.567544 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Dec 13 16:06:50.567586 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Dec 13 16:06:50.567629 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Dec 13 16:06:50.567674 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Dec 13 16:06:50.567716 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Dec 13 16:06:50.567768 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 16:06:50.567813 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Dec 13 16:06:50.567857 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Dec 13 16:06:50.567901 kernel: pci 0000:01:00.0: PME# supported from D3cold
Dec 13 16:06:50.567945 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 16:06:50.567989 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 16:06:50.568037 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Dec 13 16:06:50.568083 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Dec 13 16:06:50.568128 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Dec 13 16:06:50.568170 kernel: pci 0000:01:00.1: PME# supported from D3cold
Dec 13 16:06:50.568214 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 16:06:50.568257 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 16:06:50.568301 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 16:06:50.568343 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Dec 13 16:06:50.568387 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 16:06:50.568430 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 16:06:50.568482 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Dec 13 16:06:50.568527 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Dec 13 16:06:50.568570 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Dec 13 16:06:50.568615 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Dec 13 16:06:50.568659 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Dec 13 16:06:50.568703 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 16:06:50.568747 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 16:06:50.568790 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Dec 13 16:06:50.568833 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Dec 13 16:06:50.568881 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Dec 13 16:06:50.568926 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Dec 13 16:06:50.568970 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Dec 13 16:06:50.569014 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Dec 13 16:06:50.569133 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Dec 13 16:06:50.569178 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Dec 13 16:06:50.569221 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 16:06:50.569264 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Dec 13 16:06:50.569306 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Dec 13 16:06:50.569349 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 16:06:50.569396 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Dec 13 16:06:50.569440 kernel: pci 0000:06:00.0: enabling Extended Tags
Dec 13 16:06:50.569511 kernel: pci 0000:06:00.0: supports D1 D2
Dec 13 16:06:50.569569 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 16:06:50.569612 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 16:06:50.569654 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Dec 13 16:06:50.569696 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Dec 13 16:06:50.569743 kernel: pci_bus 0000:07: extended config space not accessible
Dec 13 16:06:50.569794 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Dec 13 16:06:50.569842 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Dec 13 16:06:50.569888 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Dec 13 16:06:50.569934 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Dec 13 16:06:50.569980 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 16:06:50.570025 kernel: pci 0000:07:00.0: supports D1 D2
Dec 13 16:06:50.570072 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 16:06:50.570117 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 16:06:50.570164 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Dec 13 16:06:50.570209 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Dec 13 16:06:50.570216 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Dec 13 16:06:50.570222 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Dec 13 16:06:50.570227 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Dec 13 16:06:50.570232 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Dec 13 16:06:50.570238 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Dec 13 16:06:50.570243 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Dec 13 16:06:50.570248 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Dec 13 16:06:50.570255 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Dec 13 16:06:50.570260 kernel: iommu: Default domain type: Translated
Dec 13 16:06:50.570265 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 16:06:50.570311 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Dec 13 16:06:50.570357 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 16:06:50.570403 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Dec 13 16:06:50.570411 kernel: vgaarb: loaded
Dec 13 16:06:50.570418 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 16:06:50.570424 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 16:06:50.570429 kernel: PTP clock support registered
Dec 13 16:06:50.570434 kernel: PCI: Using ACPI for IRQ routing
Dec 13 16:06:50.570440 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 16:06:50.570445 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Dec 13 16:06:50.570450 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff]
Dec 13 16:06:50.570455 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Dec 13 16:06:50.570460 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Dec 13 16:06:50.570489 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Dec 13 16:06:50.570495 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Dec 13 16:06:50.570500 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 16:06:50.570506 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 16:06:50.570511 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 16:06:50.570534 kernel: pnp: PnP ACPI init
Dec 13 16:06:50.570581 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Dec 13 16:06:50.570623 kernel: pnp 00:02: [dma 0 disabled]
Dec 13 16:06:50.570666 kernel: pnp 00:03: [dma 0 disabled]
Dec 13 16:06:50.570709 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Dec 13 16:06:50.570748 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Dec 13 16:06:50.570789 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Dec 13 16:06:50.570831 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Dec 13 16:06:50.570869 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Dec 13 16:06:50.570907 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Dec 13 16:06:50.570947 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Dec 13 16:06:50.570984 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Dec 13 16:06:50.571022 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Dec 13 16:06:50.571059 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Dec 13 16:06:50.571097 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Dec 13 16:06:50.571141 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Dec 13 16:06:50.571180 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Dec 13 16:06:50.571219 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Dec 13 16:06:50.571256 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Dec 13 16:06:50.571294 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Dec 13 16:06:50.571332 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Dec 13 16:06:50.571370 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Dec 13 16:06:50.571411 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Dec 13 16:06:50.571419 kernel: pnp: PnP ACPI: found 10 devices
Dec 13 16:06:50.571426 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 16:06:50.571431 kernel: NET: Registered PF_INET protocol family
Dec 13 16:06:50.571437 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 16:06:50.571442 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 16:06:50.571447 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 16:06:50.571453 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 16:06:50.571458 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 16:06:50.571466 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Dec 13 16:06:50.571492 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 16:06:50.571499 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 16:06:50.571504 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 16:06:50.571509 kernel: NET: Registered PF_XDP protocol family
Dec 13 16:06:50.571568 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Dec 13 16:06:50.571611 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Dec 13 16:06:50.571654 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Dec 13 16:06:50.571699 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 16:06:50.571743 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 16:06:50.571789 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 16:06:50.571833 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 16:06:50.571876 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 16:06:50.571919 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Dec 13 16:06:50.571962 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 16:06:50.572005 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 16:06:50.572050 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 16:06:50.572093 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Dec 13 16:06:50.572135 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Dec 13 16:06:50.572179 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 16:06:50.572221 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Dec 13 16:06:50.572265 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Dec 13 16:06:50.572306 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 16:06:50.572352 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 16:06:50.572396 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Dec 13 16:06:50.572441 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Dec 13 16:06:50.572509 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 16:06:50.572570 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Dec 13 16:06:50.572613 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Dec 13 16:06:50.572652 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Dec 13 16:06:50.572690 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 16:06:50.572727 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 16:06:50.572767 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 16:06:50.572804 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Dec 13 16:06:50.572841 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Dec 13 16:06:50.572885 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Dec 13 16:06:50.572925 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 16:06:50.572969 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Dec 13 16:06:50.573010 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Dec 13 16:06:50.573053 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Dec 13 16:06:50.573092 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Dec 13 16:06:50.573137 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Dec 13 16:06:50.573176 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 16:06:50.573217 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Dec 13 16:06:50.573259 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 16:06:50.573268 kernel: PCI: CLS 64 bytes, default 64
Dec 13 16:06:50.573273 kernel: DMAR: No ATSR found
Dec 13 16:06:50.573279 kernel: DMAR: No SATC found
Dec 13 16:06:50.573284 kernel: DMAR: dmar0: Using Queued invalidation
Dec 13 16:06:50.573326 kernel: pci 0000:00:00.0: Adding to iommu group 0
Dec 13 16:06:50.573371 kernel: pci 0000:00:01.0: Adding to iommu group 1
Dec 13 16:06:50.573413 kernel: pci 0000:00:08.0: Adding to iommu group 2
Dec 13 16:06:50.573456 kernel: pci 0000:00:12.0: Adding to iommu group 3
Dec 13 16:06:50.573526 kernel: pci 0000:00:14.0: Adding to iommu group 4
Dec 13 16:06:50.573569 kernel: pci 0000:00:14.2: Adding to iommu group 4
Dec 13 16:06:50.573613 kernel: pci 0000:00:15.0: Adding to iommu group 5
Dec 13 16:06:50.573655 kernel: pci 0000:00:15.1: Adding to iommu group 5
Dec 13 16:06:50.573698 kernel: pci 0000:00:16.0: Adding to iommu group 6
Dec 13 16:06:50.573739 kernel: pci 0000:00:16.1: Adding to iommu group 6
Dec 13 16:06:50.573783 kernel: pci 0000:00:16.4: Adding to iommu group 6
Dec 13 16:06:50.573825 kernel: pci 0000:00:17.0: Adding to iommu group 7
Dec 13 16:06:50.573868 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Dec 13 16:06:50.573913 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Dec 13 16:06:50.573957 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Dec 13 16:06:50.574000 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Dec 13 16:06:50.574044 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Dec 13 16:06:50.574087 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Dec 13 16:06:50.574130 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Dec 13 16:06:50.574173 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Dec 13 16:06:50.574216 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Dec 13 16:06:50.574262 kernel: pci 0000:01:00.0: Adding to iommu group 1
Dec 13 16:06:50.574307 kernel: pci 0000:01:00.1: Adding to iommu group 1
Dec 13 16:06:50.574351 kernel: pci 0000:03:00.0: Adding to iommu group 15
Dec 13 16:06:50.574396 kernel: pci 0000:04:00.0: Adding to iommu group 16
Dec 13 16:06:50.574439 kernel: pci 0000:06:00.0: Adding to iommu group 17
Dec 13 16:06:50.574489 kernel: pci 0000:07:00.0: Adding to iommu group 17
Dec 13 16:06:50.574497 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Dec 13 16:06:50.574503 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 16:06:50.574510 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Dec 13 16:06:50.574515 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Dec 13 16:06:50.574521 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Dec 13 16:06:50.574526 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Dec 13 16:06:50.574531 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Dec 13 16:06:50.574578 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Dec 13 16:06:50.574586 kernel: Initialise system trusted keyrings
Dec 13 16:06:50.574591 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Dec 13 16:06:50.574598 kernel: Key type asymmetric registered
Dec 13 16:06:50.574604 kernel: Asymmetric key parser 'x509' registered
Dec 13 16:06:50.574609 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 16:06:50.574614 kernel: io scheduler mq-deadline registered
Dec 13 16:06:50.574620 kernel: io scheduler kyber registered
Dec 13 16:06:50.574625 kernel: io scheduler bfq registered
Dec 13 16:06:50.574669 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Dec 13 16:06:50.574712 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Dec 13 16:06:50.574757 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Dec 13 16:06:50.574802 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Dec 13 16:06:50.574845 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Dec 13 16:06:50.574889 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Dec 13 16:06:50.574937 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Dec 13 16:06:50.574945 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Dec 13 16:06:50.574951 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Dec 13 16:06:50.574957 kernel: pstore: Registered erst as persistent store backend
Dec 13 16:06:50.574964 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 16:06:50.574969 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 16:06:50.574975 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 16:06:50.574980 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 16:06:50.574985 kernel: hpet_acpi_add: no address or irqs in _CRS
Dec 13 16:06:50.575029 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Dec 13 16:06:50.575037 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 16:06:50.575075 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Dec 13 16:06:50.575119 kernel: rtc_cmos rtc_cmos: registered as rtc0 Dec 13 16:06:50.575158 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-12-13T16:06:49 UTC (1734106009) Dec 13 16:06:50.575198 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Dec 13 16:06:50.575206 kernel: fail to initialize ptp_kvm Dec 13 16:06:50.575211 kernel: intel_pstate: Intel P-state driver initializing Dec 13 16:06:50.575217 kernel: intel_pstate: Disabling energy efficiency optimization Dec 13 16:06:50.575222 kernel: intel_pstate: HWP enabled Dec 13 16:06:50.575228 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Dec 13 16:06:50.575233 kernel: vesafb: scrolling: redraw Dec 13 16:06:50.575240 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Dec 13 16:06:50.575245 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000088f51a2a, using 768k, total 768k Dec 13 16:06:50.575251 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 16:06:50.575256 kernel: fb0: VESA VGA frame buffer device Dec 13 16:06:50.575261 kernel: NET: Registered PF_INET6 protocol family Dec 13 16:06:50.575267 kernel: Segment Routing with IPv6 Dec 13 16:06:50.575272 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 16:06:50.575278 kernel: NET: Registered PF_PACKET protocol family Dec 13 16:06:50.575283 kernel: Key type dns_resolver registered Dec 13 16:06:50.575289 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Dec 13 16:06:50.575295 kernel: microcode: Microcode Update Driver: v2.2. Dec 13 16:06:50.575300 kernel: IPI shorthand broadcast: enabled Dec 13 16:06:50.575305 kernel: sched_clock: Marking stable (1681135623, 1339912169)->(4464895848, -1443848056) Dec 13 16:06:50.575311 kernel: registered taskstats version 1 Dec 13 16:06:50.575316 kernel: Loading compiled-in X.509 certificates Dec 13 16:06:50.575322 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 16:06:50.575327 kernel: Key type .fscrypt registered Dec 13 16:06:50.575332 kernel: Key type fscrypt-provisioning registered Dec 13 16:06:50.575338 kernel: pstore: Using crash dump compression: deflate Dec 13 16:06:50.575344 kernel: ima: Allocated hash algorithm: sha1 Dec 13 16:06:50.575349 kernel: ima: No architecture policies found Dec 13 16:06:50.575355 kernel: clk: Disabling unused clocks Dec 13 16:06:50.575360 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 16:06:50.575366 kernel: Write protecting the kernel read-only data: 28672k Dec 13 16:06:50.575371 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 16:06:50.575376 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 16:06:50.575382 kernel: Run /init as init process Dec 13 16:06:50.575388 kernel: with arguments: Dec 13 16:06:50.575393 kernel: /init Dec 13 16:06:50.575399 kernel: with environment: Dec 13 16:06:50.575404 kernel: HOME=/ Dec 13 16:06:50.575409 kernel: TERM=linux Dec 13 16:06:50.575414 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 16:06:50.575421 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 16:06:50.575428 systemd[1]: Detected 
architecture x86-64. Dec 13 16:06:50.575434 systemd[1]: Running in initrd. Dec 13 16:06:50.575440 systemd[1]: No hostname configured, using default hostname. Dec 13 16:06:50.575445 systemd[1]: Hostname set to . Dec 13 16:06:50.575450 systemd[1]: Initializing machine ID from random generator. Dec 13 16:06:50.575456 systemd[1]: Queued start job for default target initrd.target. Dec 13 16:06:50.575461 systemd[1]: Started systemd-ask-password-console.path. Dec 13 16:06:50.575470 systemd[1]: Reached target cryptsetup.target. Dec 13 16:06:50.575476 systemd[1]: Reached target paths.target. Dec 13 16:06:50.575482 systemd[1]: Reached target slices.target. Dec 13 16:06:50.575488 systemd[1]: Reached target swap.target. Dec 13 16:06:50.575493 systemd[1]: Reached target timers.target. Dec 13 16:06:50.575499 systemd[1]: Listening on iscsid.socket. Dec 13 16:06:50.575505 systemd[1]: Listening on iscsiuio.socket. Dec 13 16:06:50.575510 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 16:06:50.575516 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 16:06:50.575522 systemd[1]: Listening on systemd-journald.socket. Dec 13 16:06:50.575528 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Dec 13 16:06:50.575533 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Dec 13 16:06:50.575539 systemd[1]: Listening on systemd-networkd.socket. Dec 13 16:06:50.575544 kernel: clocksource: Switched to clocksource tsc Dec 13 16:06:50.575550 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 16:06:50.575555 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 16:06:50.575561 systemd[1]: Reached target sockets.target. Dec 13 16:06:50.575566 systemd[1]: Starting kmod-static-nodes.service... Dec 13 16:06:50.575573 systemd[1]: Finished network-cleanup.service. Dec 13 16:06:50.575578 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 16:06:50.575584 systemd[1]: Starting systemd-journald.service... Dec 13 16:06:50.575589 systemd[1]: Starting systemd-modules-load.service... Dec 13 16:06:50.575597 systemd-journald[267]: Journal started Dec 13 16:06:50.575624 systemd-journald[267]: Runtime Journal (/run/log/journal/d1b27222ef954a9f8cf0adefa549b29b) is 8.0M, max 640.1M, 632.1M free. Dec 13 16:06:50.578200 systemd-modules-load[268]: Inserted module 'overlay' Dec 13 16:06:50.584000 audit: BPF prog-id=6 op=LOAD Dec 13 16:06:50.602505 kernel: audit: type=1334 audit(1734106010.584:2): prog-id=6 op=LOAD Dec 13 16:06:50.602538 systemd[1]: Starting systemd-resolved.service... Dec 13 16:06:50.652512 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 16:06:50.652528 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 16:06:50.685508 kernel: Bridge firewalling registered Dec 13 16:06:50.685524 systemd[1]: Started systemd-journald.service. Dec 13 16:06:50.699682 systemd-modules-load[268]: Inserted module 'br_netfilter' Dec 13 16:06:50.748415 kernel: audit: type=1130 audit(1734106010.707:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:50.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:06:50.702162 systemd-resolved[270]: Positive Trust Anchors: Dec 13 16:06:50.812523 kernel: SCSI subsystem initialized Dec 13 16:06:50.812612 kernel: audit: type=1130 audit(1734106010.760:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:50.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:50.702169 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 16:06:50.908287 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 16:06:50.908299 kernel: audit: type=1130 audit(1734106010.832:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:50.908307 kernel: device-mapper: uevent: version 1.0.3 Dec 13 16:06:50.908314 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 16:06:50.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:50.702188 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 16:06:50.999721 kernel: audit: type=1130 audit(1734106010.934:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:50.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:50.703766 systemd-resolved[270]: Defaulting to hostname 'linux'. Dec 13 16:06:51.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:50.707711 systemd[1]: Started systemd-resolved.service. Dec 13 16:06:51.109038 kernel: audit: type=1130 audit(1734106011.008:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:51.109050 kernel: audit: type=1130 audit(1734106011.062:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:51.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 16:06:50.760645 systemd[1]: Finished kmod-static-nodes.service. Dec 13 16:06:50.832615 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 16:06:50.931681 systemd-modules-load[268]: Inserted module 'dm_multipath' Dec 13 16:06:50.934763 systemd[1]: Finished systemd-modules-load.service. Dec 13 16:06:51.008816 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 16:06:51.062753 systemd[1]: Reached target nss-lookup.target. Dec 13 16:06:51.118072 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 16:06:51.137992 systemd[1]: Starting systemd-sysctl.service... Dec 13 16:06:51.138286 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 16:06:51.141182 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 16:06:51.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:51.141992 systemd[1]: Finished systemd-sysctl.service. Dec 13 16:06:51.253949 kernel: audit: type=1130 audit(1734106011.140:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:51.253961 kernel: audit: type=1130 audit(1734106011.205:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:51.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:51.205789 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 16:06:51.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:51.263083 systemd[1]: Starting dracut-cmdline.service... Dec 13 16:06:51.283578 dracut-cmdline[293]: dracut-dracut-053 Dec 13 16:06:51.283578 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 16:06:51.283578 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 16:06:51.352560 kernel: Loading iSCSI transport class v2.0-870. Dec 13 16:06:51.352573 kernel: iscsi: registered transport (tcp) Dec 13 16:06:51.408198 kernel: iscsi: registered transport (qla4xxx) Dec 13 16:06:51.408215 kernel: QLogic iSCSI HBA Driver Dec 13 16:06:51.424396 systemd[1]: Finished dracut-cmdline.service. Dec 13 16:06:51.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:51.424933 systemd[1]: Starting dracut-pre-udev.service... 
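The dracut-cmdline lines above show the initrd re-reading the kernel command line as a series of bare flags and key=value parameters. As a rough illustration of that format (dracut's real parsing is shell code; this is only a hypothetical Python sketch):

```python
# Hypothetical sketch of kernel command-line parsing; dracut's actual
# implementation is shell. This only illustrates the parameter format.
import shlex

def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in shlex.split(cmdline):
        key, sep, value = token.partition("=")
        # Bare tokens such as "flatcar.autologin" act as boolean flags.
        # Note: repeated keys (console=tty0 console=ttyS1,...) overwrite
        # each other here; real consumers keep every occurrence.
        params[key] = value if sep else True
    return params

cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "flatcar.first_boot=detected flatcar.autologin")
params = parse_cmdline(cmdline)
print(params["root"])               # LABEL=ROOT
print(params["flatcar.autologin"])  # True
```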
Dec 13 16:06:51.480530 kernel: raid6: avx2x4 gen() 48930 MB/s Dec 13 16:06:51.515530 kernel: raid6: avx2x4 xor() 21366 MB/s Dec 13 16:06:51.550500 kernel: raid6: avx2x2 gen() 53745 MB/s Dec 13 16:06:51.585534 kernel: raid6: avx2x2 xor() 32187 MB/s Dec 13 16:06:51.620498 kernel: raid6: avx2x1 gen() 45232 MB/s Dec 13 16:06:51.655532 kernel: raid6: avx2x1 xor() 27937 MB/s Dec 13 16:06:51.688499 kernel: raid6: sse2x4 gen() 21355 MB/s Dec 13 16:06:51.722537 kernel: raid6: sse2x4 xor() 11997 MB/s Dec 13 16:06:51.756533 kernel: raid6: sse2x2 gen() 21651 MB/s Dec 13 16:06:51.790499 kernel: raid6: sse2x2 xor() 13405 MB/s Dec 13 16:06:51.824500 kernel: raid6: sse2x1 gen() 18300 MB/s Dec 13 16:06:51.876447 kernel: raid6: sse2x1 xor() 8931 MB/s Dec 13 16:06:51.876463 kernel: raid6: using algorithm avx2x2 gen() 53745 MB/s Dec 13 16:06:51.876477 kernel: raid6: .... xor() 32187 MB/s, rmw enabled Dec 13 16:06:51.894693 kernel: raid6: using avx2x2 recovery algorithm Dec 13 16:06:51.940475 kernel: xor: automatically using best checksumming function avx Dec 13 16:06:52.020523 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 16:06:52.024934 systemd[1]: Finished dracut-pre-udev.service. Dec 13 16:06:52.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:52.034000 audit: BPF prog-id=7 op=LOAD Dec 13 16:06:52.034000 audit: BPF prog-id=8 op=LOAD Dec 13 16:06:52.035339 systemd[1]: Starting systemd-udevd.service... Dec 13 16:06:52.043376 systemd-udevd[476]: Using default interface naming scheme 'v252'. Dec 13 16:06:52.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:52.050731 systemd[1]: Started systemd-udevd.service. Dec 13 16:06:52.091594 dracut-pre-trigger[489]: rd.md=0: removing MD RAID activation Dec 13 16:06:52.068249 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 16:06:52.100749 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 16:06:52.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:52.119266 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 16:06:52.170667 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 16:06:52.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:52.199473 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 16:06:52.222606 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 16:06:52.222654 kernel: AES CTR mode by8 optimization enabled Dec 13 16:06:52.242477 kernel: libata version 3.00 loaded. 
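The raid6 lines above are the kernel benchmarking each available gen()/xor() implementation and keeping the fastest, here avx2x2 at 53745 MB/s. The selection logic reduces to a max over measured throughputs, sketched below with the numbers from this boot:

```python
# The kernel's raid6 selection reduces to "benchmark each implementation,
# keep the fastest". Throughputs below are the gen() numbers logged above.
bench_mb_s = {
    "avx2x4": 48930,
    "avx2x2": 53745,
    "avx2x1": 45232,
    "sse2x4": 21355,
    "sse2x2": 21651,
    "sse2x1": 18300,
}
best = max(bench_mb_s, key=bench_mb_s.get)
print(f"raid6: using algorithm {best} gen() {bench_mb_s[best]} MB/s")
# -> raid6: using algorithm avx2x2 gen() 53745 MB/s
```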
Dec 13 16:06:52.277525 kernel: ACPI: bus type USB registered Dec 13 16:06:52.277552 kernel: usbcore: registered new interface driver usbfs Dec 13 16:06:52.277561 kernel: usbcore: registered new interface driver hub Dec 13 16:06:52.295114 kernel: usbcore: registered new device driver usb Dec 13 16:06:52.347921 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Dec 13 16:06:52.347941 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Dec 13 16:06:53.278184 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Dec 13 16:06:53.278198 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 16:06:53.278265 kernel: ahci 0000:00:17.0: version 3.0 Dec 13 16:06:53.278317 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 16:06:53.278368 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Dec 13 16:06:53.278414 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Dec 13 16:06:53.278461 kernel: pps pps0: new PPS source ptp0 Dec 13 16:06:53.278613 kernel: igb 0000:03:00.0: added PHC on eth0 Dec 13 16:06:53.278668 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 16:06:53.278717 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b6 Dec 13 16:06:53.278766 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Dec 13 16:06:53.278812 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Dec 13 16:06:53.278860 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Dec 13 16:06:53.278906 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Dec 13 16:06:53.278955 kernel: pps pps1: new PPS source ptp1 Dec 13 16:06:53.279008 kernel: igb 0000:04:00.0: added PHC on eth1 Dec 13 16:06:53.279061 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 16:06:53.279109 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b7 Dec 13 16:06:53.279157 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Dec 13 16:06:53.279205 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Dec 13 16:06:53.279252 kernel: scsi host0: ahci Dec 13 16:06:53.279306 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 16:06:53.279354 kernel: scsi host1: ahci Dec 13 16:06:53.279406 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Dec 13 16:06:53.279454 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Dec 13 16:06:53.279548 kernel: scsi host2: ahci Dec 13 16:06:53.279616 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Dec 13 16:06:53.279662 kernel: scsi host3: ahci Dec 13 16:06:53.279713 kernel: hub 1-0:1.0: USB hub found Dec 13 16:06:53.279773 kernel: scsi host4: ahci Dec 13 16:06:53.279827 kernel: hub 1-0:1.0: 16 ports detected Dec 13 16:06:53.279882 kernel: scsi host5: ahci Dec 13 16:06:53.279933 kernel: hub 2-0:1.0: USB hub found Dec 13 16:06:53.279987 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 16:06:53.280037 kernel: scsi host6: ahci Dec 13 16:06:53.280088 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Dec 13 16:06:53.280137 kernel: hub 2-0:1.0: 10 ports detected Dec 13 16:06:53.280188 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Dec 13 16:06:53.280196 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Dec 13 16:06:53.280287 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Dec 13 16:06:53.280295 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Dec 13 16:06:53.280301 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Dec 13 16:06:53.280307 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Dec 13 16:06:53.280315 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Dec 13 16:06:53.280321 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Dec 13 16:06:53.280328 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 16:06:53.280380 kernel: hub 1-14:1.0: USB hub found Dec 13 16:06:53.280438 kernel: hub 1-14:1.0: 4 ports detected Dec 13 16:06:53.280520 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 16:06:53.280592 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Dec 13 16:06:53.888204 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 16:06:53.888272 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 16:06:53.888281 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 16:06:53.888287 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 16:06:53.888294 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Dec 13 16:06:53.888398 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 16:06:53.888406 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 16:06:53.888414 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 16:06:53.888421 kernel: ata7: SATA link down (SStatus 0 SControl 300) Dec 13 16:06:53.888427 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 16:06:53.888434 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 16:06:53.888440 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 16:06:53.888447 kernel: ata1.00: Features: NCQ-prio Dec 13 16:06:53.888453 kernel: ata2.00: 937703088 sectors, multi 
16: LBA48 NCQ (depth 32), AA Dec 13 16:06:53.888460 kernel: ata2.00: Features: NCQ-prio Dec 13 16:06:53.888471 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 16:06:53.888479 kernel: ata1.00: configured for UDMA/133 Dec 13 16:06:53.888485 kernel: ata2.00: configured for UDMA/133 Dec 13 16:06:53.888492 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 16:06:54.058287 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 16:06:54.181481 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 16:06:54.181646 kernel: usbcore: registered new interface driver usbhid Dec 13 16:06:54.181676 kernel: port_module: 9 callbacks suppressed Dec 13 16:06:54.181685 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Dec 13 16:06:54.181764 kernel: usbhid: USB HID core driver Dec 13 16:06:54.181775 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 16:06:54.181864 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Dec 13 16:06:54.181893 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 16:06:54.181904 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 16:06:54.181915 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 16:06:54.182029 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 16:06:54.182112 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 16:06:54.182206 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Dec 13 16:06:54.182313 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Dec 13 16:06:54.182322 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Dec 13 16:06:54.182395 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Dec 13 16:06:54.182456 kernel: sd 1:0:0:0: [sdb] Write Protect is off Dec 13 16:06:54.182528 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 16:06:54.182594 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Dec 13 16:06:54.182664 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Dec 13 16:06:54.182730 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 16:06:54.182790 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 16:06:54.182853 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 16:06:54.182913 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 16:06:54.182920 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 16:06:54.182928 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 16:06:54.182935 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 16:06:54.182993 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 16:06:54.183001 kernel: GPT:9289727 != 937703087 Dec 13 16:06:54.183007 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 16:06:54.183013 kernel: GPT:9289727 != 937703087 Dec 13 16:06:54.183019 kernel: GPT: Use GNU Parted to correct GPT errors. 
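The GPT warnings above ("GPT:9289727 != 937703087") mean the backup GPT header was found at LBA 9289727, but on a 937703088-sector disk it belongs in the last sector, LBA 937703087; in other words, the disk image was built for a much smaller disk and then written to this 480 GB drive. The arithmetic:

```python
# Arithmetic behind "GPT:9289727 != 937703087": the alternate (backup) GPT
# header must sit in the last addressable sector of the disk.
sector_size = 512
sectors = 937703088          # "[sdb] 937703088 512-byte logical blocks"
last_lba = sectors - 1       # expected location of the backup header
found_at = 9289727           # where this image actually put it
assert last_lba == 937703087
print(found_at == last_lba)  # False -> "Alt. header is not at the end"
# The image was sized for (9289727 + 1) * 512 bytes, about 4.4 GiB;
# GNU Parted can relocate the backup header, as the log itself suggests.
print((found_at + 1) * sector_size / 2**30)  # ~4.43
```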
Dec 13 16:06:54.183027 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 16:06:54.183033 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 16:06:54.183040 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Dec 13 16:06:54.200471 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Dec 13 16:06:54.226298 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 16:06:54.276703 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by (udev-worker) (539) Dec 13 16:06:54.276717 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Dec 13 16:06:54.255526 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 16:06:54.260967 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 16:06:54.289443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 16:06:54.313965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 16:06:54.377576 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 16:06:54.377599 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 16:06:54.377610 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 16:06:54.325126 systemd[1]: Starting disk-uuid.service... Dec 13 16:06:54.396561 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 16:06:54.396612 disk-uuid[691]: Primary Header is updated. Dec 13 16:06:54.396612 disk-uuid[691]: Secondary Entries is updated. Dec 13 16:06:54.396612 disk-uuid[691]: Secondary Header is updated. Dec 13 16:06:55.384029 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 16:06:55.402528 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Dec 13 16:06:55.402544 disk-uuid[692]: The operation has completed successfully. Dec 13 16:06:55.439866 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 16:06:55.536002 kernel: audit: type=1130 audit(1734106015.447:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.536021 kernel: audit: type=1131 audit(1734106015.447:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.439908 systemd[1]: Finished disk-uuid.service. Dec 13 16:06:55.565561 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 16:06:55.448181 systemd[1]: Starting verity-setup.service... Dec 13 16:06:55.594873 systemd[1]: Found device dev-mapper-usr.device. Dec 13 16:06:55.605479 systemd[1]: Mounting sysusr-usr.mount... Dec 13 16:06:55.612669 systemd[1]: Finished verity-setup.service. Dec 13 16:06:55.681553 kernel: audit: type=1130 audit(1734106015.624:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:06:55.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.722468 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 16:06:55.722597 systemd[1]: Mounted sysusr-usr.mount. Dec 13 16:06:55.730765 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 16:06:55.731169 systemd[1]: Starting ignition-setup.service... Dec 13 16:06:55.824577 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Dec 13 16:06:55.824593 kernel: BTRFS info (device sdb6): using free space tree Dec 13 16:06:55.824601 kernel: BTRFS info (device sdb6): has skinny extents Dec 13 16:06:55.824686 kernel: BTRFS info (device sdb6): enabling ssd optimizations Dec 13 16:06:55.737955 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 16:06:55.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.817056 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 16:06:55.936661 kernel: audit: type=1130 audit(1734106015.833:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.936754 kernel: audit: type=1130 audit(1734106015.890:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.834030 systemd[1]: Finished ignition-setup.service. Dec 13 16:06:55.965644 kernel: audit: type=1334 audit(1734106015.944:24): prog-id=9 op=LOAD Dec 13 16:06:55.944000 audit: BPF prog-id=9 op=LOAD Dec 13 16:06:55.891183 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 16:06:55.945330 systemd[1]: Starting systemd-networkd.service... Dec 13 16:06:56.027567 kernel: audit: type=1130 audit(1734106015.980:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.979872 systemd-networkd[878]: lo: Link UP Dec 13 16:06:56.011739 ignition[867]: Ignition 2.14.0 Dec 13 16:06:55.979874 systemd-networkd[878]: lo: Gained carrier Dec 13 16:06:56.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:56.011744 ignition[867]: Stage: fetch-offline Dec 13 16:06:56.206629 kernel: audit: type=1130 audit(1734106016.063:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 16:06:56.206649 kernel: audit: type=1130 audit(1734106016.124:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:56.206657 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 16:06:56.206748 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Dec 13 16:06:56.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.980164 systemd-networkd[878]: Enumeration completed Dec 13 16:06:56.011769 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:06:55.980233 systemd[1]: Started systemd-networkd.service. Dec 13 16:06:56.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:56.011785 ignition[867]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 16:06:55.980629 systemd[1]: Reached target network.target. Dec 13 16:06:56.021186 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 16:06:56.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:56.295841 iscsid[898]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 16:06:56.295841 iscsid[898]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 16:06:56.295841 iscsid[898]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 16:06:56.295841 iscsid[898]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 16:06:56.295841 iscsid[898]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 16:06:56.295841 iscsid[898]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 16:06:56.295841 iscsid[898]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 16:06:56.450710 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 16:06:56.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:06:55.980867 systemd-networkd[878]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 16:06:56.021253 ignition[867]: parsed url from cmdline: "" Dec 13 16:06:56.025080 unknown[867]: fetched base config from "system" Dec 13 16:06:56.021255 ignition[867]: no config URL provided Dec 13 16:06:56.025084 unknown[867]: fetched user config from "system" Dec 13 16:06:56.021258 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 16:06:56.036041 systemd[1]: Starting iscsiuio.service...
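The iscsid warning above spells out its own fix: create /etc/iscsi/initiatorname.iscsi containing a single InitiatorName=iqn.… line. A hypothetical sketch of generating such a file (the domain and identifier below are placeholders, not values from this machine):

```python
# Hypothetical sketch of creating the file the iscsid warning asks for.
# Domain and identifier are placeholders, not values from this machine.
import uuid

def initiator_line(domain: str = "com.example", identifier: str = "") -> str:
    # IQN format: iqn.<yyyy-mm>.<reversed domain name>[:identifier]
    identifier = identifier or uuid.uuid4().hex[:12]
    return f"InitiatorName=iqn.2024-12.{domain}:{identifier}"

line = initiator_line()
print(line)  # e.g. InitiatorName=iqn.2024-12.com.example:3f2a9c1b7d4e
# To apply it on a real system (left commented out here):
# with open("/etc/iscsi/initiatorname.iscsi", "w") as f:
#     f.write(line + "\n")
```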
Dec 13 16:06:56.021278 ignition[867]: parsing config with SHA512: 5429b177d2edfd4396c3bbdb758075a7d07f4c0ec1d92978f1be7abf3d2614ea25a6e4d43af8d7924bc075cad99627702d604cdb9c8ea54c7a24f6ed3ad87cca Dec 13 16:06:56.051027 systemd[1]: Started iscsiuio.service. Dec 13 16:06:56.025368 ignition[867]: fetch-offline: fetch-offline passed Dec 13 16:06:56.064199 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 16:06:56.025374 ignition[867]: POST message to Packet Timeline Dec 13 16:06:56.124793 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 16:06:56.025379 ignition[867]: POST Status error: resource requires networking Dec 13 16:06:56.125348 systemd[1]: Starting ignition-kargs.service... Dec 13 16:06:56.025421 ignition[867]: Ignition finished successfully Dec 13 16:06:56.194110 systemd-networkd[878]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 16:06:56.197459 ignition[886]: Ignition 2.14.0 Dec 13 16:06:56.216009 systemd[1]: Starting iscsid.service... Dec 13 16:06:56.197462 ignition[886]: Stage: kargs Dec 13 16:06:56.236611 systemd[1]: Started iscsid.service. Dec 13 16:06:56.197559 ignition[886]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:06:56.254988 systemd[1]: Starting dracut-initqueue.service... Dec 13 16:06:56.197568 ignition[886]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 16:06:56.269656 systemd[1]: Finished dracut-initqueue.service. Dec 13 16:06:56.198854 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 16:06:56.285710 systemd[1]: Reached target remote-fs-pre.target. Dec 13 16:06:56.200514 ignition[886]: kargs: kargs passed Dec 13 16:06:56.304639 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 16:06:56.200532 ignition[886]: POST message to Packet Timeline Dec 13 16:06:56.322618 systemd[1]: Reached target remote-fs.target. Dec 13 16:06:56.200543 ignition[886]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 16:06:56.355258 systemd[1]: Starting dracut-pre-mount.service... Dec 13 16:06:56.203639 ignition[886]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49905->[::1]:53: read: connection refused Dec 13 16:06:56.395558 systemd-networkd[878]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 16:06:56.404102 ignition[886]: GET https://metadata.packet.net/metadata: attempt #2 Dec 13 16:06:56.401849 systemd[1]: Finished dracut-pre-mount.service. Dec 13 16:06:56.404540 ignition[886]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51275->[::1]:53: read: connection refused Dec 13 16:06:56.424025 systemd-networkd[878]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
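The repeated "GET https://metadata.packet.net/metadata: attempt #N" lines show Ignition retrying the metadata fetch while DNS still resolves against the stub resolver on [::1]:53 and fails; once the links come up, a later attempt succeeds. Ignition itself is written in Go, but the pattern it logs amounts to a retry loop like this sketch (the backoff schedule is invented for illustration):

```python
# Illustrative retry loop in the spirit of the attempts logged above; not
# Ignition's actual Go implementation, and the backoff values are made up.
import time
import urllib.request

def fetch_with_retries(url: str, attempts: int = 6, delay: float = 1.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            print(f"GET {url}: attempt #{attempt}")
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as err:
            # DNS failures like "lookup ... on [::1]:53" surface as OSError.
            print(f"GET error: {err}")
            time.sleep(delay)
            delay = min(delay * 2, 30.0)  # cap the exponential backoff
    raise RuntimeError(f"giving up on {url} after {attempts} attempts")

# body = fetch_with_retries("https://metadata.packet.net/metadata")
```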
Dec 13 16:06:56.453171 systemd-networkd[878]: enp1s0f1np1: Link UP Dec 13 16:06:56.453342 systemd-networkd[878]: enp1s0f1np1: Gained carrier Dec 13 16:06:56.468842 systemd-networkd[878]: enp1s0f0np0: Link UP Dec 13 16:06:56.469084 systemd-networkd[878]: eno2: Link UP Dec 13 16:06:56.805165 ignition[886]: GET https://metadata.packet.net/metadata: attempt #3 Dec 13 16:06:56.469311 systemd-networkd[878]: eno1: Link UP Dec 13 16:06:56.806385 ignition[886]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37324->[::1]:53: read: connection refused Dec 13 16:06:57.245904 systemd-networkd[878]: enp1s0f0np0: Gained carrier Dec 13 16:06:57.254583 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Dec 13 16:06:57.283694 systemd-networkd[878]: enp1s0f0np0: DHCPv4 address 147.28.180.91/31, gateway 147.28.180.90 acquired from 145.40.83.140 Dec 13 16:06:57.606921 ignition[886]: GET https://metadata.packet.net/metadata: attempt #4 Dec 13 16:06:57.608149 ignition[886]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:39788->[::1]:53: read: connection refused Dec 13 16:06:58.121055 systemd-networkd[878]: enp1s0f1np1: Gained IPv6LL Dec 13 16:06:58.505067 systemd-networkd[878]: enp1s0f0np0: Gained IPv6LL Dec 13 16:06:59.209742 ignition[886]: GET https://metadata.packet.net/metadata: attempt #5 Dec 13 16:06:59.210978 ignition[886]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41545->[::1]:53: read: connection refused Dec 13 16:07:02.414542 ignition[886]: GET https://metadata.packet.net/metadata: attempt #6 Dec 13 16:07:03.256368 ignition[886]: GET result: OK Dec 13 16:07:03.607900 ignition[886]: Ignition finished successfully Dec 13 16:07:03.612396 systemd[1]: Finished ignition-kargs.service. Dec 13 16:07:03.695061 kernel: kauditd_printk_skb: 3 callbacks suppressed Dec 13 16:07:03.695090 kernel: audit: type=1130 audit(1734106023.623:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:03.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:03.632984 ignition[915]: Ignition 2.14.0 Dec 13 16:07:03.625813 systemd[1]: Starting ignition-disks.service... Dec 13 16:07:03.632988 ignition[915]: Stage: disks Dec 13 16:07:03.633043 ignition[915]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:07:03.633053 ignition[915]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 16:07:03.634382 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 16:07:03.635869 ignition[915]: disks: disks passed Dec 13 16:07:03.635872 ignition[915]: POST message to Packet Timeline Dec 13 16:07:03.635883 ignition[915]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 16:07:04.602062 ignition[915]: GET result: OK Dec 13 16:07:04.956499 ignition[915]: Ignition finished successfully Dec 13 16:07:04.959672 systemd[1]: Finished ignition-disks.service. 
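Each Ignition stage above logs "parsing config with SHA512: 0131bd…", the digest of the base config it read before acting on it. A minimal sketch of producing the same kind of fingerprint (the path is the one from the log):

```python
# Minimal sketch of the config fingerprinting in the Ignition lines above:
# hash the raw config bytes with SHA-512 before parsing them.
import hashlib

def config_digest(path: str = "/usr/lib/ignition/base.d/base.ign") -> str:
    with open(path, "rb") as f:
        return hashlib.sha512(f.read()).hexdigest()

# On this machine the result would be the logged digest starting 0131bd50...
```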
Dec 13 16:07:04.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:04.973025 systemd[1]: Reached target initrd-root-device.target. Dec 13 16:07:05.047725 kernel: audit: type=1130 audit(1734106024.972:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:05.033688 systemd[1]: Reached target local-fs-pre.target. Dec 13 16:07:05.033723 systemd[1]: Reached target local-fs.target. Dec 13 16:07:05.056681 systemd[1]: Reached target sysinit.target. Dec 13 16:07:05.070633 systemd[1]: Reached target basic.target. Dec 13 16:07:05.071259 systemd[1]: Starting systemd-fsck-root.service... Dec 13 16:07:05.097351 systemd-fsck[930]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 16:07:05.115043 systemd[1]: Finished systemd-fsck-root.service. Dec 13 16:07:05.203637 kernel: audit: type=1130 audit(1734106025.124:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:05.203651 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 16:07:05.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:05.129764 systemd[1]: Mounting sysroot.mount... Dec 13 16:07:05.211103 systemd[1]: Mounted sysroot.mount. Dec 13 16:07:05.224740 systemd[1]: Reached target initrd-root-fs.target. Dec 13 16:07:05.232420 systemd[1]: Mounting sysroot-usr.mount... Dec 13 16:07:05.253416 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 16:07:05.260100 systemd[1]: Starting flatcar-static-network.service... Dec 13 16:07:05.281626 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 16:07:05.281659 systemd[1]: Reached target ignition-diskful.target. Dec 13 16:07:05.299394 systemd[1]: Mounted sysroot-usr.mount. Dec 13 16:07:05.323824 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 16:07:05.463245 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (943) Dec 13 16:07:05.463264 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Dec 13 16:07:05.463274 kernel: BTRFS info (device sdb6): using free space tree Dec 13 16:07:05.463282 kernel: BTRFS info (device sdb6): has skinny extents Dec 13 16:07:05.463290 kernel: BTRFS info (device sdb6): enabling ssd optimizations Dec 13 16:07:05.463351 coreos-metadata[938]: Dec 13 16:07:05.439 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 16:07:05.524588 kernel: audit: type=1130 audit(1734106025.471:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:05.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:05.336222 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 16:07:05.540723 coreos-metadata[937]: Dec 13 16:07:05.432 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 16:07:05.430427 systemd[1]: Finished initrd-setup-root.service. Dec 13 16:07:05.576583 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 16:07:05.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:05.472788 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 16:07:05.646691 kernel: audit: type=1130 audit(1734106025.584:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:05.646706 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory Dec 13 16:07:05.533093 systemd[1]: Starting ignition-mount.service... Dec 13 16:07:05.665684 initrd-setup-root[982]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 16:07:05.676676 ignition[1013]: INFO : Ignition 2.14.0 Dec 13 16:07:05.676676 ignition[1013]: INFO : Stage: mount Dec 13 16:07:05.676676 ignition[1013]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:07:05.676676 ignition[1013]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 16:07:05.676676 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 16:07:05.676676 ignition[1013]: INFO : mount: mount passed Dec 13 16:07:05.676676 ignition[1013]: INFO : POST message to Packet Timeline Dec 13 16:07:05.676676 ignition[1013]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 16:07:05.549056 systemd[1]: Starting sysroot-boot.service... Dec 13 16:07:05.767745 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 16:07:05.570428 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 16:07:05.570475 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 16:07:05.571120 systemd[1]: Finished sysroot-boot.service. Dec 13 16:07:06.342580 coreos-metadata[937]: Dec 13 16:07:06.342 INFO Fetch successful Dec 13 16:07:06.417617 coreos-metadata[937]: Dec 13 16:07:06.417 INFO wrote hostname ci-3510.3.6-a-6bc1e3250f to /sysroot/etc/hostname Dec 13 16:07:06.418072 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 16:07:06.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:06.496651 kernel: audit: type=1130 audit(1734106026.439:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:07.013115 ignition[1013]: INFO : GET result: OK Dec 13 16:07:07.068719 coreos-metadata[938]: Dec 13 16:07:07.068 INFO Fetch successful Dec 13 16:07:07.096600 systemd[1]: flatcar-static-network.service: Deactivated successfully. Dec 13 16:07:07.096664 systemd[1]: Finished flatcar-static-network.service. 
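The coreos-metadata lines above fetch https://metadata.packet.net/metadata and write the machine's hostname (ci-3510.3.6-a-6bc1e3250f) to /sysroot/etc/hostname. A hypothetical sketch of that step; the tool itself is a separate compiled binary, and the "hostname" field name is an assumption about the metadata schema:

```python
# Hypothetical sketch of the "wrote hostname ... to /sysroot/etc/hostname"
# step above. This only mirrors the observable behavior from the log, and
# the "hostname" metadata field name is an assumption.
import json
import urllib.request

def write_hostname(url: str = "https://metadata.packet.net/metadata",
                   dest: str = "/sysroot/etc/hostname") -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        hostname = json.load(resp)["hostname"]  # assumed field name
    with open(dest, "w") as f:
        f.write(hostname + "\n")
    return hostname
```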
Dec 13 16:07:07.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:07.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:07.226940 kernel: audit: type=1130 audit(1734106027.113:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:07.226963 kernel: audit: type=1131 audit(1734106027.113:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:07.318010 ignition[1013]: INFO : Ignition finished successfully Dec 13 16:07:07.318894 systemd[1]: Finished ignition-mount.service. Dec 13 16:07:07.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:07.336457 systemd[1]: Starting ignition-files.service... Dec 13 16:07:07.407591 kernel: audit: type=1130 audit(1734106027.335:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:07.402262 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 16:07:07.465072 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sdb6 scanned by mount (1029) Dec 13 16:07:07.465088 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Dec 13 16:07:07.465095 kernel: BTRFS info (device sdb6): using free space tree Dec 13 16:07:07.488215 kernel: BTRFS info (device sdb6): has skinny extents Dec 13 16:07:07.537467 kernel: BTRFS info (device sdb6): enabling ssd optimizations Dec 13 16:07:07.538623 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 16:07:07.554640 ignition[1048]: INFO : Ignition 2.14.0 Dec 13 16:07:07.554640 ignition[1048]: INFO : Stage: files Dec 13 16:07:07.554640 ignition[1048]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 16:07:07.554640 ignition[1048]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 16:07:07.554640 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 16:07:07.558859 unknown[1048]: wrote ssh authorized keys file for user: core Dec 13 16:07:07.619643 ignition[1048]: DEBUG : files: compiled without relabeling support, skipping Dec 13 16:07:07.619643 ignition[1048]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 16:07:07.619643 ignition[1048]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 16:07:07.619643 ignition[1048]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 16:07:07.619643 ignition[1048]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 16:07:07.619643 ignition[1048]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 16:07:07.619643 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 16:07:07.619643 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 16:07:07.725697 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 16:07:07.795196 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 16:07:07.811702 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 16:07:07.811702 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 16:07:08.316185 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 16:07:08.376752 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 16:07:08.376752 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 16:07:08.424711 kernel: BTRFS info: devid 1 device path /dev/sdb6 changed to /dev/disk/by-label/OEM scanned by ignition (1068) Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] 
writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 16:07:08.424729 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem674405600" Dec 13 16:07:08.424729 ignition[1048]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem674405600": device or resource busy Dec 13 16:07:08.686808 ignition[1048]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem674405600", trying btrfs: device or resource busy Dec 13 16:07:08.686808 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem674405600" Dec 13 16:07:08.686808 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem674405600" Dec 13 16:07:08.686808 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem674405600" Dec 13 16:07:08.686808 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem674405600" Dec 13 16:07:08.686808 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 16:07:08.686808 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 16:07:08.686808 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 16:07:08.832611 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Dec 13 16:07:08.991437 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 
16:07:08.991437 ignition[1048]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 16:07:08.991437 ignition[1048]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 16:07:08.991437 ignition[1048]: INFO : files: op(11): [started] processing unit "packet-phone-home.service" Dec 13 16:07:08.991437 ignition[1048]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service" Dec 13 16:07:08.991437 ignition[1048]: INFO : files: op(12): [started] processing unit "prepare-helm.service" Dec 13 16:07:08.991437 ignition[1048]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 16:07:09.091793 ignition[1048]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 16:07:09.091793 ignition[1048]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Dec 13 16:07:09.091793 ignition[1048]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 16:07:09.091793 ignition[1048]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 16:07:09.091793 ignition[1048]: INFO : files: op(15): [started] setting preset to enabled for "packet-phone-home.service" Dec 13 16:07:09.091793 ignition[1048]: INFO : files: op(15): [finished] setting preset to enabled for "packet-phone-home.service" Dec 13 16:07:09.091793 ignition[1048]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service" Dec 13 16:07:09.091793 ignition[1048]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 16:07:09.091793 ignition[1048]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 16:07:09.091793 ignition[1048]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 16:07:09.091793 ignition[1048]: INFO : files: files passed Dec 13 16:07:09.091793 ignition[1048]: INFO : POST message to Packet Timeline Dec 13 16:07:09.091793 ignition[1048]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 16:07:09.733500 ignition[1048]: INFO : GET result: OK Dec 13 16:07:10.104788 ignition[1048]: INFO : Ignition finished successfully Dec 13 16:07:10.107772 systemd[1]: Finished ignition-files.service. Dec 13 16:07:10.181485 kernel: audit: type=1130 audit(1734106030.122:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.128181 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 16:07:10.189750 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Dec 13 16:07:10.223729 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 16:07:10.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.190099 systemd[1]: Starting ignition-quench.service... Dec 13 16:07:10.412759 kernel: audit: type=1130 audit(1734106030.233:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.412775 kernel: audit: type=1130 audit(1734106030.299:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.412783 kernel: audit: type=1131 audit(1734106030.299:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.206828 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 16:07:10.233829 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 16:07:10.233895 systemd[1]: Finished ignition-quench.service. Dec 13 16:07:10.576826 kernel: audit: type=1130 audit(1734106030.453:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.576839 kernel: audit: type=1131 audit(1734106030.453:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.299728 systemd[1]: Reached target ignition-complete.target. Dec 13 16:07:10.422079 systemd[1]: Starting initrd-parse-etc.service... Dec 13 16:07:10.443268 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 16:07:10.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.682477 kernel: audit: type=1130 audit(1734106030.623:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:10.443315 systemd[1]: Finished initrd-parse-etc.service. 
Dec 13 16:07:10.454171 systemd[1]: Reached target initrd-fs.target.
Dec 13 16:07:10.585681 systemd[1]: Reached target initrd.target.
Dec 13 16:07:10.585738 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 16:07:10.586094 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 16:07:10.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:10.607764 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 16:07:10.828686 kernel: audit: type=1131 audit(1734106030.753:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:10.624280 systemd[1]: Starting initrd-cleanup.service...
Dec 13 16:07:10.692483 systemd[1]: Stopped target nss-lookup.target.
Dec 13 16:07:10.704771 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 16:07:10.720672 systemd[1]: Stopped target timers.target.
Dec 13 16:07:10.734741 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 16:07:10.734843 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 16:07:10.753875 systemd[1]: Stopped target initrd.target.
Dec 13 16:07:10.820728 systemd[1]: Stopped target basic.target.
Dec 13 16:07:10.828784 systemd[1]: Stopped target ignition-complete.target.
Dec 13 16:07:10.850781 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 16:07:10.866855 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 16:07:10.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:10.884816 systemd[1]: Stopped target remote-fs.target.
Dec 13 16:07:11.080702 kernel: audit: type=1131 audit(1734106030.993:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:10.899938 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 16:07:11.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:10.918098 systemd[1]: Stopped target sysinit.target.
Dec 13 16:07:11.166715 kernel: audit: type=1131 audit(1734106031.089:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:11.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:10.933085 systemd[1]: Stopped target local-fs.target.
Dec 13 16:07:10.948055 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 16:07:10.963029 systemd[1]: Stopped target swap.target.
Dec 13 16:07:10.977955 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 16:07:10.978321 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 16:07:11.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:10.994305 systemd[1]: Stopped target cryptsetup.target.
Dec 13 16:07:11.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:11.072752 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 16:07:11.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:11.072836 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 16:07:11.290586 ignition[1097]: INFO : Ignition 2.14.0
Dec 13 16:07:11.290586 ignition[1097]: INFO : Stage: umount
Dec 13 16:07:11.290586 ignition[1097]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 16:07:11.290586 ignition[1097]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 16:07:11.290586 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 16:07:11.290586 ignition[1097]: INFO : umount: umount passed
Dec 13 16:07:11.290586 ignition[1097]: INFO : POST message to Packet Timeline
Dec 13 16:07:11.290586 ignition[1097]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 16:07:11.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:11.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:11.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:11.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:11.431291 iscsid[898]: iscsid shutting down.
Dec 13 16:07:11.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:11.089834 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 16:07:11.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:11.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:11.089906 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 16:07:11.158905 systemd[1]: Stopped target paths.target.
Dec 13 16:07:11.173712 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 16:07:11.179724 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 16:07:11.180785 systemd[1]: Stopped target slices.target.
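
The umount stage above logs the SHA512 of the base config it parsed. A small sketch to reproduce that digest, assuming it is simply the SHA-512 of the raw bytes of /usr/lib/ignition/base.d/base.ign:

    import hashlib

    # Hash the config file exactly as stored; compare against the digest in
    # the "parsing config with SHA512: ..." line above.
    with open("/usr/lib/ignition/base.d/base.ign", "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())
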
Dec 13 16:07:11.201853 systemd[1]: Stopped target sockets.target.
Dec 13 16:07:11.217887 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 16:07:11.218018 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 16:07:11.236940 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 16:07:11.237123 systemd[1]: Stopped ignition-files.service.
Dec 13 16:07:11.253273 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 16:07:11.253657 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 16:07:11.271185 systemd[1]: Stopping ignition-mount.service...
Dec 13 16:07:11.283672 systemd[1]: Stopping iscsid.service...
Dec 13 16:07:11.298140 systemd[1]: Stopping sysroot-boot.service...
Dec 13 16:07:11.311666 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 16:07:11.311783 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 16:07:11.326828 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 16:07:11.326952 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 16:07:11.350865 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 16:07:11.351174 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 16:07:11.351216 systemd[1]: Stopped iscsid.service.
Dec 13 16:07:11.373000 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 16:07:11.373057 systemd[1]: Stopped sysroot-boot.service.
Dec 13 16:07:11.391114 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 16:07:11.391210 systemd[1]: Closed iscsid.socket.
Dec 13 16:07:11.405869 systemd[1]: Stopping iscsiuio.service...
Dec 13 16:07:11.421211 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 16:07:11.421451 systemd[1]: Stopped iscsiuio.service.
Dec 13 16:07:11.438547 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 16:07:11.438774 systemd[1]: Finished initrd-cleanup.service.
Dec 13 16:07:11.454742 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 16:07:11.454837 systemd[1]: Closed iscsiuio.socket.
Dec 13 16:07:12.229510 ignition[1097]: INFO : GET result: OK
Dec 13 16:07:12.542807 ignition[1097]: INFO : Ignition finished successfully
Dec 13 16:07:12.544438 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 16:07:12.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.544588 systemd[1]: Stopped ignition-mount.service.
Dec 13 16:07:12.560953 systemd[1]: Stopped target network.target.
Dec 13 16:07:12.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.576695 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 16:07:12.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.576836 systemd[1]: Stopped ignition-disks.service.
Dec 13 16:07:12.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.591857 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 16:07:12.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.592004 systemd[1]: Stopped ignition-kargs.service.
Dec 13 16:07:12.606849 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 16:07:12.606996 systemd[1]: Stopped ignition-setup.service.
Dec 13 16:07:12.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.622983 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 16:07:12.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.701000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 16:07:12.623133 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 16:07:12.638261 systemd[1]: Stopping systemd-networkd.service...
Dec 13 16:07:12.652625 systemd-networkd[878]: enp1s0f1np1: DHCPv6 lease lost
Dec 13 16:07:12.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.653917 systemd[1]: Stopping systemd-resolved.service...
Dec 13 16:07:12.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.666655 systemd-networkd[878]: enp1s0f0np0: DHCPv6 lease lost
Dec 13 16:07:12.773000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 16:07:12.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.668268 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 16:07:12.668526 systemd[1]: Stopped systemd-resolved.service.
Dec 13 16:07:12.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.685188 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 16:07:12.685525 systemd[1]: Stopped systemd-networkd.service.
Dec 13 16:07:12.700772 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 16:07:12.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.700789 systemd[1]: Closed systemd-networkd.socket.
Dec 13 16:07:12.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.720142 systemd[1]: Stopping network-cleanup.service...
Dec 13 16:07:12.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.733679 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 16:07:12.733828 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 16:07:12.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.749897 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 16:07:12.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.750051 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 16:07:12.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.766199 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 16:07:12.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.766342 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 16:07:12.782079 systemd[1]: Stopping systemd-udevd.service...
Dec 13 16:07:12.800427 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 16:07:12.801671 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 16:07:12.801734 systemd[1]: Stopped systemd-udevd.service.
Dec 13 16:07:12.806822 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 16:07:12.806848 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 16:07:12.826671 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 16:07:12.826697 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 16:07:12.843661 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 16:07:12.843719 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 16:07:13.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:12.858852 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 16:07:12.858960 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 16:07:12.874596 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 16:07:12.874634 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 16:07:12.890160 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 16:07:12.905550 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 16:07:12.905601 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 16:07:12.923913 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 16:07:12.924000 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 16:07:12.940785 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 16:07:12.940918 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 16:07:12.960312 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 16:07:12.961625 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 16:07:12.961841 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 16:07:13.066188 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 16:07:13.066428 systemd[1]: Stopped network-cleanup.service.
Dec 13 16:07:13.081976 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 16:07:13.099513 systemd[1]: Starting initrd-switch-root.service...
Dec 13 16:07:13.123962 systemd[1]: Switching root.
Dec 13 16:07:13.170292 systemd-journald[267]: Journal stopped
Dec 13 16:07:16.961396 systemd-journald[267]: Received SIGTERM from PID 1 (n/a).
Dec 13 16:07:16.961423 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 16:07:16.961445 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 16:07:16.961461 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 16:07:16.961478 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 16:07:16.961491 kernel: SELinux: policy capability open_perms=1
Dec 13 16:07:16.961505 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 16:07:16.961518 kernel: SELinux: policy capability always_check_network=0
Dec 13 16:07:16.961531 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 16:07:16.961550 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 16:07:16.961560 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 16:07:16.961573 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 16:07:16.961586 systemd[1]: Successfully loaded SELinux policy in 321.070ms.
Dec 13 16:07:16.961600 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.964ms.
Dec 13 16:07:16.961617 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 16:07:16.961633 systemd[1]: Detected architecture x86-64.
Dec 13 16:07:16.961647 systemd[1]: Detected first boot.
Dec 13 16:07:16.961660 systemd[1]: Hostname set to .
Dec 13 16:07:16.961673 systemd[1]: Initializing machine ID from random generator.
Dec 13 16:07:16.961687 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 16:07:16.961699 systemd[1]: Populated /etc with preset unit settings.
Dec 13 16:07:16.961723 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 16:07:16.961737 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 16:07:16.961754 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
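
The "SELinux: policy capability ..." lines above reflect flags baked into the loaded policy. On a system with selinuxfs mounted at /sys/fs/selinux (an assumption, though it is the conventional mount point), the same values can be read back; a minimal sketch:

    import os

    # Each file under policy_capabilities holds 0 or 1, matching the kernel
    # lines above (network_peer_controls, open_perms, ...).
    capdir = "/sys/fs/selinux/policy_capabilities"
    for name in sorted(os.listdir(capdir)):
        with open(os.path.join(capdir, name)) as f:
            print(f"{name}={f.read().strip()}")
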
Dec 13 16:07:16.961771 kernel: kauditd_printk_skb: 50 callbacks suppressed
Dec 13 16:07:16.961781 kernel: audit: type=1334 audit(1734106035.486:93): prog-id=12 op=LOAD
Dec 13 16:07:16.961795 kernel: audit: type=1334 audit(1734106035.486:94): prog-id=3 op=UNLOAD
Dec 13 16:07:16.961814 kernel: audit: type=1334 audit(1734106035.531:95): prog-id=13 op=LOAD
Dec 13 16:07:16.961828 kernel: audit: type=1334 audit(1734106035.575:96): prog-id=14 op=LOAD
Dec 13 16:07:16.961841 kernel: audit: type=1334 audit(1734106035.576:97): prog-id=4 op=UNLOAD
Dec 13 16:07:16.961854 kernel: audit: type=1334 audit(1734106035.576:98): prog-id=5 op=UNLOAD
Dec 13 16:07:16.961867 kernel: audit: type=1334 audit(1734106035.619:99): prog-id=15 op=LOAD
Dec 13 16:07:16.961880 kernel: audit: type=1334 audit(1734106035.619:100): prog-id=12 op=UNLOAD
Dec 13 16:07:16.961899 kernel: audit: type=1334 audit(1734106035.661:101): prog-id=16 op=LOAD
Dec 13 16:07:16.961909 kernel: audit: type=1334 audit(1734106035.701:102): prog-id=17 op=LOAD
Dec 13 16:07:16.961923 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 16:07:16.961940 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 16:07:16.961957 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 16:07:16.961974 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 16:07:16.961991 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 16:07:16.962008 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 16:07:16.962025 systemd[1]: Created slice system-getty.slice.
Dec 13 16:07:16.962039 systemd[1]: Created slice system-modprobe.slice.
Dec 13 16:07:16.962052 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 16:07:16.962072 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 16:07:16.962089 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 16:07:16.962103 systemd[1]: Created slice user.slice.
Dec 13 16:07:16.962116 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 16:07:16.962133 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 16:07:16.962149 systemd[1]: Set up automount boot.automount.
Dec 13 16:07:16.962166 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 16:07:16.962179 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 16:07:16.962196 systemd[1]: Stopped target initrd-fs.target.
Dec 13 16:07:16.962211 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 16:07:16.962231 systemd[1]: Reached target integritysetup.target.
Dec 13 16:07:16.962247 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 16:07:16.962264 systemd[1]: Reached target remote-fs.target.
Dec 13 16:07:16.962278 systemd[1]: Reached target slices.target.
Dec 13 16:07:16.962292 systemd[1]: Reached target swap.target.
Dec 13 16:07:16.962306 systemd[1]: Reached target torcx.target.
Dec 13 16:07:16.962322 systemd[1]: Reached target veritysetup.target.
Dec 13 16:07:16.962340 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 16:07:16.962353 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 16:07:16.962370 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 16:07:16.962384 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 16:07:16.962400 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 16:07:16.962414 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 16:07:16.962428 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 16:07:16.962441 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 16:07:16.962458 systemd[1]: Mounting media.mount...
Dec 13 16:07:16.962481 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 16:07:16.962497 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 16:07:16.962509 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 16:07:16.962533 systemd[1]: Mounting tmp.mount...
Dec 13 16:07:16.962556 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 16:07:16.962575 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 16:07:16.962594 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 16:07:16.962615 systemd[1]: Starting modprobe@configfs.service...
Dec 13 16:07:16.962633 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 16:07:16.962656 systemd[1]: Starting modprobe@drm.service...
Dec 13 16:07:16.962674 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 16:07:16.962700 systemd[1]: Starting modprobe@fuse.service...
Dec 13 16:07:16.962723 kernel: fuse: init (API version 7.34)
Dec 13 16:07:16.962743 systemd[1]: Starting modprobe@loop.service...
Dec 13 16:07:16.962760 kernel: loop: module loaded
Dec 13 16:07:16.962775 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 16:07:16.962791 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 16:07:16.962810 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 16:07:16.962828 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 16:07:16.962847 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 16:07:16.962869 systemd[1]: Stopped systemd-journald.service.
Dec 13 16:07:16.962891 systemd[1]: Starting systemd-journald.service...
Dec 13 16:07:16.962913 systemd[1]: Starting systemd-modules-load.service...
Dec 13 16:07:16.962936 systemd-journald[1251]: Journal started
Dec 13 16:07:16.962990 systemd-journald[1251]: Runtime Journal (/run/log/journal/25a052a3de47496d871eafdd1bae92de) is 8.0M, max 640.1M, 632.1M free.
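
Once systemd-journald is up, entries like the ones in this log can be replayed programmatically. A sketch using the python-systemd bindings (an assumption; the package may not be installed on a stock image):

    from systemd import journal

    # Iterate this boot's messages from PID 1, i.e. the systemd lines seen here.
    j = journal.Reader()
    j.this_boot()
    j.add_match(_PID="1")
    for entry in j:
        print(entry.get("MESSAGE", ""))
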
Dec 13 16:07:13.576000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 16:07:13.847000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 16:07:13.850000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 16:07:13.850000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 16:07:13.850000 audit: BPF prog-id=10 op=LOAD
Dec 13 16:07:13.850000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 16:07:13.850000 audit: BPF prog-id=11 op=LOAD
Dec 13 16:07:13.850000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 16:07:13.932000 audit[1139]: AVC avc: denied { associate } for pid=1139 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 16:07:13.932000 audit[1139]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a78e2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1122 pid=1139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 16:07:13.932000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 16:07:13.958000 audit[1139]: AVC avc: denied { associate } for pid=1139 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 16:07:13.958000 audit[1139]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a79b9 a2=1ed a3=0 items=2 ppid=1122 pid=1139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 16:07:13.958000 audit: CWD cwd="/"
Dec 13 16:07:13.958000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:07:13.958000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 16:07:13.958000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 16:07:15.486000 audit: BPF prog-id=12 op=LOAD
Dec 13 16:07:15.486000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 16:07:15.531000 audit: BPF prog-id=13 op=LOAD
Dec 13 16:07:15.575000 audit: BPF prog-id=14 op=LOAD
Dec 13 16:07:15.576000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 16:07:15.576000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 16:07:15.619000 audit: BPF prog-id=15 op=LOAD
Dec 13 16:07:15.619000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 16:07:15.661000 audit: BPF prog-id=16 op=LOAD
Dec 13 16:07:15.701000 audit: BPF prog-id=17 op=LOAD
Dec 13 16:07:15.701000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 16:07:15.701000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 16:07:15.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:15.764000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 16:07:15.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:15.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:16.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:16.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:16.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:16.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:16.932000 audit: BPF prog-id=18 op=LOAD
Dec 13 16:07:16.933000 audit: BPF prog-id=19 op=LOAD
Dec 13 16:07:16.933000 audit: BPF prog-id=20 op=LOAD
Dec 13 16:07:16.933000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 16:07:16.933000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 16:07:16.958000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 16:07:16.958000 audit[1251]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc1c69d200 a2=4000 a3=7ffc1c69d29c items=0 ppid=1 pid=1251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 16:07:16.958000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 16:07:15.485271 systemd[1]: Queued start job for default target multi-user.target.
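
The PROCTITLE records above hex-encode the process command line, with NUL bytes separating arguments. A short sketch that decodes a prefix of the torcx-generator proctitle logged earlier:

    # Prefix of the proctitle value from the audit record above.
    hexstr = (
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72"
        "002F72756E2F73797374656D642F67656E657261746F72"
    )
    # NULs separate argv entries; replace them with spaces for display.
    print(bytes.fromhex(hexstr).replace(b"\x00", b" ").decode())
    # -> /usr/lib/systemd/system-generators/torcx-generator /run/systemd/generator
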
Dec 13 16:07:13.931671 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 16:07:15.702256 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 16:07:13.932195 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 16:07:13.932207 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 16:07:13.932229 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 16:07:13.932235 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 16:07:13.932251 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 16:07:13.932258 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 16:07:13.932369 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 16:07:13.932390 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 16:07:13.932397 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 16:07:13.932847 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 16:07:13.932866 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 16:07:13.932876 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 16:07:13.932884 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 16:07:13.932893 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 16:07:13.932900 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:13Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 16:07:15.130315 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:15Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 16:07:15.130456 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:15Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 16:07:15.130560 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:15Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 16:07:15.130656 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:15Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 16:07:15.130686 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:15Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 16:07:15.130718 /usr/lib/systemd/system-generators/torcx-generator[1139]: time="2024-12-13T16:07:15Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 16:07:16.991669 systemd[1]: Starting systemd-network-generator.service...
Dec 13 16:07:17.013485 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 16:07:17.035516 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 16:07:17.068063 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 16:07:17.068096 systemd[1]: Stopped verity-setup.service.
Dec 13 16:07:17.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.102470 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 16:07:17.117661 systemd[1]: Started systemd-journald.service.
Dec 13 16:07:17.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.125017 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 16:07:17.132733 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 16:07:17.139739 systemd[1]: Mounted media.mount.
Dec 13 16:07:17.146732 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 16:07:17.155726 systemd[1]: Mounted sys-kernel-tracing.mount.
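
The torcx-generator run above ends by sealing TORCX_* variables into /run/metadata/torcx. A minimal sketch that parses that file back into a dict, assuming simple KEY="value" lines as suggested by the "system state sealed" message:

    env = {}
    with open("/run/metadata/torcx") as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            if key:
                env[key] = value.strip('"')

    print(env.get("TORCX_PROFILE_PATH"))  # e.g. /run/torcx/profile.json
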
Dec 13 16:07:17.164700 systemd[1]: Mounted tmp.mount.
Dec 13 16:07:17.171778 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 16:07:17.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.179806 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 16:07:17.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.187834 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 16:07:17.187964 systemd[1]: Finished modprobe@configfs.service.
Dec 13 16:07:17.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.196898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 16:07:17.197040 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 16:07:17.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.205988 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 16:07:17.206183 systemd[1]: Finished modprobe@drm.service.
Dec 13 16:07:17.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.216119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 16:07:17.216351 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 16:07:17.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.226559 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 16:07:17.226978 systemd[1]: Finished modprobe@fuse.service.
Dec 13 16:07:17.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.236419 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 16:07:17.236851 systemd[1]: Finished modprobe@loop.service.
Dec 13 16:07:17.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.247382 systemd[1]: Finished systemd-modules-load.service.
Dec 13 16:07:17.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.257295 systemd[1]: Finished systemd-network-generator.service.
Dec 13 16:07:17.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.266300 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 16:07:17.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.275307 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 16:07:17.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.285978 systemd[1]: Reached target network-pre.target.
Dec 13 16:07:17.297422 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 16:07:17.308282 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 16:07:17.315731 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 16:07:17.319778 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 16:07:17.329068 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 16:07:17.337709 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 16:07:17.338219 systemd[1]: Starting systemd-random-seed.service...
Dec 13 16:07:17.338317 systemd-journald[1251]: Time spent on flushing to /var/log/journal/25a052a3de47496d871eafdd1bae92de is 15.251ms for 1595 entries.
Dec 13 16:07:17.338317 systemd-journald[1251]: System Journal (/var/log/journal/25a052a3de47496d871eafdd1bae92de) is 8.0M, max 195.6M, 187.6M free.
Dec 13 16:07:17.382778 systemd-journald[1251]: Received client request to flush runtime journal.
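
Each modprobe@<module>.service instance above loads one kernel module and exits. A quick sketch that checks the modules named in this log against /proc/modules (built-in modules will not appear there):

    wanted = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

    # The first column of /proc/modules is the module name.
    with open("/proc/modules") as f:
        loaded = {line.split()[0] for line in f}

    for mod in sorted(wanted):
        print(mod, "loaded" if mod in loaded else "not listed (possibly built-in)")
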
Dec 13 16:07:17.353604 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 16:07:17.354104 systemd[1]: Starting systemd-sysctl.service...
Dec 13 16:07:17.363082 systemd[1]: Starting systemd-sysusers.service...
Dec 13 16:07:17.370074 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 16:07:17.377572 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 16:07:17.385677 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 16:07:17.394703 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 16:07:17.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.402741 systemd[1]: Finished systemd-random-seed.service.
Dec 13 16:07:17.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.410667 systemd[1]: Finished systemd-sysctl.service.
Dec 13 16:07:17.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.418659 systemd[1]: Finished systemd-sysusers.service.
Dec 13 16:07:17.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.427631 systemd[1]: Reached target first-boot-complete.target.
Dec 13 16:07:17.436205 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 16:07:17.445618 udevadm[1267]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 16:07:17.454149 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 16:07:17.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.636550 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 16:07:17.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.644000 audit: BPF prog-id=21 op=LOAD
Dec 13 16:07:17.645000 audit: BPF prog-id=22 op=LOAD
Dec 13 16:07:17.645000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 16:07:17.645000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 16:07:17.645831 systemd[1]: Starting systemd-udevd.service...
Dec 13 16:07:17.657272 systemd-udevd[1271]: Using default interface naming scheme 'v252'.
Dec 13 16:07:17.674233 systemd[1]: Started systemd-udevd.service.
Dec 13 16:07:17.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 16:07:17.684709 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped.
Dec 13 16:07:17.685000 audit: BPF prog-id=23 op=LOAD Dec 13 16:07:17.685873 systemd[1]: Starting systemd-networkd.service... Dec 13 16:07:17.718875 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Dec 13 16:07:17.718962 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 16:07:17.718998 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 16:07:17.718000 audit: BPF prog-id=24 op=LOAD Dec 13 16:07:17.718000 audit: BPF prog-id=25 op=LOAD Dec 13 16:07:17.735441 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sdb6 scanned by (udev-worker) (1278) Dec 13 16:07:17.756473 kernel: ACPI: button: Power Button [PWRF] Dec 13 16:07:17.769000 audit: BPF prog-id=26 op=LOAD Dec 13 16:07:17.733000 audit[1339]: AVC avc: denied { confidentiality } for pid=1339 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 16:07:17.771719 systemd[1]: Starting systemd-userdbd.service... Dec 13 16:07:17.786538 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 16:07:17.786567 kernel: IPMI message handler: version 39.2 Dec 13 16:07:17.733000 audit[1339]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55cac0b29ea0 a1=4d98c a2=7f9346a23bc5 a3=5 items=42 ppid=1271 pid=1339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 16:07:17.733000 audit: CWD cwd="/" Dec 13 16:07:17.733000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=1 name=(null) inode=19808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=2 name=(null) inode=19808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=3 name=(null) inode=19809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=4 name=(null) inode=19808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=5 name=(null) inode=19810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=6 name=(null) inode=19808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=7 name=(null) inode=19811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=8 name=(null) inode=19811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=9 name=(null) inode=19812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=10 name=(null) inode=19811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=11 name=(null) inode=19813 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=12 name=(null) inode=19811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=13 name=(null) inode=19814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=14 name=(null) inode=19811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=15 name=(null) inode=19815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=16 name=(null) inode=19811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=17 name=(null) inode=19816 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=18 name=(null) inode=19808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=19 name=(null) inode=19817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=20 name=(null) inode=19817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=21 name=(null) inode=19818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=22 name=(null) inode=19817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=23 name=(null) inode=19819 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=24 name=(null) inode=19817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=25 name=(null) 
inode=19820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=26 name=(null) inode=19817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=27 name=(null) inode=19821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=28 name=(null) inode=19817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=29 name=(null) inode=19822 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=30 name=(null) inode=19808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=31 name=(null) inode=19823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=32 name=(null) inode=19823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=33 name=(null) inode=19824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=34 name=(null) inode=19823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=35 name=(null) inode=19825 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=36 name=(null) inode=19823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=37 name=(null) inode=19826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=38 name=(null) inode=19823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=39 name=(null) inode=19827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=40 name=(null) inode=19823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PATH item=41 name=(null) inode=19828 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 16:07:17.733000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 16:07:17.806052 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 16:07:17.817478 kernel: ipmi device interface Dec 13 16:07:17.817529 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Dec 13 16:07:17.866837 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Dec 13 16:07:17.866976 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Dec 13 16:07:17.867076 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Dec 13 16:07:17.900359 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Dec 13 16:07:17.916472 kernel: iTCO_vendor_support: vendor-support=0 Dec 13 16:07:17.925603 systemd[1]: Started systemd-userdbd.service. Dec 13 16:07:17.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:17.968453 kernel: ipmi_si: IPMI System Interface driver Dec 13 16:07:17.968485 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Dec 13 16:07:18.001651 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Dec 13 16:07:18.001674 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Dec 13 16:07:18.001691 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Dec 13 16:07:18.094868 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Dec 13 16:07:18.095080 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Dec 13 16:07:18.095268 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Dec 13 16:07:18.095423 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Dec 13 16:07:18.095582 kernel: ipmi_si: Adding ACPI-specified kcs state machine Dec 13 16:07:18.095611 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Dec 13 16:07:18.073022 systemd-networkd[1311]: bond0: netdev ready Dec 13 16:07:18.076131 systemd-networkd[1311]: lo: Link UP Dec 13 16:07:18.076134 systemd-networkd[1311]: lo: Gained carrier Dec 13 16:07:18.076604 systemd-networkd[1311]: Enumeration completed Dec 13 16:07:18.076691 systemd[1]: Started systemd-networkd.service. Dec 13 16:07:18.076884 systemd-networkd[1311]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Dec 13 16:07:18.077555 systemd-networkd[1311]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:79.network. Dec 13 16:07:18.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:18.188987 kernel: intel_rapl_common: Found RAPL domain package Dec 13 16:07:18.189020 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. 
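The ipmi_si probe above registers a KCS interface to the BMC at I/O port 0xca2. As a hedged illustration only — ipmitool is not part of a stock Flatcar image, so its presence is an assumption — the BMC the kernel just found could be queried like this:

    # Illustrative only; ipmitool availability on this host is an assumption.
    $ ipmitool mc info            # device/manufacturer IDs should match the
                                  # man_id/prod_id/dev_id the kernel logs below
    $ ipmitool sel elist | tail   # recent System Event Log entries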
Dec 13 16:07:18.189120 kernel: intel_rapl_common: Found RAPL domain core Dec 13 16:07:18.218468 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Dec 13 16:07:18.218571 kernel: intel_rapl_common: Found RAPL domain dram Dec 13 16:07:18.218590 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 16:07:18.220469 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Dec 13 16:07:18.223545 systemd-networkd[1311]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. Dec 13 16:07:18.309510 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 16:07:18.336497 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Dec 13 16:07:18.354503 kernel: ipmi_ssif: IPMI SSIF Interface driver Dec 13 16:07:18.437496 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 16:07:18.593477 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 16:07:18.617530 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Dec 13 16:07:18.617558 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Dec 13 16:07:18.638476 systemd-networkd[1311]: bond0: Link UP Dec 13 16:07:18.638700 systemd-networkd[1311]: enp1s0f1np1: Link UP Dec 13 16:07:18.638831 systemd-networkd[1311]: enp1s0f1np1: Gained carrier Dec 13 16:07:18.639768 systemd-networkd[1311]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. Dec 13 16:07:18.661517 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Dec 13 16:07:18.661540 kernel: bond0: active interface up! Dec 13 16:07:18.700522 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Dec 13 16:07:18.712691 systemd[1]: Finished systemd-udev-settle.service. Dec 13 16:07:18.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:18.721170 systemd[1]: Starting lvm2-activation-early.service... Dec 13 16:07:18.737115 lvm[1376]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 16:07:18.771855 systemd[1]: Finished lvm2-activation-early.service. Dec 13 16:07:18.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:18.780588 systemd[1]: Reached target cryptsetup.target. Dec 13 16:07:18.789108 systemd[1]: Starting lvm2-activation.service... Dec 13 16:07:18.791208 lvm[1377]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 16:07:18.821472 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:18.844511 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:18.867518 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:18.867921 systemd[1]: Finished lvm2-activation.service. Dec 13 16:07:18.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 16:07:18.884628 systemd[1]: Reached target local-fs-pre.target. Dec 13 16:07:18.891511 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:18.908578 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 16:07:18.908592 systemd[1]: Reached target local-fs.target. Dec 13 16:07:18.914512 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:18.930595 systemd[1]: Reached target machines.target. Dec 13 16:07:18.937534 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:18.954156 systemd[1]: Starting ldconfig.service... Dec 13 16:07:18.960480 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:18.977621 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 16:07:18.977641 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:07:18.978322 systemd[1]: Starting systemd-boot-update.service... Dec 13 16:07:18.983468 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:18.999082 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 16:07:19.005467 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.024063 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 16:07:19.026468 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.027343 systemd[1]: Starting systemd-sysext.service... Dec 13 16:07:19.027598 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1379 (bootctl) Dec 13 16:07:19.028158 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 16:07:19.047511 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.049757 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 16:07:19.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.069501 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.070697 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 16:07:19.089517 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.090156 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 16:07:19.090235 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 16:07:19.110520 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.110678 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 16:07:19.125520 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.163470 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.164837 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
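The repeated "was skipped because of an unmet condition check" messages are systemd's Condition*= mechanism: when a condition is unmet the unit is skipped cleanly rather than marked failed. A minimal sketch of such a guard, as a hypothetical drop-in (not taken from any unit on this host):

    # Hypothetical drop-in showing the Condition*= guards behind the
    # "skipped because of an unmet condition check" messages above.
    $ sudo mkdir -p /etc/systemd/system/example.service.d
    $ sudo tee /etc/systemd/system/example.service.d/guard.conf <<'EOF'
    [Unit]
    # Unit is skipped (not failed) when the path is absent or the
    # virtualization check does not match.
    ConditionPathExists=/var/lib/machines.raw
    ConditionVirtualization=xen
    EOF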
Dec 13 16:07:19.165161 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 16:07:19.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.182469 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.182491 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 16:07:19.199489 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.227568 systemd-fsck[1388]: fsck.fat 4.2 (2021-01-31) Dec 13 16:07:19.227568 systemd-fsck[1388]: /dev/sdb1: 789 files, 119291/258078 clusters Dec 13 16:07:19.228505 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 16:07:19.237469 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.238523 systemd-networkd[1311]: enp1s0f0np0: Link UP Dec 13 16:07:19.238691 systemd-networkd[1311]: bond0: Gained carrier Dec 13 16:07:19.238779 systemd-networkd[1311]: enp1s0f0np0: Gained carrier Dec 13 16:07:19.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.264427 systemd[1]: Mounting boot.mount... Dec 13 16:07:19.271260 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 16:07:19.271282 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Dec 13 16:07:19.277331 systemd[1]: Mounted boot.mount. Dec 13 16:07:19.280798 systemd-networkd[1311]: enp1s0f1np1: Link DOWN Dec 13 16:07:19.280801 systemd-networkd[1311]: enp1s0f1np1: Lost carrier Dec 13 16:07:19.297499 systemd[1]: Finished systemd-boot-update.service. Dec 13 16:07:19.301467 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 16:07:19.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.315554 (sd-sysext)[1393]: Using extensions 'kubernetes'. Dec 13 16:07:19.315735 (sd-sysext)[1393]: Merged extensions into '/usr'. Dec 13 16:07:19.324672 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 16:07:19.325383 systemd[1]: Mounting usr-share-oem.mount... Dec 13 16:07:19.332660 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 16:07:19.333307 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 16:07:19.341095 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 16:07:19.348060 systemd[1]: Starting modprobe@loop.service... Dec 13 16:07:19.354581 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 16:07:19.354650 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:07:19.354714 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
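Just above, (sd-sysext) merged a 'kubernetes' system extension into /usr. A sketch of how that merge can be inspected and refreshed; the directories shown are systemd-sysext's documented search paths, assumed rather than verified on this host:

    # Sketch: inspecting the merge systemd-sysext just performed.
    $ systemd-sysext status                  # should list /usr with the
                                             # 'kubernetes' extension merged
    $ ls /run/extensions /var/lib/extensions 2>/dev/null   # common image locations
    $ sudo systemd-sysext refresh            # re-merge after adding/removing an image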
Dec 13 16:07:19.356298 systemd[1]: Mounted usr-share-oem.mount. Dec 13 16:07:19.364720 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 16:07:19.364785 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 16:07:19.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.372752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 16:07:19.372814 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 16:07:19.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.380752 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 16:07:19.380812 systemd[1]: Finished modprobe@loop.service. Dec 13 16:07:19.386079 ldconfig[1378]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 16:07:19.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.388764 systemd[1]: Finished ldconfig.service. Dec 13 16:07:19.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.395786 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 16:07:19.395846 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 16:07:19.396366 systemd[1]: Finished systemd-sysext.service. Dec 13 16:07:19.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.405148 systemd[1]: Starting ensure-sysext.service... Dec 13 16:07:19.412046 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 16:07:19.419003 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 16:07:19.419560 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 16:07:19.421259 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
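The three "Duplicate line ... ignoring" warnings mean two tmpfiles.d fragments declare the same path; systemd-tmpfiles applies the first and ignores the rest. tmpfiles.d lines use a fixed column layout (type, path, mode, owner, group, age). A minimal hypothetical fragment:

    # Hypothetical tmpfiles.d fragment; a second fragment repeating the same
    # path would reproduce the "Duplicate line ... ignoring" warnings above.
    $ sudo tee /etc/tmpfiles.d/example.conf <<'EOF'
    d /run/lock 0755 root root -
    EOF
    $ sudo systemd-tmpfiles --create /etc/tmpfiles.d/example.conf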
Dec 13 16:07:19.421668 systemd[1]: Reloading. Dec 13 16:07:19.445776 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-12-13T16:07:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 16:07:19.445794 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2024-12-13T16:07:19Z" level=info msg="torcx already run" Dec 13 16:07:19.468473 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 16:07:19.486487 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Dec 13 16:07:19.486547 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Dec 13 16:07:19.487365 systemd-networkd[1311]: enp1s0f1np1: Link UP Dec 13 16:07:19.487529 systemd-networkd[1311]: enp1s0f1np1: Gained carrier Dec 13 16:07:19.516470 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Dec 13 16:07:19.534500 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Dec 13 16:07:19.537229 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 16:07:19.537236 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 16:07:19.548347 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 16:07:19.589000 audit: BPF prog-id=27 op=LOAD Dec 13 16:07:19.589000 audit: BPF prog-id=23 op=UNLOAD Dec 13 16:07:19.589000 audit: BPF prog-id=28 op=LOAD Dec 13 16:07:19.589000 audit: BPF prog-id=29 op=LOAD Dec 13 16:07:19.589000 audit: BPF prog-id=21 op=UNLOAD Dec 13 16:07:19.589000 audit: BPF prog-id=22 op=UNLOAD Dec 13 16:07:19.590000 audit: BPF prog-id=30 op=LOAD Dec 13 16:07:19.590000 audit: BPF prog-id=18 op=UNLOAD Dec 13 16:07:19.590000 audit: BPF prog-id=31 op=LOAD Dec 13 16:07:19.590000 audit: BPF prog-id=32 op=LOAD Dec 13 16:07:19.590000 audit: BPF prog-id=19 op=UNLOAD Dec 13 16:07:19.590000 audit: BPF prog-id=20 op=UNLOAD Dec 13 16:07:19.591000 audit: BPF prog-id=33 op=LOAD Dec 13 16:07:19.591000 audit: BPF prog-id=24 op=UNLOAD Dec 13 16:07:19.591000 audit: BPF prog-id=34 op=LOAD Dec 13 16:07:19.592000 audit: BPF prog-id=35 op=LOAD Dec 13 16:07:19.592000 audit: BPF prog-id=25 op=UNLOAD Dec 13 16:07:19.592000 audit: BPF prog-id=26 op=UNLOAD Dec 13 16:07:19.593519 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 16:07:19.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 16:07:19.603350 systemd[1]: Starting audit-rules.service... Dec 13 16:07:19.610069 systemd[1]: Starting clean-ca-certificates.service... Dec 13 16:07:19.620172 systemd[1]: Starting systemd-journal-catalog-update.service... 
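The reload above flags locksmithd.service for the deprecated cgroup-v1 directives CPUShares= and MemoryLimit=. A hypothetical drop-in with the modern equivalents; the numeric values are placeholders, not locksmithd's actual settings:

    # Hypothetical drop-in replacing the deprecated directives flagged above.
    $ sudo mkdir -p /etc/systemd/system/locksmithd.service.d
    $ sudo tee /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf <<'EOF'
    [Service]
    # Modern replacements for CPUShares= and MemoryLimit= (placeholder values).
    CPUWeight=100
    MemoryMax=128M
    EOF
    $ sudo systemctl daemon-reload    # the same "Reloading." path logged above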
Dec 13 16:07:19.619000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 16:07:19.619000 audit[1497]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe96feeaf0 a2=420 a3=0 items=0 ppid=1481 pid=1497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 16:07:19.619000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 16:07:19.620548 augenrules[1497]: No rules Dec 13 16:07:19.630552 systemd[1]: Starting systemd-resolved.service... Dec 13 16:07:19.639434 systemd[1]: Starting systemd-timesyncd.service... Dec 13 16:07:19.647067 systemd[1]: Starting systemd-update-utmp.service... Dec 13 16:07:19.654839 systemd[1]: Finished audit-rules.service. Dec 13 16:07:19.662638 systemd[1]: Finished clean-ca-certificates.service. Dec 13 16:07:19.671612 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 16:07:19.685085 systemd[1]: Finished systemd-update-utmp.service. Dec 13 16:07:19.694133 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 16:07:19.694753 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 16:07:19.702115 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 16:07:19.709082 systemd[1]: Starting modprobe@loop.service... Dec 13 16:07:19.714365 systemd-resolved[1503]: Positive Trust Anchors: Dec 13 16:07:19.714371 systemd-resolved[1503]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 16:07:19.714390 systemd-resolved[1503]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 16:07:19.715584 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 16:07:19.715656 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:07:19.716343 systemd[1]: Starting systemd-update-done.service... Dec 13 16:07:19.718336 systemd-resolved[1503]: Using system hostname 'ci-3510.3.6-a-6bc1e3250f'. Dec 13 16:07:19.723665 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 16:07:19.724152 systemd[1]: Started systemd-timesyncd.service. Dec 13 16:07:19.732767 systemd[1]: Started systemd-resolved.service. Dec 13 16:07:19.740749 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 16:07:19.740817 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 16:07:19.748750 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 16:07:19.748814 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 16:07:19.756736 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 16:07:19.756796 systemd[1]: Finished modprobe@loop.service. 
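The PROCTITLE field in the audit record above is the process command line, hex-encoded with NUL bytes separating argv entries. Decoding it shows augenrules invoking auditctl, which then reported "No rules":

    # Decoding the hex PROCTITLE above; NUL bytes separate argv words.
    $ echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
        | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules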
Dec 13 16:07:19.764727 systemd[1]: Finished systemd-update-done.service. Dec 13 16:07:19.773825 systemd[1]: Reached target network.target. Dec 13 16:07:19.781586 systemd[1]: Reached target nss-lookup.target. Dec 13 16:07:19.789591 systemd[1]: Reached target time-set.target. Dec 13 16:07:19.797695 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 16:07:19.798314 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 16:07:19.805071 systemd[1]: Starting modprobe@drm.service... Dec 13 16:07:19.812066 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 16:07:19.821714 systemd[1]: Starting modprobe@loop.service... Dec 13 16:07:19.828584 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 16:07:19.828648 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:07:19.829225 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 16:07:19.837556 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 16:07:19.838203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 16:07:19.838265 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 16:07:19.846733 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 16:07:19.846792 systemd[1]: Finished modprobe@drm.service. Dec 13 16:07:19.854724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 16:07:19.854783 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 16:07:19.862725 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 16:07:19.862783 systemd[1]: Finished modprobe@loop.service. Dec 13 16:07:19.870798 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 16:07:19.870869 systemd[1]: Reached target sysinit.target. Dec 13 16:07:19.878584 systemd[1]: Started motdgen.path. Dec 13 16:07:19.885667 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 16:07:19.895628 systemd[1]: Started logrotate.timer. Dec 13 16:07:19.902664 systemd[1]: Started mdadm.timer. Dec 13 16:07:19.909544 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 16:07:19.917535 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 16:07:19.917551 systemd[1]: Reached target paths.target. Dec 13 16:07:19.924535 systemd[1]: Reached target timers.target. Dec 13 16:07:19.931661 systemd[1]: Listening on dbus.socket. Dec 13 16:07:19.938997 systemd[1]: Starting docker.socket... Dec 13 16:07:19.946967 systemd[1]: Listening on sshd.socket. Dec 13 16:07:19.954048 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:07:19.954184 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 16:07:19.955836 systemd[1]: Finished ensure-sysext.service. Dec 13 16:07:19.964644 systemd[1]: Listening on docker.socket. Dec 13 16:07:19.971994 systemd[1]: Reached target sockets.target. Dec 13 16:07:19.980558 systemd[1]: Reached target basic.target. 
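network.target was reached with bond0 assembled from the two mlx5 ports via the 05-bond0.network and per-MAC .network files named earlier. A hypothetical minimal equivalent — the real file contents were not captured in this log, and the .netdev filename below is invented; 802.3ad matches the kernel's LACP warnings:

    # Hypothetical minimal bond configuration; contents are illustrative only.
    $ sudo tee /etc/systemd/network/25-bond0.netdev <<'EOF'
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad
    MIIMonitorSec=0.1
    EOF
    $ sudo tee /etc/systemd/network/10-1c:34:da:5c:29:78.network <<'EOF'
    [Match]
    MACAddress=1c:34:da:5c:29:78

    [Network]
    Bond=bond0
    EOF
    $ sudo networkctl reload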
Dec 13 16:07:19.987568 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 16:07:19.987583 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 16:07:19.988028 systemd[1]: Starting containerd.service... Dec 13 16:07:19.994981 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 16:07:20.004039 systemd[1]: Starting coreos-metadata.service... Dec 13 16:07:20.011066 systemd[1]: Starting dbus.service... Dec 13 16:07:20.017170 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 16:07:20.021486 jq[1527]: false Dec 13 16:07:20.024260 systemd[1]: Starting extend-filesystems.service... Dec 13 16:07:20.024922 coreos-metadata[1520]: Dec 13 16:07:20.024 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 16:07:20.030525 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 16:07:20.030973 dbus-daemon[1526]: [system] SELinux support is enabled Dec 13 16:07:20.031192 systemd[1]: Starting motdgen.service... Dec 13 16:07:20.032809 extend-filesystems[1528]: Found loop1 Dec 13 16:07:20.032809 extend-filesystems[1528]: Found sda Dec 13 16:07:20.057620 extend-filesystems[1528]: Found sdb Dec 13 16:07:20.057620 extend-filesystems[1528]: Found sdb1 Dec 13 16:07:20.057620 extend-filesystems[1528]: Found sdb2 Dec 13 16:07:20.057620 extend-filesystems[1528]: Found sdb3 Dec 13 16:07:20.057620 extend-filesystems[1528]: Found usr Dec 13 16:07:20.057620 extend-filesystems[1528]: Found sdb4 Dec 13 16:07:20.057620 extend-filesystems[1528]: Found sdb6 Dec 13 16:07:20.057620 extend-filesystems[1528]: Found sdb7 Dec 13 16:07:20.057620 extend-filesystems[1528]: Found sdb9 Dec 13 16:07:20.057620 extend-filesystems[1528]: Checking size of /dev/sdb9 Dec 13 16:07:20.057620 extend-filesystems[1528]: Resized partition /dev/sdb9 Dec 13 16:07:20.187561 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Dec 13 16:07:20.187645 coreos-metadata[1523]: Dec 13 16:07:20.034 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 16:07:20.038310 systemd[1]: Starting prepare-helm.service... Dec 13 16:07:20.187863 extend-filesystems[1544]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 16:07:20.069342 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 16:07:20.088166 systemd[1]: Starting sshd-keygen.service... Dec 13 16:07:20.102818 systemd[1]: Starting systemd-logind.service... Dec 13 16:07:20.119501 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 16:07:20.120092 systemd[1]: Starting tcsd.service... Dec 13 16:07:20.202946 update_engine[1557]: I1213 16:07:20.181783 1557 main.cc:92] Flatcar Update Engine starting Dec 13 16:07:20.202946 update_engine[1557]: I1213 16:07:20.185088 1557 update_check_scheduler.cc:74] Next update check in 3m15s Dec 13 16:07:20.125740 systemd-logind[1555]: Watching system buttons on /dev/input/event3 (Power Button) Dec 13 16:07:20.203177 jq[1558]: true Dec 13 16:07:20.125750 systemd-logind[1555]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 16:07:20.125759 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Dec 13 16:07:20.125912 systemd-logind[1555]: New seat seat0. 
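update_engine scheduled its next check in 3m15s. On Flatcar the same daemon can be polled with the bundled client; a sketch, assuming the default client is present on this host:

    # Sketch: polling the update-engine daemon whose scheduler line appears above.
    $ update_engine_client -status    # reports CURRENT_OP, e.g. UPDATE_STATUS_IDLE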
Dec 13 16:07:20.131981 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 16:07:20.132360 systemd[1]: Starting update-engine.service... Dec 13 16:07:20.146159 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 16:07:20.161915 systemd[1]: Started dbus.service. Dec 13 16:07:20.181127 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 16:07:20.181219 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 16:07:20.181371 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 16:07:20.181452 systemd[1]: Finished motdgen.service. Dec 13 16:07:20.195019 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 16:07:20.195105 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 16:07:20.213317 jq[1562]: true Dec 13 16:07:20.213909 tar[1560]: linux-amd64/helm Dec 13 16:07:20.214175 dbus-daemon[1526]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 16:07:20.218504 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Dec 13 16:07:20.218602 systemd[1]: Condition check resulted in tcsd.service being skipped. Dec 13 16:07:20.221608 systemd[1]: Started update-engine.service. Dec 13 16:07:20.222719 env[1563]: time="2024-12-13T16:07:20.222695702Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 16:07:20.231365 env[1563]: time="2024-12-13T16:07:20.231347735Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 16:07:20.233006 env[1563]: time="2024-12-13T16:07:20.232990954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 16:07:20.233605 systemd[1]: Started systemd-logind.service. Dec 13 16:07:20.233813 env[1563]: time="2024-12-13T16:07:20.233766948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 16:07:20.233813 env[1563]: time="2024-12-13T16:07:20.233781719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 16:07:20.235139 env[1563]: time="2024-12-13T16:07:20.235126332Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 16:07:20.235139 env[1563]: time="2024-12-13T16:07:20.235139221Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 16:07:20.235205 env[1563]: time="2024-12-13T16:07:20.235147893Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 16:07:20.235205 env[1563]: time="2024-12-13T16:07:20.235153689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 16:07:20.235205 env[1563]: time="2024-12-13T16:07:20.235195373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 16:07:20.235332 env[1563]: time="2024-12-13T16:07:20.235323871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 16:07:20.235406 env[1563]: time="2024-12-13T16:07:20.235395624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 16:07:20.235436 env[1563]: time="2024-12-13T16:07:20.235405855Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 16:07:20.237152 env[1563]: time="2024-12-13T16:07:20.237137498Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 16:07:20.237188 env[1563]: time="2024-12-13T16:07:20.237153143Z" level=info msg="metadata content store policy set" policy=shared Dec 13 16:07:20.241537 bash[1591]: Updated "/home/core/.ssh/authorized_keys" Dec 13 16:07:20.241932 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 16:07:20.245774 env[1563]: time="2024-12-13T16:07:20.245761145Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 16:07:20.245816 env[1563]: time="2024-12-13T16:07:20.245776747Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 16:07:20.245816 env[1563]: time="2024-12-13T16:07:20.245784870Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 16:07:20.245816 env[1563]: time="2024-12-13T16:07:20.245799628Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 16:07:20.245816 env[1563]: time="2024-12-13T16:07:20.245807817Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 16:07:20.245816 env[1563]: time="2024-12-13T16:07:20.245815623Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 16:07:20.245942 env[1563]: time="2024-12-13T16:07:20.245822684Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 16:07:20.245942 env[1563]: time="2024-12-13T16:07:20.245830028Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 16:07:20.245942 env[1563]: time="2024-12-13T16:07:20.245836849Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 16:07:20.245942 env[1563]: time="2024-12-13T16:07:20.245844532Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 16:07:20.245942 env[1563]: time="2024-12-13T16:07:20.245851435Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 16:07:20.245942 env[1563]: time="2024-12-13T16:07:20.245857525Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 16:07:20.245942 env[1563]: time="2024-12-13T16:07:20.245904668Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 16:07:20.246109 env[1563]: time="2024-12-13T16:07:20.245948664Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 16:07:20.246152 env[1563]: time="2024-12-13T16:07:20.246143442Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 16:07:20.246182 env[1563]: time="2024-12-13T16:07:20.246157496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246182 env[1563]: time="2024-12-13T16:07:20.246164778Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 16:07:20.246234 env[1563]: time="2024-12-13T16:07:20.246190414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246234 env[1563]: time="2024-12-13T16:07:20.246198087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246234 env[1563]: time="2024-12-13T16:07:20.246206481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246234 env[1563]: time="2024-12-13T16:07:20.246212807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246234 env[1563]: time="2024-12-13T16:07:20.246219741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246234 env[1563]: time="2024-12-13T16:07:20.246226481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246234 env[1563]: time="2024-12-13T16:07:20.246232398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246396 env[1563]: time="2024-12-13T16:07:20.246238344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246396 env[1563]: time="2024-12-13T16:07:20.246245462Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 16:07:20.246396 env[1563]: time="2024-12-13T16:07:20.246304044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246396 env[1563]: time="2024-12-13T16:07:20.246312742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246396 env[1563]: time="2024-12-13T16:07:20.246319180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246396 env[1563]: time="2024-12-13T16:07:20.246327730Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 16:07:20.246396 env[1563]: time="2024-12-13T16:07:20.246335238Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 16:07:20.246396 env[1563]: time="2024-12-13T16:07:20.246341261Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 16:07:20.246396 env[1563]: time="2024-12-13T16:07:20.246351469Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 16:07:20.246396 env[1563]: time="2024-12-13T16:07:20.246371390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 16:07:20.246644 env[1563]: time="2024-12-13T16:07:20.246485519Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 16:07:20.246644 env[1563]: time="2024-12-13T16:07:20.246517394Z" level=info msg="Connect containerd service" Dec 13 16:07:20.246644 env[1563]: time="2024-12-13T16:07:20.246536159Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 16:07:20.248685 env[1563]: time="2024-12-13T16:07:20.246815756Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 16:07:20.248685 env[1563]: time="2024-12-13T16:07:20.246903344Z" level=info msg="Start subscribing containerd event" Dec 13 16:07:20.248685 env[1563]: time="2024-12-13T16:07:20.246932306Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 16:07:20.248685 env[1563]: time="2024-12-13T16:07:20.246934400Z" level=info msg="Start recovering state" Dec 13 16:07:20.248685 env[1563]: time="2024-12-13T16:07:20.246955639Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 16:07:20.248685 env[1563]: time="2024-12-13T16:07:20.246973526Z" level=info msg="Start event monitor" Dec 13 16:07:20.248685 env[1563]: time="2024-12-13T16:07:20.246985027Z" level=info msg="Start snapshots syncer" Dec 13 16:07:20.248685 env[1563]: time="2024-12-13T16:07:20.246991290Z" level=info msg="Start cni network conf syncer for default" Dec 13 16:07:20.248685 env[1563]: time="2024-12-13T16:07:20.246996378Z" level=info msg="Start streaming server" Dec 13 16:07:20.248685 env[1563]: time="2024-12-13T16:07:20.246978206Z" level=info msg="containerd successfully booted in 0.024615s" Dec 13 16:07:20.251720 systemd[1]: Started containerd.service. Dec 13 16:07:20.259022 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 16:07:20.260130 systemd[1]: Started locksmithd.service. Dec 13 16:07:20.264516 systemd-networkd[1311]: bond0: Gained IPv6LL Dec 13 16:07:20.266585 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 16:07:20.266665 systemd[1]: Reached target system-config.target. Dec 13 16:07:20.274555 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 16:07:20.274632 systemd[1]: Reached target user-config.target. Dec 13 16:07:20.282523 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 16:07:20.321252 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 16:07:20.463529 tar[1560]: linux-amd64/LICENSE Dec 13 16:07:20.463598 tar[1560]: linux-amd64/README.md Dec 13 16:07:20.466027 systemd[1]: Finished prepare-helm.service. Dec 13 16:07:20.466111 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 16:07:20.477574 systemd[1]: Finished sshd-keygen.service. Dec 13 16:07:20.486348 systemd[1]: Starting issuegen.service... Dec 13 16:07:20.493711 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 16:07:20.493807 systemd[1]: Finished issuegen.service. Dec 13 16:07:20.502271 systemd[1]: Starting systemd-user-sessions.service... Dec 13 16:07:20.511703 systemd[1]: Finished systemd-user-sessions.service. Dec 13 16:07:20.520224 systemd[1]: Started getty@tty1.service. Dec 13 16:07:20.527193 systemd[1]: Started serial-getty@ttyS1.service. Dec 13 16:07:20.535648 systemd[1]: Reached target getty.target. Dec 13 16:07:20.569504 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Dec 13 16:07:20.598330 extend-filesystems[1544]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Dec 13 16:07:20.598330 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 56 Dec 13 16:07:20.598330 extend-filesystems[1544]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Dec 13 16:07:20.635558 extend-filesystems[1528]: Resized filesystem in /dev/sdb9 Dec 13 16:07:20.598777 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 16:07:20.598858 systemd[1]: Finished extend-filesystems.service. 
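The resize messages bracketing this section are self-consistent: with ext4's 4 KiB blocks, 553472 blocks is about 2.1 GiB and 116605649 blocks about 445 GiB, matching the "old_desc_blocks = 1, new_desc_blocks = 56" growth. The same online grow, done by hand, would be roughly:

    # Rough manual equivalent of the grow extend-filesystems performed;
    # ext4 resizes online, so / stays mounted throughout.
    $ sudo resize2fs /dev/sdb9
    $ df -h /      # should now show ~445 GiB (116605649 * 4 KiB blocks)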
Dec 13 16:07:20.713536 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 16:07:20.722687 systemd[1]: Reached target network-online.target. Dec 13 16:07:20.731277 systemd[1]: Starting kubelet.service... Dec 13 16:07:21.410194 systemd[1]: Started kubelet.service. Dec 13 16:07:21.528574 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Dec 13 16:07:22.084758 kubelet[1628]: E1213 16:07:22.084688 1628 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 16:07:22.086037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 16:07:22.086110 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 16:07:25.575051 login[1621]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Dec 13 16:07:25.576366 login[1622]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 16:07:25.583089 systemd[1]: Created slice user-500.slice. Dec 13 16:07:25.583757 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 16:07:25.584808 systemd-logind[1555]: New session 1 of user core. Dec 13 16:07:25.589459 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 16:07:25.590183 systemd[1]: Starting user@500.service... Dec 13 16:07:25.592312 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:07:25.685234 systemd[1651]: Queued start job for default target default.target. Dec 13 16:07:25.685482 systemd[1651]: Reached target paths.target. Dec 13 16:07:25.685493 systemd[1651]: Reached target sockets.target. Dec 13 16:07:25.685501 systemd[1651]: Reached target timers.target. Dec 13 16:07:25.685508 systemd[1651]: Reached target basic.target. Dec 13 16:07:25.685528 systemd[1651]: Reached target default.target. Dec 13 16:07:25.685542 systemd[1651]: Startup finished in 89ms. Dec 13 16:07:25.685578 systemd[1]: Started user@500.service. Dec 13 16:07:25.686244 systemd[1]: Started session-1.scope. Dec 13 16:07:25.898044 coreos-metadata[1523]: Dec 13 16:07:25.897 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Dec 13 16:07:25.898824 coreos-metadata[1520]: Dec 13 16:07:25.897 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Dec 13 16:07:26.575817 login[1621]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 16:07:26.587060 systemd-logind[1555]: New session 2 of user core. Dec 13 16:07:26.589834 systemd[1]: Started session-2.scope. Dec 13 16:07:26.898230 coreos-metadata[1520]: Dec 13 16:07:26.898 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 16:07:26.898503 coreos-metadata[1523]: Dec 13 16:07:26.898 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 16:07:27.044738 coreos-metadata[1523]: Dec 13 16:07:27.044 INFO Fetch successful Dec 13 16:07:27.079072 systemd[1]: Finished coreos-metadata.service. Dec 13 16:07:27.079891 systemd[1]: Started packet-phone-home.service. 
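The kubelet exits (status=1) because /var/lib/kubelet/config.yaml does not exist yet; that file is written during cluster bootstrap, not shipped in the image, so systemd keeps restarting the unit and logging the same error. A hypothetical sequence that would create it, assuming a kubeadm-style bootstrap that this log does not confirm:

    # Hypothetical bootstrap step that writes /var/lib/kubelet/config.yaml;
    # kubeadm's presence on this host is an assumption.
    $ sudo kubeadm init                  # or: kubeadm join <control-plane>:6443 ...
    $ sudo systemctl restart kubelet     # starts cleanly once the config exists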
Dec 13 16:07:27.085533 curl[1673]: % Total % Received % Xferd Average Speed Time Time Time Current Dec 13 16:07:27.085704 curl[1673]: Dload Upload Total Spent Left Speed Dec 13 16:07:27.172520 systemd-timesyncd[1504]: Contacted time server 108.61.73.243:123 (0.flatcar.pool.ntp.org). Dec 13 16:07:27.172674 systemd-timesyncd[1504]: Initial clock synchronization to Fri 2024-12-13 16:07:27.053243 UTC. Dec 13 16:07:27.277286 systemd[1]: Created slice system-sshd.slice. Dec 13 16:07:27.277863 systemd[1]: Started sshd@0-147.28.180.91:22-139.178.89.65:37260.service. Dec 13 16:07:27.321152 sshd[1675]: Accepted publickey for core from 139.178.89.65 port 37260 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:07:27.322407 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:07:27.326794 systemd-logind[1555]: New session 3 of user core. Dec 13 16:07:27.328164 systemd[1]: Started session-3.scope. Dec 13 16:07:27.383874 curl[1673]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Dec 13 16:07:27.384522 systemd[1]: packet-phone-home.service: Deactivated successfully. Dec 13 16:07:27.385282 systemd[1]: Started sshd@1-147.28.180.91:22-139.178.89.65:37264.service. Dec 13 16:07:27.416948 sshd[1680]: Accepted publickey for core from 139.178.89.65 port 37264 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:07:27.417670 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:07:27.419956 systemd-logind[1555]: New session 4 of user core. Dec 13 16:07:27.420661 systemd[1]: Started session-4.scope. Dec 13 16:07:27.471718 sshd[1680]: pam_unix(sshd:session): session closed for user core Dec 13 16:07:27.474724 systemd[1]: sshd@1-147.28.180.91:22-139.178.89.65:37264.service: Deactivated successfully. Dec 13 16:07:27.475451 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 16:07:27.476145 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit. Dec 13 16:07:27.477195 systemd[1]: Started sshd@2-147.28.180.91:22-139.178.89.65:37280.service. Dec 13 16:07:27.478133 systemd-logind[1555]: Removed session 4. Dec 13 16:07:27.512306 sshd[1686]: Accepted publickey for core from 139.178.89.65 port 37280 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:07:27.513261 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:07:27.516272 systemd-logind[1555]: New session 5 of user core. Dec 13 16:07:27.517337 systemd[1]: Started session-5.scope. Dec 13 16:07:27.573032 sshd[1686]: pam_unix(sshd:session): session closed for user core Dec 13 16:07:27.574278 systemd[1]: sshd@2-147.28.180.91:22-139.178.89.65:37280.service: Deactivated successfully. Dec 13 16:07:27.574675 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 16:07:27.575066 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit. Dec 13 16:07:27.575476 systemd-logind[1555]: Removed session 5. Dec 13 16:07:27.870814 coreos-metadata[1520]: Dec 13 16:07:27.870 INFO Fetch successful Dec 13 16:07:27.915473 unknown[1520]: wrote ssh authorized keys file for user: core Dec 13 16:07:27.928475 update-ssh-keys[1691]: Updated "/home/core/.ssh/authorized_keys" Dec 13 16:07:27.928728 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 16:07:27.928969 systemd[1]: Reached target multi-user.target. Dec 13 16:07:27.929667 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
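Each "Accepted publickey" line above logs the SHA256 fingerprint of the key that authenticated. It can be matched against the authorized_keys file that coreos-metadata-sshkeys just wrote for user core:

    # Matching the SHA256:izw9... fingerprint from the sshd lines to the
    # keys installed for user core:
    $ ssh-keygen -lf /home/core/.ssh/authorized_keys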
Dec 13 16:07:27.933688 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 16:07:27.933758 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 16:07:27.933906 systemd[1]: Startup finished in 1.861s (kernel) + 23.388s (initrd) + 14.703s (userspace) = 39.953s. Dec 13 16:07:32.180029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 16:07:32.180604 systemd[1]: Stopped kubelet.service. Dec 13 16:07:32.183872 systemd[1]: Starting kubelet.service... Dec 13 16:07:32.351347 systemd[1]: Started kubelet.service. Dec 13 16:07:32.434455 kubelet[1697]: E1213 16:07:32.434355 1697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 16:07:32.438101 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 16:07:32.438245 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 16:07:37.496078 systemd[1]: Started sshd@3-147.28.180.91:22-139.178.89.65:52846.service. Dec 13 16:07:37.528740 sshd[1712]: Accepted publickey for core from 139.178.89.65 port 52846 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:07:37.529708 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:07:37.533210 systemd-logind[1555]: New session 6 of user core. Dec 13 16:07:37.534096 systemd[1]: Started session-6.scope. Dec 13 16:07:37.589670 sshd[1712]: pam_unix(sshd:session): session closed for user core Dec 13 16:07:37.591367 systemd[1]: sshd@3-147.28.180.91:22-139.178.89.65:52846.service: Deactivated successfully. Dec 13 16:07:37.591692 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 16:07:37.592091 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit. Dec 13 16:07:37.592586 systemd[1]: Started sshd@4-147.28.180.91:22-139.178.89.65:52852.service. Dec 13 16:07:37.593105 systemd-logind[1555]: Removed session 6. Dec 13 16:07:37.624957 sshd[1718]: Accepted publickey for core from 139.178.89.65 port 52852 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:07:37.625963 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:07:37.629437 systemd-logind[1555]: New session 7 of user core. Dec 13 16:07:37.630285 systemd[1]: Started session-7.scope. Dec 13 16:07:37.686060 sshd[1718]: pam_unix(sshd:session): session closed for user core Dec 13 16:07:37.693138 systemd[1]: sshd@4-147.28.180.91:22-139.178.89.65:52852.service: Deactivated successfully. Dec 13 16:07:37.693426 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 16:07:37.693832 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit. Dec 13 16:07:37.694308 systemd[1]: Started sshd@5-147.28.180.91:22-139.178.89.65:52854.service. Dec 13 16:07:37.694804 systemd-logind[1555]: Removed session 7. Dec 13 16:07:37.727215 sshd[1724]: Accepted publickey for core from 139.178.89.65 port 52854 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:07:37.728299 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:07:37.731933 systemd-logind[1555]: New session 8 of user core. Dec 13 16:07:37.732834 systemd[1]: Started session-8.scope. 
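With boot complete (39.953 s total), systemd settles into restarting the failed kubelet.service on a fixed cadence: restart attempts land at 16:07:32, 16:07:42 and 16:07:52, ten seconds apart, with the restart counter incrementing each time (counters 2 and 3 appear further below). A sketch for reading the policy that produces this schedule; the property names are standard systemd, and the 10 s value is inferred from the timestamps rather than read from the unit file:

    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts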
Dec 13 16:07:37.788784 sshd[1724]: pam_unix(sshd:session): session closed for user core Dec 13 16:07:37.790221 systemd[1]: sshd@5-147.28.180.91:22-139.178.89.65:52854.service: Deactivated successfully. Dec 13 16:07:37.790545 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 16:07:37.790835 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit. Dec 13 16:07:37.791350 systemd[1]: Started sshd@6-147.28.180.91:22-139.178.89.65:52856.service. Dec 13 16:07:37.791825 systemd-logind[1555]: Removed session 8. Dec 13 16:07:37.823786 sshd[1730]: Accepted publickey for core from 139.178.89.65 port 52856 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:07:37.824777 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:07:37.828257 systemd-logind[1555]: New session 9 of user core. Dec 13 16:07:37.828986 systemd[1]: Started session-9.scope. Dec 13 16:07:37.895131 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 16:07:37.895270 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 16:07:37.908457 systemd[1]: Starting docker.service... Dec 13 16:07:37.928518 env[1747]: time="2024-12-13T16:07:37.928483296Z" level=info msg="Starting up" Dec 13 16:07:37.929395 env[1747]: time="2024-12-13T16:07:37.929355329Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 16:07:37.929395 env[1747]: time="2024-12-13T16:07:37.929367468Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 16:07:37.929395 env[1747]: time="2024-12-13T16:07:37.929381881Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 16:07:37.929395 env[1747]: time="2024-12-13T16:07:37.929389915Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 16:07:37.930431 env[1747]: time="2024-12-13T16:07:37.930391609Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 16:07:37.930431 env[1747]: time="2024-12-13T16:07:37.930404214Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 16:07:37.930431 env[1747]: time="2024-12-13T16:07:37.930418314Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 16:07:37.930431 env[1747]: time="2024-12-13T16:07:37.930428550Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 16:07:37.958782 env[1747]: time="2024-12-13T16:07:37.958750113Z" level=info msg="Loading containers: start." Dec 13 16:07:38.162498 kernel: Initializing XFRM netlink socket Dec 13 16:07:38.199484 env[1747]: time="2024-12-13T16:07:38.199433341Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 16:07:38.248881 systemd-networkd[1311]: docker0: Link UP Dec 13 16:07:38.264374 env[1747]: time="2024-12-13T16:07:38.264328584Z" level=info msg="Loading containers: done." 
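docker.service comes up with the default bridge on 172.17.0.0/16, and the daemon's own message names the knob for changing it. A hedged example of overriding the bridge address via /etc/docker/daemon.json; the 10.200.0.1/24 value is an arbitrary illustration, not taken from this host:

    # Choose a bridge subnet that cannot collide with the local network,
    # then restart the daemon so docker0 is recreated with it.
    cat >/etc/docker/daemon.json <<'EOF'
    { "bip": "10.200.0.1/24" }
    EOF
    systemctl restart docker.service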
Dec 13 16:07:38.271328 env[1747]: time="2024-12-13T16:07:38.271253493Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 16:07:38.271536 env[1747]: time="2024-12-13T16:07:38.271479221Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 16:07:38.271602 env[1747]: time="2024-12-13T16:07:38.271576145Z" level=info msg="Daemon has completed initialization" Dec 13 16:07:38.272870 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2649307189-merged.mount: Deactivated successfully. Dec 13 16:07:38.281697 systemd[1]: Started docker.service. Dec 13 16:07:38.286080 env[1747]: time="2024-12-13T16:07:38.286021248Z" level=info msg="API listen on /run/docker.sock" Dec 13 16:07:39.414904 env[1563]: time="2024-12-13T16:07:39.414789235Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 16:07:40.161915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323127498.mount: Deactivated successfully. Dec 13 16:07:41.988325 env[1563]: time="2024-12-13T16:07:41.988261881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:41.988995 env[1563]: time="2024-12-13T16:07:41.988953091Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:41.990020 env[1563]: time="2024-12-13T16:07:41.989968276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:41.990910 env[1563]: time="2024-12-13T16:07:41.990876328Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:41.991396 env[1563]: time="2024-12-13T16:07:41.991333940Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 16:07:41.996777 env[1563]: time="2024-12-13T16:07:41.996749707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 16:07:42.679241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 16:07:42.679407 systemd[1]: Stopped kubelet.service. Dec 13 16:07:42.680339 systemd[1]: Starting kubelet.service... Dec 13 16:07:42.855564 systemd[1]: Started kubelet.service. Dec 13 16:07:42.913369 kubelet[1922]: E1213 16:07:42.913306 1922 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 16:07:42.914831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 16:07:42.914930 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
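The overlay2 warning is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR built into the kernel, Docker 20.10 falls back to the slower naive diff when committing layers during image builds. Two hedged checks (the /proc/config.gz path exists only if the kernel was built with IKCONFIG_PROC):

    docker info --format '{{.Driver}}'               # expected: overlay2
    zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT  # the kernel option behind the warning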
Dec 13 16:07:44.517792 env[1563]: time="2024-12-13T16:07:44.517762384Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:44.518337 env[1563]: time="2024-12-13T16:07:44.518325401Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:44.519573 env[1563]: time="2024-12-13T16:07:44.519559083Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:44.520610 env[1563]: time="2024-12-13T16:07:44.520570449Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:44.521033 env[1563]: time="2024-12-13T16:07:44.520996856Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 16:07:44.527172 env[1563]: time="2024-12-13T16:07:44.527153842Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 16:07:45.943668 env[1563]: time="2024-12-13T16:07:45.943610983Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:45.944294 env[1563]: time="2024-12-13T16:07:45.944279760Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:45.945456 env[1563]: time="2024-12-13T16:07:45.945444321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:45.946616 env[1563]: time="2024-12-13T16:07:45.946604261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:45.947041 env[1563]: time="2024-12-13T16:07:45.947012918Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 16:07:45.952738 env[1563]: time="2024-12-13T16:07:45.952713501Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 16:07:47.017393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1746853098.mount: Deactivated successfully. 
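Each pull above finishes by resolving the tag to an immutable digest, which is why every image shows both a registry.k8s.io/...:vX.Y.Z name and a ...@sha256:... reference. The same pull can be reproduced by hand against the CRI socket; a sketch, assuming containerd's default socket path:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-controller-manager:v1.29.12
    crictl images --digests   # shows the tag-to-digest mapping logged above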
Dec 13 16:07:47.381143 env[1563]: time="2024-12-13T16:07:47.381022361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:47.381873 env[1563]: time="2024-12-13T16:07:47.381848617Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:47.383380 env[1563]: time="2024-12-13T16:07:47.383348340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:47.385011 env[1563]: time="2024-12-13T16:07:47.384979443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:47.385562 env[1563]: time="2024-12-13T16:07:47.385498852Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 16:07:47.393980 env[1563]: time="2024-12-13T16:07:47.393930720Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 16:07:47.978013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923067524.mount: Deactivated successfully. Dec 13 16:07:48.996961 env[1563]: time="2024-12-13T16:07:48.996904922Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:48.997589 env[1563]: time="2024-12-13T16:07:48.997577343Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:48.998640 env[1563]: time="2024-12-13T16:07:48.998627193Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:49.000055 env[1563]: time="2024-12-13T16:07:49.000033882Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:49.001082 env[1563]: time="2024-12-13T16:07:49.001066658Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 16:07:49.006359 env[1563]: time="2024-12-13T16:07:49.006319203Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 16:07:49.568915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869334411.mount: Deactivated successfully. 
Dec 13 16:07:49.570529 env[1563]: time="2024-12-13T16:07:49.570453371Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:49.571048 env[1563]: time="2024-12-13T16:07:49.571038177Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:49.571735 env[1563]: time="2024-12-13T16:07:49.571724759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:49.572751 env[1563]: time="2024-12-13T16:07:49.572711350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:49.573017 env[1563]: time="2024-12-13T16:07:49.572963156Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 16:07:49.578680 env[1563]: time="2024-12-13T16:07:49.578619980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 16:07:50.184585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2489453869.mount: Deactivated successfully. Dec 13 16:07:52.757854 env[1563]: time="2024-12-13T16:07:52.757785364Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:52.758392 env[1563]: time="2024-12-13T16:07:52.758362426Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:52.759307 env[1563]: time="2024-12-13T16:07:52.759254019Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:52.760724 env[1563]: time="2024-12-13T16:07:52.760682693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:52.761163 env[1563]: time="2024-12-13T16:07:52.761116349Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 16:07:52.929262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 16:07:52.929530 systemd[1]: Stopped kubelet.service. Dec 13 16:07:52.931187 systemd[1]: Starting kubelet.service... Dec 13 16:07:53.171964 systemd[1]: Started kubelet.service. 
Dec 13 16:07:53.201123 kubelet[2013]: E1213 16:07:53.201067 2013 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 16:07:53.202275 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 16:07:53.202344 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 16:07:54.559617 systemd[1]: Stopped kubelet.service. Dec 13 16:07:54.560887 systemd[1]: Starting kubelet.service... Dec 13 16:07:54.569880 systemd[1]: Reloading. Dec 13 16:07:54.605508 /usr/lib/systemd/system-generators/torcx-generator[2132]: time="2024-12-13T16:07:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 16:07:54.605524 /usr/lib/systemd/system-generators/torcx-generator[2132]: time="2024-12-13T16:07:54Z" level=info msg="torcx already run" Dec 13 16:07:54.657112 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 16:07:54.657120 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 16:07:54.668498 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 16:07:54.726960 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 16:07:54.726997 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 16:07:54.727090 systemd[1]: Stopped kubelet.service. Dec 13 16:07:54.727938 systemd[1]: Starting kubelet.service... Dec 13 16:07:54.941892 systemd[1]: Started kubelet.service. Dec 13 16:07:54.984472 kubelet[2197]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 16:07:54.984472 kubelet[2197]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 16:07:54.984472 kubelet[2197]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
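The three deprecation warnings are the kubelet asking for its flags to move into the config file it loads at startup. A hedged sketch of the equivalent KubeletConfiguration stanzas (field names per the v1beta1 API; the runtime endpoint value is illustrative, while the volume plugin dir matches the path the kubelet logs below):

    # Fragment of /var/lib/kubelet/config.yaml replacing the deprecated flags.
    cat <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF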
Dec 13 16:07:54.984811 kubelet[2197]: I1213 16:07:54.984524 2197 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 16:07:55.258835 kubelet[2197]: I1213 16:07:55.258744 2197 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 16:07:55.258835 kubelet[2197]: I1213 16:07:55.258759 2197 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 16:07:55.258923 kubelet[2197]: I1213 16:07:55.258913 2197 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 16:07:55.327389 kubelet[2197]: I1213 16:07:55.327274 2197 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 16:07:55.328605 kubelet[2197]: E1213 16:07:55.328528 2197 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://147.28.180.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 147.28.180.91:6443: connect: connection refused Dec 13 16:07:55.367073 kubelet[2197]: I1213 16:07:55.366998 2197 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 16:07:55.367157 kubelet[2197]: I1213 16:07:55.367150 2197 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 16:07:55.367282 kubelet[2197]: I1213 16:07:55.367247 2197 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 16:07:55.367743 kubelet[2197]: I1213 16:07:55.367708 2197 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 16:07:55.367743 kubelet[2197]: I1213 16:07:55.367716 2197 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 16:07:55.367797 kubelet[2197]: I1213 16:07:55.367771 2197 state_mem.go:36] "Initialized new in-memory state store" Dec 13 16:07:55.367822 kubelet[2197]: I1213 16:07:55.367818 2197 kubelet.go:396] "Attempting to sync node with API server" Dec 13 16:07:55.367842 
kubelet[2197]: I1213 16:07:55.367826 2197 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 16:07:55.367842 kubelet[2197]: I1213 16:07:55.367838 2197 kubelet.go:312] "Adding apiserver pod source" Dec 13 16:07:55.367905 kubelet[2197]: I1213 16:07:55.367844 2197 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 16:07:55.369458 kubelet[2197]: W1213 16:07:55.369387 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://147.28.180.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-6bc1e3250f&limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused Dec 13 16:07:55.369458 kubelet[2197]: W1213 16:07:55.369398 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.28.180.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused Dec 13 16:07:55.369518 kubelet[2197]: E1213 16:07:55.369469 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://147.28.180.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-6bc1e3250f&limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused Dec 13 16:07:55.369518 kubelet[2197]: E1213 16:07:55.369475 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.180.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused Dec 13 16:07:55.369609 kubelet[2197]: I1213 16:07:55.369541 2197 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 16:07:55.378366 kubelet[2197]: I1213 16:07:55.378326 2197 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 16:07:55.378366 kubelet[2197]: W1213 16:07:55.378359 2197 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 16:07:55.378673 kubelet[2197]: I1213 16:07:55.378632 2197 server.go:1256] "Started kubelet" Dec 13 16:07:55.378756 kubelet[2197]: I1213 16:07:55.378747 2197 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 16:07:55.378756 kubelet[2197]: I1213 16:07:55.378749 2197 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 16:07:55.378926 kubelet[2197]: I1213 16:07:55.378884 2197 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 16:07:55.388732 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
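Every "connection refused" against 147.28.180.91:6443 in this stretch is the normal control-plane bootstrap ordering: this kubelet is itself responsible for starting the API server as a static pod, so nothing listens on 6443 when its informers first try to list nodes and services. A quick probe while this is happening, expected to fail until the static pods further below are running:

    curl -sk --max-time 2 https://147.28.180.91:6443/healthz || echo "apiserver not up yet"
    crictl ps -a   # the control-plane containers appear here before 6443 answers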
Dec 13 16:07:55.388792 kubelet[2197]: I1213 16:07:55.388784 2197 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 16:07:55.388879 kubelet[2197]: I1213 16:07:55.388834 2197 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 16:07:55.388879 kubelet[2197]: I1213 16:07:55.388852 2197 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 16:07:55.388938 kubelet[2197]: I1213 16:07:55.388891 2197 server.go:461] "Adding debug handlers to kubelet server" Dec 13 16:07:55.388938 kubelet[2197]: I1213 16:07:55.388904 2197 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 16:07:55.389001 kubelet[2197]: E1213 16:07:55.388993 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-6bc1e3250f?timeout=10s\": dial tcp 147.28.180.91:6443: connect: connection refused" interval="200ms" Dec 13 16:07:55.389035 kubelet[2197]: W1213 16:07:55.389017 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://147.28.180.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused Dec 13 16:07:55.389061 kubelet[2197]: E1213 16:07:55.389040 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://147.28.180.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused Dec 13 16:07:55.389061 kubelet[2197]: E1213 16:07:55.389040 2197 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 16:07:55.389180 kubelet[2197]: I1213 16:07:55.389173 2197 factory.go:221] Registration of the systemd container factory successfully Dec 13 16:07:55.389241 kubelet[2197]: I1213 16:07:55.389231 2197 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 16:07:55.389660 kubelet[2197]: I1213 16:07:55.389652 2197 factory.go:221] Registration of the containerd container factory successfully Dec 13 16:07:55.393209 kubelet[2197]: E1213 16:07:55.393167 2197 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://147.28.180.91:6443/api/v1/namespaces/default/events\": dial tcp 147.28.180.91:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-6bc1e3250f.1810c849b1ca0f7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-6bc1e3250f,UID:ci-3510.3.6-a-6bc1e3250f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-6bc1e3250f,},FirstTimestamp:2024-12-13 16:07:55.378618236 +0000 UTC m=+0.432466961,LastTimestamp:2024-12-13 16:07:55.378618236 +0000 UTC m=+0.432466961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-6bc1e3250f,}" Dec 13 16:07:55.398013 kubelet[2197]: I1213 16:07:55.397998 2197 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 16:07:55.398619 kubelet[2197]: I1213 16:07:55.398580 2197 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 16:07:55.398619 kubelet[2197]: I1213 16:07:55.398596 2197 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 16:07:55.398619 kubelet[2197]: I1213 16:07:55.398606 2197 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 16:07:55.398713 kubelet[2197]: E1213 16:07:55.398630 2197 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 16:07:55.398952 kubelet[2197]: W1213 16:07:55.398936 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://147.28.180.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused Dec 13 16:07:55.399064 kubelet[2197]: E1213 16:07:55.398958 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://147.28.180.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused Dec 13 16:07:55.426036 kubelet[2197]: I1213 16:07:55.426025 2197 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 16:07:55.426036 kubelet[2197]: I1213 16:07:55.426035 2197 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 16:07:55.426105 kubelet[2197]: I1213 16:07:55.426048 2197 state_mem.go:36] "Initialized new in-memory state store" Dec 13 16:07:55.427834 kubelet[2197]: I1213 16:07:55.427825 2197 policy_none.go:49] "None policy: Start" Dec 13 16:07:55.428147 kubelet[2197]: I1213 16:07:55.428137 2197 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 16:07:55.428185 kubelet[2197]: I1213 16:07:55.428153 2197 state_mem.go:35] "Initializing new in-memory state store" Dec 13 16:07:55.431235 systemd[1]: Created slice kubepods.slice. Dec 13 16:07:55.433793 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 16:07:55.435453 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 16:07:55.455288 kubelet[2197]: I1213 16:07:55.455246 2197 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 16:07:55.455438 kubelet[2197]: I1213 16:07:55.455428 2197 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 16:07:55.456213 kubelet[2197]: E1213 16:07:55.456198 2197 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-6bc1e3250f\" not found" Dec 13 16:07:55.492964 kubelet[2197]: I1213 16:07:55.492908 2197 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.493682 kubelet[2197]: E1213 16:07:55.493643 2197 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.91:6443/api/v1/nodes\": dial tcp 147.28.180.91:6443: connect: connection refused" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.499831 kubelet[2197]: I1213 16:07:55.499746 2197 topology_manager.go:215] "Topology Admit Handler" podUID="0e2cf49945e4abb45e79041ca5457f8e" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.503092 kubelet[2197]: I1213 16:07:55.503048 2197 topology_manager.go:215] "Topology Admit Handler" podUID="700f2f89a8e966729116c5bd73549294" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.506557 kubelet[2197]: I1213 16:07:55.506513 2197 topology_manager.go:215] "Topology Admit Handler" podUID="7820fde63a96ff07a677a60767ff461b" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.519698 systemd[1]: Created slice kubepods-burstable-pod0e2cf49945e4abb45e79041ca5457f8e.slice. Dec 13 16:07:55.537890 systemd[1]: Created slice kubepods-burstable-pod700f2f89a8e966729116c5bd73549294.slice. Dec 13 16:07:55.560462 systemd[1]: Created slice kubepods-burstable-pod7820fde63a96ff07a677a60767ff461b.slice. 
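The three Topology Admit Handler lines correspond to the static pod manifests the kubelet picked up from /etc/kubernetes/manifests, and each pod gets its own kubepods-burstable-pod<UID>.slice. A hedged check (the individual file names follow the usual kubeadm convention; only the directory is confirmed by the log):

    ls /etc/kubernetes/manifests
    # typically: kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml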
Dec 13 16:07:55.590351 kubelet[2197]: E1213 16:07:55.590254 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-6bc1e3250f?timeout=10s\": dial tcp 147.28.180.91:6443: connect: connection refused" interval="400ms" Dec 13 16:07:55.689831 kubelet[2197]: I1213 16:07:55.689720 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e2cf49945e4abb45e79041ca5457f8e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-6bc1e3250f\" (UID: \"0e2cf49945e4abb45e79041ca5457f8e\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.690131 kubelet[2197]: I1213 16:07:55.689891 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e2cf49945e4abb45e79041ca5457f8e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-6bc1e3250f\" (UID: \"0e2cf49945e4abb45e79041ca5457f8e\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.690131 kubelet[2197]: I1213 16:07:55.689993 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/700f2f89a8e966729116c5bd73549294-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-6bc1e3250f\" (UID: \"700f2f89a8e966729116c5bd73549294\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.690131 kubelet[2197]: I1213 16:07:55.690087 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/700f2f89a8e966729116c5bd73549294-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-6bc1e3250f\" (UID: \"700f2f89a8e966729116c5bd73549294\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.690504 kubelet[2197]: I1213 16:07:55.690170 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/700f2f89a8e966729116c5bd73549294-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-6bc1e3250f\" (UID: \"700f2f89a8e966729116c5bd73549294\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.690504 kubelet[2197]: I1213 16:07:55.690311 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e2cf49945e4abb45e79041ca5457f8e-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-6bc1e3250f\" (UID: \"0e2cf49945e4abb45e79041ca5457f8e\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.690741 kubelet[2197]: I1213 16:07:55.690554 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/700f2f89a8e966729116c5bd73549294-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-6bc1e3250f\" (UID: \"700f2f89a8e966729116c5bd73549294\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.690741 kubelet[2197]: I1213 16:07:55.690688 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7820fde63a96ff07a677a60767ff461b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-6bc1e3250f\" (UID: \"7820fde63a96ff07a677a60767ff461b\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.690970 kubelet[2197]: I1213 16:07:55.690829 2197 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/700f2f89a8e966729116c5bd73549294-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-6bc1e3250f\" (UID: \"700f2f89a8e966729116c5bd73549294\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.698530 kubelet[2197]: I1213 16:07:55.698463 2197 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.699218 kubelet[2197]: E1213 16:07:55.699180 2197 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.91:6443/api/v1/nodes\": dial tcp 147.28.180.91:6443: connect: connection refused" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:55.835958 env[1563]: time="2024-12-13T16:07:55.835754911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-6bc1e3250f,Uid:0e2cf49945e4abb45e79041ca5457f8e,Namespace:kube-system,Attempt:0,}" Dec 13 16:07:55.856248 env[1563]: time="2024-12-13T16:07:55.856155637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-6bc1e3250f,Uid:700f2f89a8e966729116c5bd73549294,Namespace:kube-system,Attempt:0,}" Dec 13 16:07:55.866513 env[1563]: time="2024-12-13T16:07:55.866428611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-6bc1e3250f,Uid:7820fde63a96ff07a677a60767ff461b,Namespace:kube-system,Attempt:0,}" Dec 13 16:07:55.991868 kubelet[2197]: E1213 16:07:55.991767 2197 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.28.180.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-6bc1e3250f?timeout=10s\": dial tcp 147.28.180.91:6443: connect: connection refused" interval="800ms" Dec 13 16:07:56.103919 kubelet[2197]: I1213 16:07:56.103722 2197 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:56.104596 kubelet[2197]: E1213 16:07:56.104516 2197 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://147.28.180.91:6443/api/v1/nodes\": dial tcp 147.28.180.91:6443: connect: connection refused" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:56.227517 kubelet[2197]: W1213 16:07:56.227343 2197 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://147.28.180.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused Dec 13 16:07:56.227517 kubelet[2197]: E1213 16:07:56.227521 2197 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://147.28.180.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 147.28.180.91:6443: connect: connection refused Dec 13 16:07:56.426727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376243323.mount: Deactivated successfully. 
Dec 13 16:07:56.427844 env[1563]: time="2024-12-13T16:07:56.427826566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.428753 env[1563]: time="2024-12-13T16:07:56.428711885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.429297 env[1563]: time="2024-12-13T16:07:56.429257101Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.429982 env[1563]: time="2024-12-13T16:07:56.429943371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.430726 env[1563]: time="2024-12-13T16:07:56.430687532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.431511 env[1563]: time="2024-12-13T16:07:56.431460564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.433297 env[1563]: time="2024-12-13T16:07:56.433257510Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.434653 env[1563]: time="2024-12-13T16:07:56.434611898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.435098 env[1563]: time="2024-12-13T16:07:56.435057762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.436255 env[1563]: time="2024-12-13T16:07:56.436236118Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.437234 env[1563]: time="2024-12-13T16:07:56.437195603Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.437699 env[1563]: time="2024-12-13T16:07:56.437633426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:07:56.442026 env[1563]: time="2024-12-13T16:07:56.441989544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:07:56.442026 env[1563]: time="2024-12-13T16:07:56.442015220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:07:56.442026 env[1563]: time="2024-12-13T16:07:56.442022806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:07:56.442147 env[1563]: time="2024-12-13T16:07:56.442092400Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d593698b426feacecbfd7742be3d990bfca25c3c3021d030d781f36d3853f963 pid=2254 runtime=io.containerd.runc.v2 Dec 13 16:07:56.442179 env[1563]: time="2024-12-13T16:07:56.442141283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:07:56.442179 env[1563]: time="2024-12-13T16:07:56.442159478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:07:56.442179 env[1563]: time="2024-12-13T16:07:56.442166877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:07:56.442264 env[1563]: time="2024-12-13T16:07:56.442244963Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc45858a2ea3e615eea9dfe38d018c69ee092cb6b450d41dafa39cca2fd8acd8 pid=2253 runtime=io.containerd.runc.v2 Dec 13 16:07:56.444669 env[1563]: time="2024-12-13T16:07:56.444631994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:07:56.444669 env[1563]: time="2024-12-13T16:07:56.444655836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:07:56.444669 env[1563]: time="2024-12-13T16:07:56.444663191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:07:56.444812 env[1563]: time="2024-12-13T16:07:56.444730819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3faae11996500f0deb8d85f68d588d76eb7dd778f643345da4fef74d6ed0678e pid=2279 runtime=io.containerd.runc.v2 Dec 13 16:07:56.449832 systemd[1]: Started cri-containerd-d593698b426feacecbfd7742be3d990bfca25c3c3021d030d781f36d3853f963.scope. Dec 13 16:07:56.450562 systemd[1]: Started cri-containerd-dc45858a2ea3e615eea9dfe38d018c69ee092cb6b450d41dafa39cca2fd8acd8.scope. Dec 13 16:07:56.452482 systemd[1]: Started cri-containerd-3faae11996500f0deb8d85f68d588d76eb7dd778f643345da4fef74d6ed0678e.scope. 
Dec 13 16:07:56.473125 env[1563]: time="2024-12-13T16:07:56.473098156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-6bc1e3250f,Uid:700f2f89a8e966729116c5bd73549294,Namespace:kube-system,Attempt:0,} returns sandbox id \"d593698b426feacecbfd7742be3d990bfca25c3c3021d030d781f36d3853f963\"" Dec 13 16:07:56.474103 env[1563]: time="2024-12-13T16:07:56.474086340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-6bc1e3250f,Uid:0e2cf49945e4abb45e79041ca5457f8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc45858a2ea3e615eea9dfe38d018c69ee092cb6b450d41dafa39cca2fd8acd8\"" Dec 13 16:07:56.475171 env[1563]: time="2024-12-13T16:07:56.475157564Z" level=info msg="CreateContainer within sandbox \"d593698b426feacecbfd7742be3d990bfca25c3c3021d030d781f36d3853f963\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 16:07:56.475216 env[1563]: time="2024-12-13T16:07:56.475170032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-6bc1e3250f,Uid:7820fde63a96ff07a677a60767ff461b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3faae11996500f0deb8d85f68d588d76eb7dd778f643345da4fef74d6ed0678e\"" Dec 13 16:07:56.475240 env[1563]: time="2024-12-13T16:07:56.475222270Z" level=info msg="CreateContainer within sandbox \"dc45858a2ea3e615eea9dfe38d018c69ee092cb6b450d41dafa39cca2fd8acd8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 16:07:56.477514 env[1563]: time="2024-12-13T16:07:56.477492438Z" level=info msg="CreateContainer within sandbox \"3faae11996500f0deb8d85f68d588d76eb7dd778f643345da4fef74d6ed0678e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 16:07:56.481776 env[1563]: time="2024-12-13T16:07:56.481728070Z" level=info msg="CreateContainer within sandbox \"d593698b426feacecbfd7742be3d990bfca25c3c3021d030d781f36d3853f963\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"07dc967d232faab30aecd9e5c116a1a84eee5d6f658c91815b0bd5cbb21d3565\"" Dec 13 16:07:56.482141 env[1563]: time="2024-12-13T16:07:56.482098109Z" level=info msg="StartContainer for \"07dc967d232faab30aecd9e5c116a1a84eee5d6f658c91815b0bd5cbb21d3565\"" Dec 13 16:07:56.483003 env[1563]: time="2024-12-13T16:07:56.482970076Z" level=info msg="CreateContainer within sandbox \"dc45858a2ea3e615eea9dfe38d018c69ee092cb6b450d41dafa39cca2fd8acd8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7a89b34114dcfbb248c5ffe40610febb4b44604a64a538c646288c6aacfe639c\"" Dec 13 16:07:56.483214 env[1563]: time="2024-12-13T16:07:56.483202100Z" level=info msg="StartContainer for \"7a89b34114dcfbb248c5ffe40610febb4b44604a64a538c646288c6aacfe639c\"" Dec 13 16:07:56.485697 env[1563]: time="2024-12-13T16:07:56.485651194Z" level=info msg="CreateContainer within sandbox \"3faae11996500f0deb8d85f68d588d76eb7dd778f643345da4fef74d6ed0678e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1261279bbaf61a0334395bdf780cc3600d63c428b041fe4c3e66c21c903d24cd\"" Dec 13 16:07:56.486015 env[1563]: time="2024-12-13T16:07:56.485977110Z" level=info msg="StartContainer for \"1261279bbaf61a0334395bdf780cc3600d63c428b041fe4c3e66c21c903d24cd\"" Dec 13 16:07:56.490823 systemd[1]: Started cri-containerd-07dc967d232faab30aecd9e5c116a1a84eee5d6f658c91815b0bd5cbb21d3565.scope. 
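From here each control-plane container runs under its own cri-containerd-<id>.scope, driven by a dedicated containerd-shim-runc-v2 (the "starting signal loop" lines above, pids 2253, 2254 and 2279). Two hedged ways to see the same state from the host once the containers start:

    crictl ps                 # kube-apiserver, kube-controller-manager, kube-scheduler
    ctr -n k8s.io tasks ls    # the shim-managed tasks behind those containers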
Dec 13 16:07:56.492201 systemd[1]: Started cri-containerd-7a89b34114dcfbb248c5ffe40610febb4b44604a64a538c646288c6aacfe639c.scope. Dec 13 16:07:56.493886 systemd[1]: Started cri-containerd-1261279bbaf61a0334395bdf780cc3600d63c428b041fe4c3e66c21c903d24cd.scope. Dec 13 16:07:56.517329 env[1563]: time="2024-12-13T16:07:56.517297999Z" level=info msg="StartContainer for \"07dc967d232faab30aecd9e5c116a1a84eee5d6f658c91815b0bd5cbb21d3565\" returns successfully" Dec 13 16:07:56.517447 env[1563]: time="2024-12-13T16:07:56.517430230Z" level=info msg="StartContainer for \"7a89b34114dcfbb248c5ffe40610febb4b44604a64a538c646288c6aacfe639c\" returns successfully" Dec 13 16:07:56.520045 env[1563]: time="2024-12-13T16:07:56.520022427Z" level=info msg="StartContainer for \"1261279bbaf61a0334395bdf780cc3600d63c428b041fe4c3e66c21c903d24cd\" returns successfully" Dec 13 16:07:56.906670 kubelet[2197]: I1213 16:07:56.906318 2197 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:56.949927 kubelet[2197]: E1213 16:07:56.949908 2197 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.6-a-6bc1e3250f\" not found" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:57.052850 kubelet[2197]: I1213 16:07:57.052824 2197 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:57.369189 kubelet[2197]: I1213 16:07:57.369091 2197 apiserver.go:52] "Watching apiserver" Dec 13 16:07:57.389793 kubelet[2197]: I1213 16:07:57.389708 2197 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 16:07:57.416292 kubelet[2197]: E1213 16:07:57.416205 2197 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-6bc1e3250f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:57.416292 kubelet[2197]: E1213 16:07:57.416212 2197 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-6bc1e3250f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:57.416292 kubelet[2197]: E1213 16:07:57.416275 2197 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.6-a-6bc1e3250f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:07:58.424904 kubelet[2197]: W1213 16:07:58.424821 2197 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 16:08:00.144872 systemd[1]: Reloading. Dec 13 16:08:00.196250 /usr/lib/systemd/system-generators/torcx-generator[2531]: time="2024-12-13T16:08:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 16:08:00.196265 /usr/lib/systemd/system-generators/torcx-generator[2531]: time="2024-12-13T16:08:00Z" level=info msg="torcx already run" Dec 13 16:08:00.248255 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Dec 13 16:08:00.248264 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 16:08:00.259732 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 16:08:00.326919 kubelet[2197]: I1213 16:08:00.326874 2197 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 16:08:00.326941 systemd[1]: Stopping kubelet.service... Dec 13 16:08:00.346010 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 16:08:00.346111 systemd[1]: Stopped kubelet.service. Dec 13 16:08:00.346135 systemd[1]: kubelet.service: Consumed 1.026s CPU time. Dec 13 16:08:00.347004 systemd[1]: Starting kubelet.service... Dec 13 16:08:00.940191 systemd[1]: Started kubelet.service. Dec 13 16:08:00.967384 kubelet[2596]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 16:08:00.967384 kubelet[2596]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 16:08:00.967384 kubelet[2596]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 16:08:00.967614 kubelet[2596]: I1213 16:08:00.967394 2596 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 16:08:00.969960 kubelet[2596]: I1213 16:08:00.969925 2596 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 16:08:00.969960 kubelet[2596]: I1213 16:08:00.969938 2596 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 16:08:00.970088 kubelet[2596]: I1213 16:08:00.970058 2596 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 16:08:00.970874 kubelet[2596]: I1213 16:08:00.970863 2596 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 16:08:00.971851 kubelet[2596]: I1213 16:08:00.971841 2596 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 16:08:00.991407 kubelet[2596]: I1213 16:08:00.991391 2596 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 16:08:00.991521 kubelet[2596]: I1213 16:08:00.991516 2596 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 16:08:00.991623 kubelet[2596]: I1213 16:08:00.991616 2596 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 16:08:00.991686 kubelet[2596]: I1213 16:08:00.991631 2596 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 16:08:00.991686 kubelet[2596]: I1213 16:08:00.991637 2596 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 16:08:00.991686 kubelet[2596]: I1213 16:08:00.991656 2596 state_mem.go:36] "Initialized new in-memory state store" Dec 13 16:08:00.991744 kubelet[2596]: I1213 16:08:00.991703 2596 kubelet.go:396] "Attempting to sync node with API server" Dec 13 16:08:00.991744 kubelet[2596]: I1213 16:08:00.991711 2596 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 16:08:00.991744 kubelet[2596]: I1213 16:08:00.991724 2596 kubelet.go:312] "Adding apiserver pod source" Dec 13 16:08:00.991744 kubelet[2596]: I1213 16:08:00.991732 2596 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 16:08:00.992088 kubelet[2596]: I1213 16:08:00.992080 2596 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 16:08:00.992182 kubelet[2596]: I1213 16:08:00.992177 2596 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 16:08:00.992393 kubelet[2596]: I1213 16:08:00.992386 2596 server.go:1256] "Started kubelet" Dec 13 16:08:00.992451 kubelet[2596]: I1213 16:08:00.992440 2596 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 16:08:00.992499 kubelet[2596]: I1213 16:08:00.992448 2596 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 16:08:00.992587 kubelet[2596]: I1213 16:08:00.992577 2596 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 16:08:00.993003 kubelet[2596]: 
I1213 16:08:00.992996 2596 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 16:08:00.993079 kubelet[2596]: I1213 16:08:00.993068 2596 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 16:08:00.993179 kubelet[2596]: E1213 16:08:00.993082 2596 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-6bc1e3250f\" not found" Dec 13 16:08:00.993224 kubelet[2596]: I1213 16:08:00.993099 2596 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 16:08:00.993266 kubelet[2596]: I1213 16:08:00.993255 2596 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 16:08:00.993387 kubelet[2596]: E1213 16:08:00.993377 2596 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 16:08:00.993554 kubelet[2596]: I1213 16:08:00.993543 2596 server.go:461] "Adding debug handlers to kubelet server" Dec 13 16:08:00.993603 kubelet[2596]: I1213 16:08:00.993550 2596 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 16:08:00.994368 kubelet[2596]: I1213 16:08:00.994351 2596 factory.go:221] Registration of the containerd container factory successfully Dec 13 16:08:00.994368 kubelet[2596]: I1213 16:08:00.994368 2596 factory.go:221] Registration of the systemd container factory successfully Dec 13 16:08:00.998647 kubelet[2596]: I1213 16:08:00.998628 2596 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 16:08:00.999211 kubelet[2596]: I1213 16:08:00.999201 2596 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 16:08:00.999255 kubelet[2596]: I1213 16:08:00.999216 2596 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 16:08:00.999255 kubelet[2596]: I1213 16:08:00.999226 2596 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 16:08:00.999255 kubelet[2596]: E1213 16:08:00.999253 2596 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 16:08:01.008755 kubelet[2596]: I1213 16:08:01.008707 2596 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 16:08:01.008755 kubelet[2596]: I1213 16:08:01.008720 2596 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 16:08:01.008755 kubelet[2596]: I1213 16:08:01.008729 2596 state_mem.go:36] "Initialized new in-memory state store" Dec 13 16:08:01.008877 kubelet[2596]: I1213 16:08:01.008809 2596 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 16:08:01.008877 kubelet[2596]: I1213 16:08:01.008823 2596 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 16:08:01.008877 kubelet[2596]: I1213 16:08:01.008827 2596 policy_none.go:49] "None policy: Start" Dec 13 16:08:01.009096 kubelet[2596]: I1213 16:08:01.009058 2596 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 16:08:01.009096 kubelet[2596]: I1213 16:08:01.009068 2596 state_mem.go:35] "Initializing new in-memory state store" Dec 13 16:08:01.009196 kubelet[2596]: I1213 16:08:01.009157 2596 state_mem.go:75] "Updated machine memory state" Dec 13 16:08:01.011049 kubelet[2596]: I1213 16:08:01.011012 2596 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 16:08:01.011134 kubelet[2596]: I1213 16:08:01.011127 2596 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 16:08:01.039851 sudo[2639]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 16:08:01.040005 sudo[2639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 16:08:01.095217 kubelet[2596]: I1213 16:08:01.095199 2596 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.099964 kubelet[2596]: I1213 16:08:01.099921 2596 topology_manager.go:215] "Topology Admit Handler" podUID="0e2cf49945e4abb45e79041ca5457f8e" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.100012 kubelet[2596]: I1213 16:08:01.099970 2596 topology_manager.go:215] "Topology Admit Handler" podUID="700f2f89a8e966729116c5bd73549294" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.100012 kubelet[2596]: I1213 16:08:01.099994 2596 topology_manager.go:215] "Topology Admit Handler" podUID="7820fde63a96ff07a677a60767ff461b" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.111886 kubelet[2596]: W1213 16:08:01.111873 2596 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 16:08:01.111982 kubelet[2596]: W1213 16:08:01.111873 2596 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 16:08:01.112633 kubelet[2596]: W1213 16:08:01.112626 2596 warnings.go:70] metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 16:08:01.112671 kubelet[2596]: E1213 16:08:01.112656 2596 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-6bc1e3250f\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.114256 kubelet[2596]: I1213 16:08:01.114247 2596 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.114296 kubelet[2596]: I1213 16:08:01.114282 2596 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.294555 kubelet[2596]: I1213 16:08:01.294472 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7820fde63a96ff07a677a60767ff461b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-6bc1e3250f\" (UID: \"7820fde63a96ff07a677a60767ff461b\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.294555 kubelet[2596]: I1213 16:08:01.294515 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0e2cf49945e4abb45e79041ca5457f8e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-6bc1e3250f\" (UID: \"0e2cf49945e4abb45e79041ca5457f8e\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.294555 kubelet[2596]: I1213 16:08:01.294529 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/700f2f89a8e966729116c5bd73549294-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-6bc1e3250f\" (UID: \"700f2f89a8e966729116c5bd73549294\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.294555 kubelet[2596]: I1213 16:08:01.294543 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/700f2f89a8e966729116c5bd73549294-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-6bc1e3250f\" (UID: \"700f2f89a8e966729116c5bd73549294\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.294555 kubelet[2596]: I1213 16:08:01.294558 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/700f2f89a8e966729116c5bd73549294-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-6bc1e3250f\" (UID: \"700f2f89a8e966729116c5bd73549294\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.294715 kubelet[2596]: I1213 16:08:01.294590 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0e2cf49945e4abb45e79041ca5457f8e-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-6bc1e3250f\" (UID: \"0e2cf49945e4abb45e79041ca5457f8e\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.294715 kubelet[2596]: I1213 16:08:01.294617 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0e2cf49945e4abb45e79041ca5457f8e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-6bc1e3250f\" (UID: 
\"0e2cf49945e4abb45e79041ca5457f8e\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.294715 kubelet[2596]: I1213 16:08:01.294639 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/700f2f89a8e966729116c5bd73549294-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-6bc1e3250f\" (UID: \"700f2f89a8e966729116c5bd73549294\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.294715 kubelet[2596]: I1213 16:08:01.294658 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/700f2f89a8e966729116c5bd73549294-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-6bc1e3250f\" (UID: \"700f2f89a8e966729116c5bd73549294\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:01.378582 sudo[2639]: pam_unix(sudo:session): session closed for user root Dec 13 16:08:01.993073 kubelet[2596]: I1213 16:08:01.992960 2596 apiserver.go:52] "Watching apiserver" Dec 13 16:08:02.012673 kubelet[2596]: W1213 16:08:02.012619 2596 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 16:08:02.012908 kubelet[2596]: E1213 16:08:02.012790 2596 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-6bc1e3250f\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-6bc1e3250f" Dec 13 16:08:02.072443 kubelet[2596]: I1213 16:08:02.072375 2596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-6bc1e3250f" podStartSLOduration=1.072240633 podStartE2EDuration="1.072240633s" podCreationTimestamp="2024-12-13 16:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:08:02.072116693 +0000 UTC m=+1.125914599" watchObservedRunningTime="2024-12-13 16:08:02.072240633 +0000 UTC m=+1.126038496" Dec 13 16:08:02.094091 kubelet[2596]: I1213 16:08:02.094000 2596 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 16:08:02.104409 kubelet[2596]: I1213 16:08:02.104344 2596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-6bc1e3250f" podStartSLOduration=4.104220625 podStartE2EDuration="4.104220625s" podCreationTimestamp="2024-12-13 16:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:08:02.087608117 +0000 UTC m=+1.141406005" watchObservedRunningTime="2024-12-13 16:08:02.104220625 +0000 UTC m=+1.158018534" Dec 13 16:08:02.104832 kubelet[2596]: I1213 16:08:02.104613 2596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-6bc1e3250f" podStartSLOduration=1.104547025 podStartE2EDuration="1.104547025s" podCreationTimestamp="2024-12-13 16:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:08:02.104449038 +0000 UTC m=+1.158246929" watchObservedRunningTime="2024-12-13 16:08:02.104547025 +0000 UTC m=+1.158344910" Dec 13 
16:08:02.746302 sudo[1733]: pam_unix(sudo:session): session closed for user root Dec 13 16:08:02.749258 sshd[1730]: pam_unix(sshd:session): session closed for user core Dec 13 16:08:02.755661 systemd[1]: sshd@6-147.28.180.91:22-139.178.89.65:52856.service: Deactivated successfully. Dec 13 16:08:02.757463 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 16:08:02.757876 systemd[1]: session-9.scope: Consumed 3.165s CPU time. Dec 13 16:08:02.759275 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. Dec 13 16:08:02.761560 systemd-logind[1555]: Removed session 9. Dec 13 16:08:04.982741 update_engine[1557]: I1213 16:08:04.982630 1557 update_attempter.cc:509] Updating boot flags... Dec 13 16:08:13.631650 kubelet[2596]: I1213 16:08:13.631589 2596 topology_manager.go:215] "Topology Admit Handler" podUID="c95ff500-35d3-4c98-b1ee-5d82130b3414" podNamespace="kube-system" podName="kube-proxy-shqt2" Dec 13 16:08:13.636148 kubelet[2596]: I1213 16:08:13.636103 2596 topology_manager.go:215] "Topology Admit Handler" podUID="fe53fc47-65c9-4783-ba4b-23933ba63b0f" podNamespace="kube-system" podName="cilium-jgdw5" Dec 13 16:08:13.639525 kubelet[2596]: I1213 16:08:13.639494 2596 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 16:08:13.639996 env[1563]: time="2024-12-13T16:08:13.639946445Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 16:08:13.640352 kubelet[2596]: I1213 16:08:13.640182 2596 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 16:08:13.641824 systemd[1]: Created slice kubepods-besteffort-podc95ff500_35d3_4c98_b1ee_5d82130b3414.slice. Dec 13 16:08:13.661655 systemd[1]: Created slice kubepods-burstable-podfe53fc47_65c9_4783_ba4b_23933ba63b0f.slice. 
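
The two "Created slice" entries show the cgroup layout the kubelet builds with the systemd cgroup driver ("CgroupDriver":"systemd" in the NodeConfig logged earlier): one slice per pod, grouped by QoS class, named kubepods-<qos>-pod<uid>.slice, with the dashes of the pod UID rewritten to underscores because systemd reserves "-" to express slice nesting. kube-proxy-shqt2 lands in besteffort and cilium-jgdw5 in burstable. A hypothetical helper reproducing the naming seen here (not kubelet's own code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName mirrors the unit names in the log, e.g.
    // kubepods-besteffort-podc95ff500_35d3_4c98_b1ee_5d82130b3414.slice
    func podSliceName(qosClass, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice",
    		qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	fmt.Println(podSliceName("besteffort", "c95ff500-35d3-4c98-b1ee-5d82130b3414"))
    }
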
Dec 13 16:08:13.672219 kubelet[2596]: I1213 16:08:13.672172 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-bpf-maps\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672219 kubelet[2596]: I1213 16:08:13.672202 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cni-path\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672219 kubelet[2596]: I1213 16:08:13.672217 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe53fc47-65c9-4783-ba4b-23933ba63b0f-hubble-tls\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672356 kubelet[2596]: I1213 16:08:13.672231 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c95ff500-35d3-4c98-b1ee-5d82130b3414-xtables-lock\") pod \"kube-proxy-shqt2\" (UID: \"c95ff500-35d3-4c98-b1ee-5d82130b3414\") " pod="kube-system/kube-proxy-shqt2" Dec 13 16:08:13.672356 kubelet[2596]: I1213 16:08:13.672243 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-hostproc\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672356 kubelet[2596]: I1213 16:08:13.672265 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-etc-cni-netd\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672356 kubelet[2596]: I1213 16:08:13.672298 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-host-proc-sys-kernel\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672356 kubelet[2596]: I1213 16:08:13.672320 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-cgroup\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672356 kubelet[2596]: I1213 16:08:13.672337 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-host-proc-sys-net\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672515 kubelet[2596]: I1213 16:08:13.672362 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-lib-modules\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672515 kubelet[2596]: I1213 16:08:13.672378 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-xtables-lock\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672515 kubelet[2596]: I1213 16:08:13.672391 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-run\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672515 kubelet[2596]: I1213 16:08:13.672410 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c95ff500-35d3-4c98-b1ee-5d82130b3414-kube-proxy\") pod \"kube-proxy-shqt2\" (UID: \"c95ff500-35d3-4c98-b1ee-5d82130b3414\") " pod="kube-system/kube-proxy-shqt2" Dec 13 16:08:13.672515 kubelet[2596]: I1213 16:08:13.672429 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe53fc47-65c9-4783-ba4b-23933ba63b0f-clustermesh-secrets\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672515 kubelet[2596]: I1213 16:08:13.672446 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-config-path\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672645 kubelet[2596]: I1213 16:08:13.672460 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgpld\" (UniqueName: \"kubernetes.io/projected/c95ff500-35d3-4c98-b1ee-5d82130b3414-kube-api-access-zgpld\") pod \"kube-proxy-shqt2\" (UID: \"c95ff500-35d3-4c98-b1ee-5d82130b3414\") " pod="kube-system/kube-proxy-shqt2" Dec 13 16:08:13.672645 kubelet[2596]: I1213 16:08:13.672480 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvjrr\" (UniqueName: \"kubernetes.io/projected/fe53fc47-65c9-4783-ba4b-23933ba63b0f-kube-api-access-gvjrr\") pod \"cilium-jgdw5\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " pod="kube-system/cilium-jgdw5" Dec 13 16:08:13.672645 kubelet[2596]: I1213 16:08:13.672493 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c95ff500-35d3-4c98-b1ee-5d82130b3414-lib-modules\") pod \"kube-proxy-shqt2\" (UID: \"c95ff500-35d3-4c98-b1ee-5d82130b3414\") " pod="kube-system/kube-proxy-shqt2" Dec 13 16:08:13.908741 kubelet[2596]: I1213 16:08:13.908585 2596 topology_manager.go:215] "Topology Admit Handler" podUID="b756a862-45cd-4910-993c-daecb4fbcf09" podNamespace="kube-system" podName="cilium-operator-5cc964979-pwfwl" Dec 13 16:08:13.918224 systemd[1]: Created slice 
kubepods-besteffort-podb756a862_45cd_4910_993c_daecb4fbcf09.slice. Dec 13 16:08:13.961844 env[1563]: time="2024-12-13T16:08:13.961705364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-shqt2,Uid:c95ff500-35d3-4c98-b1ee-5d82130b3414,Namespace:kube-system,Attempt:0,}" Dec 13 16:08:13.965173 env[1563]: time="2024-12-13T16:08:13.964959778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgdw5,Uid:fe53fc47-65c9-4783-ba4b-23933ba63b0f,Namespace:kube-system,Attempt:0,}" Dec 13 16:08:13.976853 kubelet[2596]: I1213 16:08:13.976744 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b756a862-45cd-4910-993c-daecb4fbcf09-cilium-config-path\") pod \"cilium-operator-5cc964979-pwfwl\" (UID: \"b756a862-45cd-4910-993c-daecb4fbcf09\") " pod="kube-system/cilium-operator-5cc964979-pwfwl" Dec 13 16:08:13.977156 kubelet[2596]: I1213 16:08:13.976993 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w446n\" (UniqueName: \"kubernetes.io/projected/b756a862-45cd-4910-993c-daecb4fbcf09-kube-api-access-w446n\") pod \"cilium-operator-5cc964979-pwfwl\" (UID: \"b756a862-45cd-4910-993c-daecb4fbcf09\") " pod="kube-system/cilium-operator-5cc964979-pwfwl" Dec 13 16:08:13.988101 env[1563]: time="2024-12-13T16:08:13.987931058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:08:13.988101 env[1563]: time="2024-12-13T16:08:13.988033667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:08:13.988101 env[1563]: time="2024-12-13T16:08:13.988074792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:08:13.988665 env[1563]: time="2024-12-13T16:08:13.988460466Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4821ff0272d5005071bba90b05d985fad9c237fb4da1909996565863ef8796c5 pid=2766 runtime=io.containerd.runc.v2 Dec 13 16:08:13.990202 env[1563]: time="2024-12-13T16:08:13.990039085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:08:13.990202 env[1563]: time="2024-12-13T16:08:13.990145118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:08:13.990202 env[1563]: time="2024-12-13T16:08:13.990184635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:08:13.990733 env[1563]: time="2024-12-13T16:08:13.990579086Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88 pid=2773 runtime=io.containerd.runc.v2 Dec 13 16:08:14.014861 systemd[1]: Started cri-containerd-0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88.scope. Dec 13 16:08:14.016935 systemd[1]: Started cri-containerd-4821ff0272d5005071bba90b05d985fad9c237fb4da1909996565863ef8796c5.scope. 
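
The long run of operationExecutor.VerifyControllerAttachedVolume entries above is the volume manager registering every volume of the newly admitted pods in its desired state before mounting: hostPath volumes for cilium's bpf-maps, cni-path and cgroup directories, a configmap for kube-proxy's config, a secret for clustermesh, and projected service-account tokens (the kube-api-access-* names). As a sketch with the client-go types, a hostPath volume like cilium's bpf-maps would be declared roughly as follows; the /sys/fs/bpf path is the conventional BPF filesystem mount point, assumed here rather than read from this log:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// hostPath volume analogous to cilium's "bpf-maps" volume above.
    	vol := corev1.Volume{
    		Name: "bpf-maps",
    		VolumeSource: corev1.VolumeSource{
    			HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf"},
    		},
    	}
    	fmt.Println(vol.Name, vol.HostPath.Path)
    }
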
Dec 13 16:08:14.035498 env[1563]: time="2024-12-13T16:08:14.035442576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-shqt2,Uid:c95ff500-35d3-4c98-b1ee-5d82130b3414,Namespace:kube-system,Attempt:0,} returns sandbox id \"4821ff0272d5005071bba90b05d985fad9c237fb4da1909996565863ef8796c5\"" Dec 13 16:08:14.035852 env[1563]: time="2024-12-13T16:08:14.035820262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgdw5,Uid:fe53fc47-65c9-4783-ba4b-23933ba63b0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\"" Dec 13 16:08:14.036965 env[1563]: time="2024-12-13T16:08:14.036939313Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 16:08:14.037575 env[1563]: time="2024-12-13T16:08:14.037549117Z" level=info msg="CreateContainer within sandbox \"4821ff0272d5005071bba90b05d985fad9c237fb4da1909996565863ef8796c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 16:08:14.045519 env[1563]: time="2024-12-13T16:08:14.045486188Z" level=info msg="CreateContainer within sandbox \"4821ff0272d5005071bba90b05d985fad9c237fb4da1909996565863ef8796c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95d917e23cc7c8d6c83a747988cb72690ef43cf5aa6fecbd8399e418e5fadab3\"" Dec 13 16:08:14.045910 env[1563]: time="2024-12-13T16:08:14.045866321Z" level=info msg="StartContainer for \"95d917e23cc7c8d6c83a747988cb72690ef43cf5aa6fecbd8399e418e5fadab3\"" Dec 13 16:08:14.059990 systemd[1]: Started cri-containerd-95d917e23cc7c8d6c83a747988cb72690ef43cf5aa6fecbd8399e418e5fadab3.scope. Dec 13 16:08:14.081988 env[1563]: time="2024-12-13T16:08:14.081927150Z" level=info msg="StartContainer for \"95d917e23cc7c8d6c83a747988cb72690ef43cf5aa6fecbd8399e418e5fadab3\" returns successfully" Dec 13 16:08:14.222745 env[1563]: time="2024-12-13T16:08:14.222604485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pwfwl,Uid:b756a862-45cd-4910-993c-daecb4fbcf09,Namespace:kube-system,Attempt:0,}" Dec 13 16:08:14.248336 env[1563]: time="2024-12-13T16:08:14.248155540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:08:14.248336 env[1563]: time="2024-12-13T16:08:14.248254754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:08:14.248336 env[1563]: time="2024-12-13T16:08:14.248294449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:08:14.248866 env[1563]: time="2024-12-13T16:08:14.248713795Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4 pid=2920 runtime=io.containerd.runc.v2 Dec 13 16:08:14.278044 systemd[1]: Started cri-containerd-53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4.scope. 
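
The sequence above is the CRI contract in action: the kubelet calls RunPodSandbox (containerd answers with the sandbox ID, which also names the cri-containerd-<id>.scope unit and the runc v2 shim's "starting signal loop" task path), then CreateContainer inside that sandbox, then StartContainer on the returned container ID. A rough client sketch against the CRI gRPC API; the socket path and metadata are illustrative, not a working pod definition:

    package main

    import (
    	"context"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	// RunPodSandbox first; CreateContainer/StartContainer then use the ID.
    	resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "kube-proxy-shqt2",
    				Namespace: "kube-system",
    				Uid:       "c95ff500-35d3-4c98-b1ee-5d82130b3414",
    			},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Println("sandbox:", resp.PodSandboxId)
    }
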
Dec 13 16:08:14.341270 env[1563]: time="2024-12-13T16:08:14.341218530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-pwfwl,Uid:b756a862-45cd-4910-993c-daecb4fbcf09,Namespace:kube-system,Attempt:0,} returns sandbox id \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\"" Dec 13 16:08:19.184121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880042599.mount: Deactivated successfully. Dec 13 16:08:20.881147 env[1563]: time="2024-12-13T16:08:20.881102970Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:08:20.881772 env[1563]: time="2024-12-13T16:08:20.881707961Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:08:20.882697 env[1563]: time="2024-12-13T16:08:20.882648787Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:08:20.883334 env[1563]: time="2024-12-13T16:08:20.883298959Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 16:08:20.883687 env[1563]: time="2024-12-13T16:08:20.883672252Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 16:08:20.884390 env[1563]: time="2024-12-13T16:08:20.884348827Z" level=info msg="CreateContainer within sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 16:08:20.889077 env[1563]: time="2024-12-13T16:08:20.889034413Z" level=info msg="CreateContainer within sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3\"" Dec 13 16:08:20.889413 env[1563]: time="2024-12-13T16:08:20.889402340Z" level=info msg="StartContainer for \"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3\"" Dec 13 16:08:20.897980 systemd[1]: Started cri-containerd-cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3.scope. Dec 13 16:08:20.909248 env[1563]: time="2024-12-13T16:08:20.909225917Z" level=info msg="StartContainer for \"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3\" returns successfully" Dec 13 16:08:20.914901 systemd[1]: cri-containerd-cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3.scope: Deactivated successfully. 
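
Note the shape of the PullImage round trip above (requested at 16:08:14, completed at 16:08:20): the kubelet asks for quay.io/cilium/cilium:v1.12.5@sha256:06ce2b..., a tag-plus-digest reference in which the digest pins the exact image, and containerd answers with the local image ID (the sha256:3e35b3... config digest) rather than the registry reference. Pulling the same reference directly with the containerd client would look roughly like this sketch (socket path and the k8s.io namespace assumed):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" containerd namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
    	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(img.Name(), img.Target().Digest)
    }
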
Dec 13 16:08:21.016683 kubelet[2596]: I1213 16:08:21.016666 2596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-shqt2" podStartSLOduration=8.01663975 podStartE2EDuration="8.01663975s" podCreationTimestamp="2024-12-13 16:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:08:15.059913177 +0000 UTC m=+14.113711089" watchObservedRunningTime="2024-12-13 16:08:21.01663975 +0000 UTC m=+20.070437559" Dec 13 16:08:21.892524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3-rootfs.mount: Deactivated successfully. Dec 13 16:08:21.988727 env[1563]: time="2024-12-13T16:08:21.988584952Z" level=info msg="shim disconnected" id=cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3 Dec 13 16:08:21.988727 env[1563]: time="2024-12-13T16:08:21.988683324Z" level=warning msg="cleaning up after shim disconnected" id=cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3 namespace=k8s.io Dec 13 16:08:21.988727 env[1563]: time="2024-12-13T16:08:21.988710849Z" level=info msg="cleaning up dead shim" Dec 13 16:08:22.004136 env[1563]: time="2024-12-13T16:08:22.004026402Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:08:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3090 runtime=io.containerd.runc.v2\n" Dec 13 16:08:22.058621 env[1563]: time="2024-12-13T16:08:22.058428252Z" level=info msg="CreateContainer within sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 16:08:22.072897 env[1563]: time="2024-12-13T16:08:22.072774793Z" level=info msg="CreateContainer within sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88\"" Dec 13 16:08:22.073741 env[1563]: time="2024-12-13T16:08:22.073637431Z" level=info msg="StartContainer for \"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88\"" Dec 13 16:08:22.110657 systemd[1]: Started cri-containerd-4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88.scope. Dec 13 16:08:22.131432 env[1563]: time="2024-12-13T16:08:22.131396263Z" level=info msg="StartContainer for \"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88\" returns successfully" Dec 13 16:08:22.143047 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 16:08:22.143302 systemd[1]: Stopped systemd-sysctl.service. Dec 13 16:08:22.143501 systemd[1]: Stopping systemd-sysctl.service... Dec 13 16:08:22.144846 systemd[1]: Starting systemd-sysctl.service... Dec 13 16:08:22.145268 systemd[1]: cri-containerd-4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88.scope: Deactivated successfully. Dec 13 16:08:22.151476 systemd[1]: Finished systemd-sysctl.service. 
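
The pod_startup_latency_tracker line above reports kube-proxy's startup SLO duration as 8.01663975s: watchObservedRunningTime (16:08:21.01663975) minus podCreationTimestamp (16:08:13), with nothing subtracted for image pulls since both pull timestamps are the zero time, meaning the image was already present. The arithmetic, checked in Go:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	created, _ := time.Parse(time.RFC3339, "2024-12-13T16:08:13Z")
    	observed, _ := time.Parse(time.RFC3339Nano, "2024-12-13T16:08:21.01663975Z")
    	fmt.Println(observed.Sub(created)) // 8.01663975s, as logged
    }
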
Dec 13 16:08:22.173071 env[1563]: time="2024-12-13T16:08:22.173008385Z" level=info msg="shim disconnected" id=4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88 Dec 13 16:08:22.173071 env[1563]: time="2024-12-13T16:08:22.173053083Z" level=warning msg="cleaning up after shim disconnected" id=4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88 namespace=k8s.io Dec 13 16:08:22.173071 env[1563]: time="2024-12-13T16:08:22.173064269Z" level=info msg="cleaning up dead shim" Dec 13 16:08:22.180111 env[1563]: time="2024-12-13T16:08:22.180073752Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:08:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3153 runtime=io.containerd.runc.v2\n" Dec 13 16:08:22.889187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88-rootfs.mount: Deactivated successfully. Dec 13 16:08:23.065036 env[1563]: time="2024-12-13T16:08:23.064946483Z" level=info msg="CreateContainer within sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 16:08:23.076657 env[1563]: time="2024-12-13T16:08:23.076635386Z" level=info msg="CreateContainer within sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d\"" Dec 13 16:08:23.076972 env[1563]: time="2024-12-13T16:08:23.076958484Z" level=info msg="StartContainer for \"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d\"" Dec 13 16:08:23.085819 systemd[1]: Started cri-containerd-10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d.scope. Dec 13 16:08:23.098259 env[1563]: time="2024-12-13T16:08:23.098231978Z" level=info msg="StartContainer for \"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d\" returns successfully" Dec 13 16:08:23.099517 systemd[1]: cri-containerd-10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d.scope: Deactivated successfully. Dec 13 16:08:23.109524 env[1563]: time="2024-12-13T16:08:23.109451375Z" level=info msg="shim disconnected" id=10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d Dec 13 16:08:23.109524 env[1563]: time="2024-12-13T16:08:23.109481766Z" level=warning msg="cleaning up after shim disconnected" id=10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d namespace=k8s.io Dec 13 16:08:23.109524 env[1563]: time="2024-12-13T16:08:23.109488023Z" level=info msg="cleaning up dead shim" Dec 13 16:08:23.113122 env[1563]: time="2024-12-13T16:08:23.113081219Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:08:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3207 runtime=io.containerd.runc.v2\n" Dec 13 16:08:23.892877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d-rootfs.mount: Deactivated successfully. 
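
The pattern repeating through this stretch (StartContainer returns, the cri-containerd-<id>.scope deactivates, the shim logs "shim disconnected" with a cleanup warning, then the rootfs mount is released) is cilium's init-container chain: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each run to completion inside the same sandbox 0d481a07... before the long-lived cilium-agent container starts. In containerd client terms, observing such an exit is a Wait on the task; a sketch, with task construction elided:

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    )

    // waitForExit blocks until a task (such as one of the init containers
    // above) finishes, which is when systemd logs its scope as deactivated.
    func waitForExit(ctx context.Context, task containerd.Task) (uint32, error) {
    	exitCh, err := task.Wait(ctx) // subscribe before the task exits
    	if err != nil {
    		return 0, err
    	}
    	status := <-exitCh
    	return status.ExitCode(), status.Error()
    }

    func main() {
    	fmt.Println("sketch only; obtain a containerd.Task from a loaded container")
    }
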
Dec 13 16:08:24.072510 env[1563]: time="2024-12-13T16:08:24.072405254Z" level=info msg="CreateContainer within sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 16:08:24.086763 env[1563]: time="2024-12-13T16:08:24.086646121Z" level=info msg="CreateContainer within sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f\"" Dec 13 16:08:24.087627 env[1563]: time="2024-12-13T16:08:24.087528562Z" level=info msg="StartContainer for \"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f\"" Dec 13 16:08:24.119858 systemd[1]: Started cri-containerd-5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f.scope. Dec 13 16:08:24.141954 env[1563]: time="2024-12-13T16:08:24.141879160Z" level=info msg="StartContainer for \"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f\" returns successfully" Dec 13 16:08:24.142617 systemd[1]: cri-containerd-5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f.scope: Deactivated successfully. Dec 13 16:08:24.182282 env[1563]: time="2024-12-13T16:08:24.182211139Z" level=info msg="shim disconnected" id=5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f Dec 13 16:08:24.182602 env[1563]: time="2024-12-13T16:08:24.182285241Z" level=warning msg="cleaning up after shim disconnected" id=5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f namespace=k8s.io Dec 13 16:08:24.182602 env[1563]: time="2024-12-13T16:08:24.182311802Z" level=info msg="cleaning up dead shim" Dec 13 16:08:24.192674 env[1563]: time="2024-12-13T16:08:24.192611710Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:08:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3261 runtime=io.containerd.runc.v2\n" Dec 13 16:08:24.893335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f-rootfs.mount: Deactivated successfully. Dec 13 16:08:25.081603 env[1563]: time="2024-12-13T16:08:25.081453885Z" level=info msg="CreateContainer within sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 16:08:25.100650 env[1563]: time="2024-12-13T16:08:25.100524640Z" level=info msg="CreateContainer within sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\"" Dec 13 16:08:25.101567 env[1563]: time="2024-12-13T16:08:25.101442985Z" level=info msg="StartContainer for \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\"" Dec 13 16:08:25.104877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887631285.mount: Deactivated successfully. Dec 13 16:08:25.111738 systemd[1]: Started cri-containerd-c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573.scope. 
Dec 13 16:08:25.130231 env[1563]: time="2024-12-13T16:08:25.130174548Z" level=info msg="StartContainer for \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\" returns successfully" Dec 13 16:08:25.156358 kubelet[2596]: I1213 16:08:25.156317 2596 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 16:08:25.169491 kubelet[2596]: I1213 16:08:25.169467 2596 topology_manager.go:215] "Topology Admit Handler" podUID="b9ad6381-8729-4113-899e-d51010de08f6" podNamespace="kube-system" podName="coredns-76f75df574-p7fpn" Dec 13 16:08:25.170354 kubelet[2596]: I1213 16:08:25.170342 2596 topology_manager.go:215] "Topology Admit Handler" podUID="d7be7431-f417-4c00-834e-e2d456f73cae" podNamespace="kube-system" podName="coredns-76f75df574-2zb9w" Dec 13 16:08:25.172591 systemd[1]: Created slice kubepods-burstable-podb9ad6381_8729_4113_899e_d51010de08f6.slice. Dec 13 16:08:25.174751 systemd[1]: Created slice kubepods-burstable-podd7be7431_f417_4c00_834e_e2d456f73cae.slice. Dec 13 16:08:25.178523 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Dec 13 16:08:25.350614 kubelet[2596]: I1213 16:08:25.350574 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9ad6381-8729-4113-899e-d51010de08f6-config-volume\") pod \"coredns-76f75df574-p7fpn\" (UID: \"b9ad6381-8729-4113-899e-d51010de08f6\") " pod="kube-system/coredns-76f75df574-p7fpn" Dec 13 16:08:25.350614 kubelet[2596]: I1213 16:08:25.350605 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7be7431-f417-4c00-834e-e2d456f73cae-config-volume\") pod \"coredns-76f75df574-2zb9w\" (UID: \"d7be7431-f417-4c00-834e-e2d456f73cae\") " pod="kube-system/coredns-76f75df574-2zb9w" Dec 13 16:08:25.350734 kubelet[2596]: I1213 16:08:25.350623 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p5qk\" (UniqueName: \"kubernetes.io/projected/d7be7431-f417-4c00-834e-e2d456f73cae-kube-api-access-4p5qk\") pod \"coredns-76f75df574-2zb9w\" (UID: \"d7be7431-f417-4c00-834e-e2d456f73cae\") " pod="kube-system/coredns-76f75df574-2zb9w" Dec 13 16:08:25.350734 kubelet[2596]: I1213 16:08:25.350639 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94rdw\" (UniqueName: \"kubernetes.io/projected/b9ad6381-8729-4113-899e-d51010de08f6-kube-api-access-94rdw\") pod \"coredns-76f75df574-p7fpn\" (UID: \"b9ad6381-8729-4113-899e-d51010de08f6\") " pod="kube-system/coredns-76f75df574-p7fpn" Dec 13 16:08:25.388548 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
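
The two kernel "Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on" lines are emitted as cilium starts loading BPF programs: the kernel warns whenever unprivileged eBPF is permitted on a CPU mitigating Spectre v2 with eIBRS, since that combination leaves BHB-style attack gadgets reachable. The knob behind the warning is the kernel.unprivileged_bpf_disabled sysctl (0 allows unprivileged eBPF). A quick check of the live value by reading procfs directly:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// 0 = unprivileged eBPF allowed (what triggers the warning in the log),
    	// 1 = disabled permanently, 2 = disabled but can be toggled back.
    	b, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	fmt.Println("kernel.unprivileged_bpf_disabled =", strings.TrimSpace(string(b)))
    }
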
Dec 13 16:08:25.474154 env[1563]: time="2024-12-13T16:08:25.474101127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p7fpn,Uid:b9ad6381-8729-4113-899e-d51010de08f6,Namespace:kube-system,Attempt:0,}" Dec 13 16:08:25.476505 env[1563]: time="2024-12-13T16:08:25.476463317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2zb9w,Uid:d7be7431-f417-4c00-834e-e2d456f73cae,Namespace:kube-system,Attempt:0,}" Dec 13 16:08:26.103125 kubelet[2596]: I1213 16:08:26.103080 2596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jgdw5" podStartSLOduration=6.256148415 podStartE2EDuration="13.10304264s" podCreationTimestamp="2024-12-13 16:08:13 +0000 UTC" firstStartedPulling="2024-12-13 16:08:14.036578086 +0000 UTC m=+13.090375912" lastFinishedPulling="2024-12-13 16:08:20.883472324 +0000 UTC m=+19.937270137" observedRunningTime="2024-12-13 16:08:26.102772316 +0000 UTC m=+25.156570137" watchObservedRunningTime="2024-12-13 16:08:26.10304264 +0000 UTC m=+25.156840450" Dec 13 16:08:26.792491 env[1563]: time="2024-12-13T16:08:26.792459375Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:08:26.793029 env[1563]: time="2024-12-13T16:08:26.793017527Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:08:26.793661 env[1563]: time="2024-12-13T16:08:26.793649734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 16:08:26.794310 env[1563]: time="2024-12-13T16:08:26.794253352Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 16:08:26.795331 env[1563]: time="2024-12-13T16:08:26.795318097Z" level=info msg="CreateContainer within sandbox \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 16:08:26.800420 env[1563]: time="2024-12-13T16:08:26.800403864Z" level=info msg="CreateContainer within sandbox \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\"" Dec 13 16:08:26.800687 env[1563]: time="2024-12-13T16:08:26.800657970Z" level=info msg="StartContainer for \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\"" Dec 13 16:08:26.808750 systemd[1]: Started cri-containerd-2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9.scope. 
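
For cilium-jgdw5 the startup tracker above shows how image pulls are excluded from the SLO figure: end-to-end startup is 13.10304264s (16:08:26.10304264 minus the 16:08:13 creation time), the pull of the cilium image took about 6.8469s (16:08:14.036578086 to 16:08:20.883472324), and the reported podStartSLOduration of ~6.2561s is the difference. Checking the arithmetic:

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	t, err := time.Parse(time.RFC3339Nano, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	// Timestamps as printed by pod_startup_latency_tracker for cilium-jgdw5.
    	created := mustParse("2024-12-13T16:08:13Z")
    	pullStart := mustParse("2024-12-13T16:08:14.036578086Z")
    	pullDone := mustParse("2024-12-13T16:08:20.883472324Z")
    	running := mustParse("2024-12-13T16:08:26.10304264Z")

    	e2e := running.Sub(created)          // 13.10304264s, the podStartE2EDuration
    	slo := e2e - pullDone.Sub(pullStart) // ~6.2561s, the podStartSLOduration
    	fmt.Println(e2e, slo)
    }
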
Dec 13 16:08:26.820427 env[1563]: time="2024-12-13T16:08:26.820402777Z" level=info msg="StartContainer for \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\" returns successfully" Dec 13 16:08:27.102572 kubelet[2596]: I1213 16:08:27.102476 2596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-pwfwl" podStartSLOduration=1.649731886 podStartE2EDuration="14.102416248s" podCreationTimestamp="2024-12-13 16:08:13 +0000 UTC" firstStartedPulling="2024-12-13 16:08:14.341827178 +0000 UTC m=+13.395624995" lastFinishedPulling="2024-12-13 16:08:26.794511545 +0000 UTC m=+25.848309357" observedRunningTime="2024-12-13 16:08:27.101918334 +0000 UTC m=+26.155716165" watchObservedRunningTime="2024-12-13 16:08:27.102416248 +0000 UTC m=+26.156214071" Dec 13 16:08:30.794439 systemd-networkd[1311]: cilium_host: Link UP Dec 13 16:08:30.794540 systemd-networkd[1311]: cilium_net: Link UP Dec 13 16:08:30.801570 systemd-networkd[1311]: cilium_net: Gained carrier Dec 13 16:08:30.808791 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 16:08:30.808853 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 16:08:30.808897 systemd-networkd[1311]: cilium_host: Gained carrier Dec 13 16:08:30.855128 systemd-networkd[1311]: cilium_vxlan: Link UP Dec 13 16:08:30.855132 systemd-networkd[1311]: cilium_vxlan: Gained carrier Dec 13 16:08:30.988511 kernel: NET: Registered PF_ALG protocol family Dec 13 16:08:31.470027 systemd-networkd[1311]: lxc_health: Link UP Dec 13 16:08:31.488312 systemd-networkd[1311]: lxc_health: Gained carrier Dec 13 16:08:31.488484 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 16:08:31.752557 systemd-networkd[1311]: cilium_host: Gained IPv6LL Dec 13 16:08:31.816578 systemd-networkd[1311]: cilium_net: Gained IPv6LL Dec 13 16:08:32.016066 systemd-networkd[1311]: lxc6dbe13a508b7: Link UP Dec 13 16:08:32.052486 kernel: eth0: renamed from tmp97c19 Dec 13 16:08:32.070570 kernel: eth0: renamed from tmpc00f0 Dec 13 16:08:32.081049 systemd-networkd[1311]: lxc13517908a5ba: Link UP Dec 13 16:08:32.095233 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 16:08:32.095268 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc13517908a5ba: link becomes ready Dec 13 16:08:32.095287 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 16:08:32.109357 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6dbe13a508b7: link becomes ready Dec 13 16:08:32.109863 systemd-networkd[1311]: lxc13517908a5ba: Gained carrier Dec 13 16:08:32.110037 systemd-networkd[1311]: lxc6dbe13a508b7: Gained carrier Dec 13 16:08:32.712572 systemd-networkd[1311]: cilium_vxlan: Gained IPv6LL Dec 13 16:08:32.840629 systemd-networkd[1311]: lxc_health: Gained IPv6LL Dec 13 16:08:33.289602 systemd-networkd[1311]: lxc13517908a5ba: Gained IPv6LL Dec 13 16:08:34.120549 systemd-networkd[1311]: lxc6dbe13a508b7: Gained IPv6LL Dec 13 16:08:34.416544 env[1563]: time="2024-12-13T16:08:34.416453360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:08:34.416544 env[1563]: time="2024-12-13T16:08:34.416488015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:08:34.416544 env[1563]: time="2024-12-13T16:08:34.416496755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:08:34.416776 env[1563]: time="2024-12-13T16:08:34.416571774Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c00f050cefc1ad7748071ebc7a3d80ed16e6b3a73ae936e41096e6676a89d5aa pid=4005 runtime=io.containerd.runc.v2 Dec 13 16:08:34.416776 env[1563]: time="2024-12-13T16:08:34.416569567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:08:34.416776 env[1563]: time="2024-12-13T16:08:34.416589859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:08:34.416776 env[1563]: time="2024-12-13T16:08:34.416599380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:08:34.416776 env[1563]: time="2024-12-13T16:08:34.416664356Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97c192063158e7aa98428078518f5a4bc06782357201deaf9c2eab5a36fc3da6 pid=4006 runtime=io.containerd.runc.v2 Dec 13 16:08:34.424852 systemd[1]: Started cri-containerd-97c192063158e7aa98428078518f5a4bc06782357201deaf9c2eab5a36fc3da6.scope. Dec 13 16:08:34.425598 systemd[1]: Started cri-containerd-c00f050cefc1ad7748071ebc7a3d80ed16e6b3a73ae936e41096e6676a89d5aa.scope. Dec 13 16:08:34.446241 env[1563]: time="2024-12-13T16:08:34.446205303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2zb9w,Uid:d7be7431-f417-4c00-834e-e2d456f73cae,Namespace:kube-system,Attempt:0,} returns sandbox id \"c00f050cefc1ad7748071ebc7a3d80ed16e6b3a73ae936e41096e6676a89d5aa\"" Dec 13 16:08:34.446432 env[1563]: time="2024-12-13T16:08:34.446414206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-p7fpn,Uid:b9ad6381-8729-4113-899e-d51010de08f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"97c192063158e7aa98428078518f5a4bc06782357201deaf9c2eab5a36fc3da6\"" Dec 13 16:08:34.447444 env[1563]: time="2024-12-13T16:08:34.447427772Z" level=info msg="CreateContainer within sandbox \"c00f050cefc1ad7748071ebc7a3d80ed16e6b3a73ae936e41096e6676a89d5aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 16:08:34.447484 env[1563]: time="2024-12-13T16:08:34.447438358Z" level=info msg="CreateContainer within sandbox \"97c192063158e7aa98428078518f5a4bc06782357201deaf9c2eab5a36fc3da6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 16:08:34.452212 env[1563]: time="2024-12-13T16:08:34.452189670Z" level=info msg="CreateContainer within sandbox \"97c192063158e7aa98428078518f5a4bc06782357201deaf9c2eab5a36fc3da6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0156c709b712603a1a136eb20b1f12b6746b7168442350f233a81ad308068f90\"" Dec 13 16:08:34.452437 env[1563]: time="2024-12-13T16:08:34.452422944Z" level=info msg="StartContainer for \"0156c709b712603a1a136eb20b1f12b6746b7168442350f233a81ad308068f90\"" Dec 13 16:08:34.453171 env[1563]: time="2024-12-13T16:08:34.453157957Z" level=info msg="CreateContainer within sandbox \"c00f050cefc1ad7748071ebc7a3d80ed16e6b3a73ae936e41096e6676a89d5aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5de9966a857e2351e9fe3bb178cc14fa79a1ec03f40aaa0017be48892408838b\"" Dec 13 16:08:34.453307 env[1563]: time="2024-12-13T16:08:34.453296858Z" level=info 
msg="StartContainer for \"5de9966a857e2351e9fe3bb178cc14fa79a1ec03f40aaa0017be48892408838b\"" Dec 13 16:08:34.503912 systemd[1]: Started cri-containerd-0156c709b712603a1a136eb20b1f12b6746b7168442350f233a81ad308068f90.scope. Dec 13 16:08:34.506416 systemd[1]: Started cri-containerd-5de9966a857e2351e9fe3bb178cc14fa79a1ec03f40aaa0017be48892408838b.scope. Dec 13 16:08:34.555360 env[1563]: time="2024-12-13T16:08:34.555254993Z" level=info msg="StartContainer for \"5de9966a857e2351e9fe3bb178cc14fa79a1ec03f40aaa0017be48892408838b\" returns successfully" Dec 13 16:08:34.555360 env[1563]: time="2024-12-13T16:08:34.555255699Z" level=info msg="StartContainer for \"0156c709b712603a1a136eb20b1f12b6746b7168442350f233a81ad308068f90\" returns successfully" Dec 13 16:08:35.145230 kubelet[2596]: I1213 16:08:35.145161 2596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-p7fpn" podStartSLOduration=22.145068738 podStartE2EDuration="22.145068738s" podCreationTimestamp="2024-12-13 16:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:08:35.144253353 +0000 UTC m=+34.198051229" watchObservedRunningTime="2024-12-13 16:08:35.145068738 +0000 UTC m=+34.198866605" Dec 13 16:08:35.167954 kubelet[2596]: I1213 16:08:35.167936 2596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2zb9w" podStartSLOduration=22.167911306 podStartE2EDuration="22.167911306s" podCreationTimestamp="2024-12-13 16:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:08:35.167625576 +0000 UTC m=+34.221423395" watchObservedRunningTime="2024-12-13 16:08:35.167911306 +0000 UTC m=+34.221709116" Dec 13 16:08:39.742654 kubelet[2596]: I1213 16:08:39.742531 2596 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 16:10:35.081741 update_engine[1557]: I1213 16:10:35.081629 1557 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 16:10:35.081741 update_engine[1557]: I1213 16:10:35.081708 1557 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 16:10:35.082969 update_engine[1557]: I1213 16:10:35.082396 1557 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 16:10:35.083498 update_engine[1557]: I1213 16:10:35.083398 1557 omaha_request_params.cc:62] Current group set to lts Dec 13 16:10:35.083843 update_engine[1557]: I1213 16:10:35.083722 1557 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 16:10:35.083843 update_engine[1557]: I1213 16:10:35.083743 1557 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 16:10:35.083843 update_engine[1557]: I1213 16:10:35.083776 1557 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 16:10:35.083843 update_engine[1557]: I1213 16:10:35.083846 1557 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 16:10:35.084382 update_engine[1557]: I1213 16:10:35.083994 1557 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 16:10:35.084382 update_engine[1557]: I1213 16:10:35.084012 1557 omaha_request_action.cc:271] Request: Dec 13 16:10:35.084382 update_engine[1557]: I1213 16:10:35.084023 1557 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 16:10:35.085450 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 16:10:35.087177 update_engine[1557]: I1213 16:10:35.087103 1557 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 16:10:35.087389 update_engine[1557]: E1213 16:10:35.087338 1557 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 16:10:35.087552 update_engine[1557]: I1213 16:10:35.087527 1557 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 16:10:44.989733 update_engine[1557]: I1213 16:10:44.989607 1557 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 16:10:44.990692 update_engine[1557]: I1213 16:10:44.990139 1557 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 16:10:44.990692 update_engine[1557]: E1213 16:10:44.990346 1557 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 16:10:44.990692 update_engine[1557]: I1213 16:10:44.990543 1557 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 16:10:54.924433 systemd[1]: Started sshd@7-147.28.180.91:22-218.92.0.204:49134.service.
Dec 13 16:10:54.990648 update_engine[1557]: I1213 16:10:54.990530 1557 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 16:10:54.991461 update_engine[1557]: I1213 16:10:54.991040 1557 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 16:10:54.991461 update_engine[1557]: E1213 16:10:54.991254 1557 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 16:10:54.991461 update_engine[1557]: I1213 16:10:54.991411 1557 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 16:11:04.990736 update_engine[1557]: I1213 16:11:04.990617 1557 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 16:11:04.991693 update_engine[1557]: I1213 16:11:04.991132 1557 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 16:11:04.991693 update_engine[1557]: E1213 16:11:04.991354 1557 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 16:11:04.991693 update_engine[1557]: I1213 16:11:04.991539 1557 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 16:11:04.991693 update_engine[1557]: I1213 16:11:04.991556 1557 omaha_request_action.cc:621] Omaha request response: Dec 13 16:11:04.992123 update_engine[1557]: E1213 16:11:04.991729 1557 omaha_request_action.cc:640] Omaha request network transfer failed. Dec 13 16:11:04.992123 update_engine[1557]: I1213 16:11:04.991760 1557 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 16:11:04.992123 update_engine[1557]: I1213 16:11:04.991770 1557 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 16:11:04.992123 update_engine[1557]: I1213 16:11:04.991779 1557 update_attempter.cc:306] Processing Done. Dec 13 16:11:04.992123 update_engine[1557]: E1213 16:11:04.991804 1557 update_attempter.cc:619] Update failed. Dec 13 16:11:04.992123 update_engine[1557]: I1213 16:11:04.991813 1557 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 16:11:04.992123 update_engine[1557]: I1213 16:11:04.991822 1557 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 16:11:04.992123 update_engine[1557]: I1213 16:11:04.991832 1557 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Dec 13 16:11:04.992123 update_engine[1557]: I1213 16:11:04.991986 1557 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 16:11:04.992123 update_engine[1557]: I1213 16:11:04.992038 1557 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 16:11:04.992123 update_engine[1557]: I1213 16:11:04.992048 1557 omaha_request_action.cc:271] Request: Dec 13 16:11:04.992123 update_engine[1557]: I1213 16:11:04.992058 1557 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 16:11:04.994119 update_engine[1557]: I1213 16:11:04.992410 1557 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 16:11:04.994119 update_engine[1557]: E1213 16:11:04.992595 1557 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 16:11:04.994119 update_engine[1557]: I1213 16:11:04.992728 1557 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 16:11:04.994119 update_engine[1557]: I1213 16:11:04.992743 1557 omaha_request_action.cc:621] Omaha request response: Dec 13 16:11:04.994119 update_engine[1557]: I1213 16:11:04.992753 1557 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 16:11:04.994119 update_engine[1557]: I1213 16:11:04.992761 1557 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 16:11:04.994119 update_engine[1557]: I1213 16:11:04.992769 1557 update_attempter.cc:306] Processing Done. Dec 13 16:11:04.994119 update_engine[1557]: I1213 16:11:04.992777 1557 update_attempter.cc:310] Error event sent. Dec 13 16:11:04.994119 update_engine[1557]: I1213 16:11:04.992798 1557 update_check_scheduler.cc:74] Next update check in 42m28s Dec 13 16:11:04.994991 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 16:11:04.994991 locksmithd[1601]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 16:12:08.781523 sshd[4200]: Connection reset by 218.92.0.204 port 49134 [preauth] Dec 13 16:12:08.783360 systemd[1]: sshd@7-147.28.180.91:22-218.92.0.204:49134.service: Deactivated successfully. Dec 13 16:14:02.677417 systemd[1]: Started sshd@8-147.28.180.91:22-139.178.89.65:37596.service. Dec 13 16:14:02.711672 sshd[4229]: Accepted publickey for core from 139.178.89.65 port 37596 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:02.715149 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:02.726373 systemd-logind[1555]: New session 10 of user core. Dec 13 16:14:02.728877 systemd[1]: Started session-10.scope. Dec 13 16:14:02.835681 sshd[4229]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:02.837191 systemd[1]: sshd@8-147.28.180.91:22-139.178.89.65:37596.service: Deactivated successfully. Dec 13 16:14:02.837661 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 16:14:02.838071 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit. Dec 13 16:14:02.838502 systemd-logind[1555]: Removed session 10.
Dec 13 16:14:07.845270 systemd[1]: Started sshd@9-147.28.180.91:22-139.178.89.65:37608.service. Dec 13 16:14:07.877916 sshd[4257]: Accepted publickey for core from 139.178.89.65 port 37608 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:07.878946 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:07.882509 systemd-logind[1555]: New session 11 of user core. Dec 13 16:14:07.883346 systemd[1]: Started session-11.scope. Dec 13 16:14:07.965793 sshd[4257]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:07.967179 systemd[1]: sshd@9-147.28.180.91:22-139.178.89.65:37608.service: Deactivated successfully. Dec 13 16:14:07.967608 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 16:14:07.968036 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit. Dec 13 16:14:07.968467 systemd-logind[1555]: Removed session 11. Dec 13 16:14:12.975401 systemd[1]: Started sshd@10-147.28.180.91:22-139.178.89.65:60372.service. Dec 13 16:14:13.007836 sshd[4284]: Accepted publickey for core from 139.178.89.65 port 60372 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:13.008966 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:13.012716 systemd-logind[1555]: New session 12 of user core. Dec 13 16:14:13.013520 systemd[1]: Started session-12.scope. Dec 13 16:14:13.105634 sshd[4284]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:13.107102 systemd[1]: sshd@10-147.28.180.91:22-139.178.89.65:60372.service: Deactivated successfully. Dec 13 16:14:13.107540 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 16:14:13.107974 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit. Dec 13 16:14:13.108416 systemd-logind[1555]: Removed session 12. Dec 13 16:14:18.115444 systemd[1]: Started sshd@11-147.28.180.91:22-139.178.89.65:53716.service. Dec 13 16:14:18.148265 sshd[4312]: Accepted publickey for core from 139.178.89.65 port 53716 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:18.149097 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:18.152093 systemd-logind[1555]: New session 13 of user core. Dec 13 16:14:18.152685 systemd[1]: Started session-13.scope. Dec 13 16:14:18.238686 sshd[4312]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:18.240251 systemd[1]: sshd@11-147.28.180.91:22-139.178.89.65:53716.service: Deactivated successfully. Dec 13 16:14:18.240703 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 16:14:18.241040 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit. Dec 13 16:14:18.241462 systemd-logind[1555]: Removed session 13. Dec 13 16:14:23.247846 systemd[1]: Started sshd@12-147.28.180.91:22-139.178.89.65:53730.service. Dec 13 16:14:23.280273 sshd[4339]: Accepted publickey for core from 139.178.89.65 port 53730 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:23.281398 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:23.284701 systemd-logind[1555]: New session 14 of user core. Dec 13 16:14:23.285604 systemd[1]: Started session-14.scope. Dec 13 16:14:23.376149 sshd[4339]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:23.377965 systemd[1]: sshd@12-147.28.180.91:22-139.178.89.65:53730.service: Deactivated successfully. 
Dec 13 16:14:23.378303 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 16:14:23.378715 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit. Dec 13 16:14:23.379271 systemd[1]: Started sshd@13-147.28.180.91:22-139.178.89.65:53746.service. Dec 13 16:14:23.379774 systemd-logind[1555]: Removed session 14. Dec 13 16:14:23.412266 sshd[4364]: Accepted publickey for core from 139.178.89.65 port 53746 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:23.415605 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:23.426404 systemd-logind[1555]: New session 15 of user core. Dec 13 16:14:23.429228 systemd[1]: Started session-15.scope. Dec 13 16:14:23.548252 sshd[4364]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:23.550149 systemd[1]: sshd@13-147.28.180.91:22-139.178.89.65:53746.service: Deactivated successfully. Dec 13 16:14:23.550551 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 16:14:23.550915 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit. Dec 13 16:14:23.551560 systemd[1]: Started sshd@14-147.28.180.91:22-139.178.89.65:53758.service. Dec 13 16:14:23.552044 systemd-logind[1555]: Removed session 15. Dec 13 16:14:23.585712 sshd[4387]: Accepted publickey for core from 139.178.89.65 port 53758 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:23.589255 sshd[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:23.600284 systemd-logind[1555]: New session 16 of user core. Dec 13 16:14:23.603086 systemd[1]: Started session-16.scope. Dec 13 16:14:23.748633 sshd[4387]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:23.750168 systemd[1]: sshd@14-147.28.180.91:22-139.178.89.65:53758.service: Deactivated successfully. Dec 13 16:14:23.750601 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 16:14:23.751040 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit. Dec 13 16:14:23.751471 systemd-logind[1555]: Removed session 16. Dec 13 16:14:28.758384 systemd[1]: Started sshd@15-147.28.180.91:22-139.178.89.65:54138.service. Dec 13 16:14:28.791619 sshd[4412]: Accepted publickey for core from 139.178.89.65 port 54138 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:28.794998 sshd[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:28.806079 systemd-logind[1555]: New session 17 of user core. Dec 13 16:14:28.808726 systemd[1]: Started session-17.scope. Dec 13 16:14:28.909498 sshd[4412]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:28.910820 systemd[1]: sshd@15-147.28.180.91:22-139.178.89.65:54138.service: Deactivated successfully. Dec 13 16:14:28.911234 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 16:14:28.911537 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit. Dec 13 16:14:28.912076 systemd-logind[1555]: Removed session 17. Dec 13 16:14:33.918961 systemd[1]: Started sshd@16-147.28.180.91:22-139.178.89.65:54140.service. Dec 13 16:14:33.951386 sshd[4438]: Accepted publickey for core from 139.178.89.65 port 54140 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:33.952486 sshd[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:33.955929 systemd-logind[1555]: New session 18 of user core. 
Dec 13 16:14:33.956700 systemd[1]: Started session-18.scope. Dec 13 16:14:34.046336 sshd[4438]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:34.048184 systemd[1]: sshd@16-147.28.180.91:22-139.178.89.65:54140.service: Deactivated successfully. Dec 13 16:14:34.048542 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 16:14:34.048942 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit. Dec 13 16:14:34.049541 systemd[1]: Started sshd@17-147.28.180.91:22-139.178.89.65:54142.service. Dec 13 16:14:34.050057 systemd-logind[1555]: Removed session 18. Dec 13 16:14:34.083202 sshd[4462]: Accepted publickey for core from 139.178.89.65 port 54142 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:34.086522 sshd[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:34.097116 systemd-logind[1555]: New session 19 of user core. Dec 13 16:14:34.099659 systemd[1]: Started session-19.scope. Dec 13 16:14:34.214166 sshd[4462]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:34.215937 systemd[1]: sshd@17-147.28.180.91:22-139.178.89.65:54142.service: Deactivated successfully. Dec 13 16:14:34.216266 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 16:14:34.216667 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit. Dec 13 16:14:34.217288 systemd[1]: Started sshd@18-147.28.180.91:22-139.178.89.65:54148.service. Dec 13 16:14:34.217812 systemd-logind[1555]: Removed session 19. Dec 13 16:14:34.250171 sshd[4484]: Accepted publickey for core from 139.178.89.65 port 54148 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:34.251210 sshd[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:34.255069 systemd-logind[1555]: New session 20 of user core. Dec 13 16:14:34.255901 systemd[1]: Started session-20.scope. Dec 13 16:14:35.360792 sshd[4484]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:35.363418 systemd[1]: sshd@18-147.28.180.91:22-139.178.89.65:54148.service: Deactivated successfully. Dec 13 16:14:35.364059 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 16:14:35.364764 systemd-logind[1555]: Session 20 logged out. Waiting for processes to exit. Dec 13 16:14:35.365941 systemd[1]: Started sshd@19-147.28.180.91:22-139.178.89.65:54150.service. Dec 13 16:14:35.366719 systemd-logind[1555]: Removed session 20. Dec 13 16:14:35.405536 sshd[4517]: Accepted publickey for core from 139.178.89.65 port 54150 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:35.407203 sshd[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:35.412861 systemd-logind[1555]: New session 21 of user core. Dec 13 16:14:35.414164 systemd[1]: Started session-21.scope. Dec 13 16:14:35.604498 sshd[4517]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:35.606825 systemd[1]: sshd@19-147.28.180.91:22-139.178.89.65:54150.service: Deactivated successfully. Dec 13 16:14:35.607308 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 16:14:35.607775 systemd-logind[1555]: Session 21 logged out. Waiting for processes to exit. Dec 13 16:14:35.608574 systemd[1]: Started sshd@20-147.28.180.91:22-139.178.89.65:54158.service. Dec 13 16:14:35.609185 systemd-logind[1555]: Removed session 21. 
Dec 13 16:14:35.644122 sshd[4543]: Accepted publickey for core from 139.178.89.65 port 54158 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:35.645628 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:35.651600 systemd-logind[1555]: New session 22 of user core. Dec 13 16:14:35.654178 systemd[1]: Started session-22.scope. Dec 13 16:14:35.804427 sshd[4543]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:35.805849 systemd[1]: sshd@20-147.28.180.91:22-139.178.89.65:54158.service: Deactivated successfully. Dec 13 16:14:35.806316 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 16:14:35.806689 systemd-logind[1555]: Session 22 logged out. Waiting for processes to exit. Dec 13 16:14:35.807236 systemd-logind[1555]: Removed session 22. Dec 13 16:14:40.813446 systemd[1]: Started sshd@21-147.28.180.91:22-139.178.89.65:55386.service. Dec 13 16:14:40.846077 sshd[4573]: Accepted publickey for core from 139.178.89.65 port 55386 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:40.847114 sshd[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:40.850424 systemd-logind[1555]: New session 23 of user core. Dec 13 16:14:40.851267 systemd[1]: Started session-23.scope. Dec 13 16:14:40.937232 sshd[4573]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:40.938767 systemd[1]: sshd@21-147.28.180.91:22-139.178.89.65:55386.service: Deactivated successfully. Dec 13 16:14:40.939251 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 16:14:40.939631 systemd-logind[1555]: Session 23 logged out. Waiting for processes to exit. Dec 13 16:14:40.940056 systemd-logind[1555]: Removed session 23. Dec 13 16:14:45.950397 systemd[1]: Started sshd@22-147.28.180.91:22-139.178.89.65:55392.service. Dec 13 16:14:45.986942 sshd[4600]: Accepted publickey for core from 139.178.89.65 port 55392 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:45.987768 sshd[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:45.990867 systemd-logind[1555]: New session 24 of user core. Dec 13 16:14:45.991587 systemd[1]: Started session-24.scope. Dec 13 16:14:46.081805 sshd[4600]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:46.083174 systemd[1]: sshd@22-147.28.180.91:22-139.178.89.65:55392.service: Deactivated successfully. Dec 13 16:14:46.083627 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 16:14:46.084024 systemd-logind[1555]: Session 24 logged out. Waiting for processes to exit. Dec 13 16:14:46.084436 systemd-logind[1555]: Removed session 24. Dec 13 16:14:51.091589 systemd[1]: Started sshd@23-147.28.180.91:22-139.178.89.65:40742.service. Dec 13 16:14:51.124101 sshd[4623]: Accepted publickey for core from 139.178.89.65 port 40742 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:51.125128 sshd[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:51.128507 systemd-logind[1555]: New session 25 of user core. Dec 13 16:14:51.129317 systemd[1]: Started session-25.scope. Dec 13 16:14:51.215604 sshd[4623]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:51.217137 systemd[1]: sshd@23-147.28.180.91:22-139.178.89.65:40742.service: Deactivated successfully. Dec 13 16:14:51.217586 systemd[1]: session-25.scope: Deactivated successfully. 
Dec 13 16:14:51.218000 systemd-logind[1555]: Session 25 logged out. Waiting for processes to exit. Dec 13 16:14:51.218436 systemd-logind[1555]: Removed session 25. Dec 13 16:14:56.224705 systemd[1]: Started sshd@24-147.28.180.91:22-139.178.89.65:40748.service. Dec 13 16:14:56.257965 sshd[4647]: Accepted publickey for core from 139.178.89.65 port 40748 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:56.261362 sshd[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:56.272374 systemd-logind[1555]: New session 26 of user core. Dec 13 16:14:56.274973 systemd[1]: Started session-26.scope. Dec 13 16:14:56.376862 sshd[4647]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:56.378734 systemd[1]: sshd@24-147.28.180.91:22-139.178.89.65:40748.service: Deactivated successfully. Dec 13 16:14:56.379101 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 16:14:56.379413 systemd-logind[1555]: Session 26 logged out. Waiting for processes to exit. Dec 13 16:14:56.380042 systemd[1]: Started sshd@25-147.28.180.91:22-139.178.89.65:40754.service. Dec 13 16:14:56.380578 systemd-logind[1555]: Removed session 26. Dec 13 16:14:56.413807 sshd[4671]: Accepted publickey for core from 139.178.89.65 port 40754 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:56.417148 sshd[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:56.427736 systemd-logind[1555]: New session 27 of user core. Dec 13 16:14:56.430317 systemd[1]: Started session-27.scope. Dec 13 16:14:57.798371 env[1563]: time="2024-12-13T16:14:57.798345166Z" level=info msg="StopContainer for \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\" with timeout 30 (s)" Dec 13 16:14:57.798620 env[1563]: time="2024-12-13T16:14:57.798566638Z" level=info msg="Stop container \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\" with signal terminated" Dec 13 16:14:57.802934 systemd[1]: cri-containerd-2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9.scope: Deactivated successfully. Dec 13 16:14:57.807721 env[1563]: time="2024-12-13T16:14:57.807669799Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 16:14:57.810549 env[1563]: time="2024-12-13T16:14:57.810531502Z" level=info msg="StopContainer for \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\" with timeout 2 (s)" Dec 13 16:14:57.810669 env[1563]: time="2024-12-13T16:14:57.810655356Z" level=info msg="Stop container \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\" with signal terminated" Dec 13 16:14:57.811584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9-rootfs.mount: Deactivated successfully. 
Dec 13 16:14:57.813837 systemd-networkd[1311]: lxc_health: Link DOWN Dec 13 16:14:57.813840 systemd-networkd[1311]: lxc_health: Lost carrier Dec 13 16:14:57.840992 env[1563]: time="2024-12-13T16:14:57.840936377Z" level=info msg="shim disconnected" id=2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9 Dec 13 16:14:57.840992 env[1563]: time="2024-12-13T16:14:57.840966388Z" level=warning msg="cleaning up after shim disconnected" id=2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9 namespace=k8s.io Dec 13 16:14:57.840992 env[1563]: time="2024-12-13T16:14:57.840973801Z" level=info msg="cleaning up dead shim" Dec 13 16:14:57.845223 env[1563]: time="2024-12-13T16:14:57.845180181Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4739 runtime=io.containerd.runc.v2\n" Dec 13 16:14:57.846182 env[1563]: time="2024-12-13T16:14:57.846134118Z" level=info msg="StopContainer for \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\" returns successfully" Dec 13 16:14:57.846591 env[1563]: time="2024-12-13T16:14:57.846540510Z" level=info msg="StopPodSandbox for \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\"" Dec 13 16:14:57.846591 env[1563]: time="2024-12-13T16:14:57.846581994Z" level=info msg="Container to stop \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:14:57.848536 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4-shm.mount: Deactivated successfully. Dec 13 16:14:57.851956 systemd[1]: cri-containerd-53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4.scope: Deactivated successfully. Dec 13 16:14:57.864588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4-rootfs.mount: Deactivated successfully. Dec 13 16:14:57.903294 systemd[1]: cri-containerd-c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573.scope: Deactivated successfully. Dec 13 16:14:57.903889 systemd[1]: cri-containerd-c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573.scope: Consumed 6.473s CPU time. 
Dec 13 16:14:57.905524 env[1563]: time="2024-12-13T16:14:57.905372351Z" level=info msg="shim disconnected" id=53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4 Dec 13 16:14:57.905874 env[1563]: time="2024-12-13T16:14:57.905549920Z" level=warning msg="cleaning up after shim disconnected" id=53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4 namespace=k8s.io Dec 13 16:14:57.905874 env[1563]: time="2024-12-13T16:14:57.905608780Z" level=info msg="cleaning up dead shim" Dec 13 16:14:57.922157 env[1563]: time="2024-12-13T16:14:57.922068124Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4772 runtime=io.containerd.runc.v2\n" Dec 13 16:14:57.922947 env[1563]: time="2024-12-13T16:14:57.922876340Z" level=info msg="TearDown network for sandbox \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\" successfully" Dec 13 16:14:57.923202 env[1563]: time="2024-12-13T16:14:57.922936780Z" level=info msg="StopPodSandbox for \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\" returns successfully" Dec 13 16:14:57.943689 env[1563]: time="2024-12-13T16:14:57.943585608Z" level=info msg="shim disconnected" id=c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573 Dec 13 16:14:57.944056 env[1563]: time="2024-12-13T16:14:57.943687272Z" level=warning msg="cleaning up after shim disconnected" id=c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573 namespace=k8s.io Dec 13 16:14:57.944056 env[1563]: time="2024-12-13T16:14:57.943716686Z" level=info msg="cleaning up dead shim" Dec 13 16:14:57.956569 env[1563]: time="2024-12-13T16:14:57.956501179Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4796 runtime=io.containerd.runc.v2\n" Dec 13 16:14:57.958442 env[1563]: time="2024-12-13T16:14:57.958356072Z" level=info msg="StopContainer for \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\" returns successfully" Dec 13 16:14:57.959131 env[1563]: time="2024-12-13T16:14:57.959053239Z" level=info msg="StopPodSandbox for \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\"" Dec 13 16:14:57.959272 env[1563]: time="2024-12-13T16:14:57.959146989Z" level=info msg="Container to stop \"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:14:57.959272 env[1563]: time="2024-12-13T16:14:57.959177632Z" level=info msg="Container to stop \"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:14:57.959272 env[1563]: time="2024-12-13T16:14:57.959198511Z" level=info msg="Container to stop \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:14:57.959272 env[1563]: time="2024-12-13T16:14:57.959219018Z" level=info msg="Container to stop \"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:14:57.959272 env[1563]: time="2024-12-13T16:14:57.959237585Z" level=info msg="Container to stop \"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 16:14:57.968394 systemd[1]: 
cri-containerd-0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88.scope: Deactivated successfully. Dec 13 16:14:57.983915 env[1563]: time="2024-12-13T16:14:57.983867684Z" level=info msg="shim disconnected" id=0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88 Dec 13 16:14:57.984046 env[1563]: time="2024-12-13T16:14:57.983915771Z" level=warning msg="cleaning up after shim disconnected" id=0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88 namespace=k8s.io Dec 13 16:14:57.984046 env[1563]: time="2024-12-13T16:14:57.983925606Z" level=info msg="cleaning up dead shim" Dec 13 16:14:57.988873 env[1563]: time="2024-12-13T16:14:57.988847403Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:14:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4827 runtime=io.containerd.runc.v2\n" Dec 13 16:14:57.989103 env[1563]: time="2024-12-13T16:14:57.989063444Z" level=info msg="TearDown network for sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" successfully" Dec 13 16:14:57.989103 env[1563]: time="2024-12-13T16:14:57.989081551Z" level=info msg="StopPodSandbox for \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" returns successfully" Dec 13 16:14:58.014614 kubelet[2596]: I1213 16:14:58.014591 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b756a862-45cd-4910-993c-daecb4fbcf09-cilium-config-path\") pod \"b756a862-45cd-4910-993c-daecb4fbcf09\" (UID: \"b756a862-45cd-4910-993c-daecb4fbcf09\") " Dec 13 16:14:58.014902 kubelet[2596]: I1213 16:14:58.014625 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w446n\" (UniqueName: \"kubernetes.io/projected/b756a862-45cd-4910-993c-daecb4fbcf09-kube-api-access-w446n\") pod \"b756a862-45cd-4910-993c-daecb4fbcf09\" (UID: \"b756a862-45cd-4910-993c-daecb4fbcf09\") " Dec 13 16:14:58.016189 kubelet[2596]: I1213 16:14:58.016171 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b756a862-45cd-4910-993c-daecb4fbcf09-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b756a862-45cd-4910-993c-daecb4fbcf09" (UID: "b756a862-45cd-4910-993c-daecb4fbcf09"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 16:14:58.016816 kubelet[2596]: I1213 16:14:58.016770 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b756a862-45cd-4910-993c-daecb4fbcf09-kube-api-access-w446n" (OuterVolumeSpecName: "kube-api-access-w446n") pod "b756a862-45cd-4910-993c-daecb4fbcf09" (UID: "b756a862-45cd-4910-993c-daecb4fbcf09"). InnerVolumeSpecName "kube-api-access-w446n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 16:14:58.115723 kubelet[2596]: I1213 16:14:58.115529 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-cgroup\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.115723 kubelet[2596]: I1213 16:14:58.115650 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvjrr\" (UniqueName: \"kubernetes.io/projected/fe53fc47-65c9-4783-ba4b-23933ba63b0f-kube-api-access-gvjrr\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.115723 kubelet[2596]: I1213 16:14:58.115713 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-host-proc-sys-kernel\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.116247 kubelet[2596]: I1213 16:14:58.115727 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:58.116247 kubelet[2596]: I1213 16:14:58.115770 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-run\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.116247 kubelet[2596]: I1213 16:14:58.115819 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:58.116247 kubelet[2596]: I1213 16:14:58.115924 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-config-path\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.116247 kubelet[2596]: I1213 16:14:58.115906 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:58.117073 kubelet[2596]: I1213 16:14:58.115995 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-hostproc\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.117073 kubelet[2596]: I1213 16:14:58.116056 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-bpf-maps\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.117073 kubelet[2596]: I1213 16:14:58.116051 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-hostproc" (OuterVolumeSpecName: "hostproc") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:58.117073 kubelet[2596]: I1213 16:14:58.116112 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-lib-modules\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.117073 kubelet[2596]: I1213 16:14:58.116168 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-etc-cni-netd\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.117073 kubelet[2596]: I1213 16:14:58.116159 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:58.118068 kubelet[2596]: I1213 16:14:58.116176 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:58.118068 kubelet[2596]: I1213 16:14:58.116228 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-host-proc-sys-net\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.118068 kubelet[2596]: I1213 16:14:58.116255 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:58.118068 kubelet[2596]: I1213 16:14:58.116289 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-xtables-lock\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.118068 kubelet[2596]: I1213 16:14:58.116345 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cni-path\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.118916 kubelet[2596]: I1213 16:14:58.116337 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:58.118916 kubelet[2596]: I1213 16:14:58.116413 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe53fc47-65c9-4783-ba4b-23933ba63b0f-clustermesh-secrets\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.118916 kubelet[2596]: I1213 16:14:58.116409 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:58.118916 kubelet[2596]: I1213 16:14:58.116443 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cni-path" (OuterVolumeSpecName: "cni-path") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:14:58.118916 kubelet[2596]: I1213 16:14:58.116515 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe53fc47-65c9-4783-ba4b-23933ba63b0f-hubble-tls\") pod \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\" (UID: \"fe53fc47-65c9-4783-ba4b-23933ba63b0f\") " Dec 13 16:14:58.119588 kubelet[2596]: I1213 16:14:58.116694 2596 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cni-path\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.119588 kubelet[2596]: I1213 16:14:58.116752 2596 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-cgroup\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.119588 kubelet[2596]: I1213 16:14:58.116791 2596 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b756a862-45cd-4910-993c-daecb4fbcf09-cilium-config-path\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.119588 kubelet[2596]: I1213 16:14:58.116830 2596 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.119588 kubelet[2596]: I1213 16:14:58.116864 2596 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-run\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.119588 kubelet[2596]: I1213 16:14:58.116895 2596 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-hostproc\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.119588 kubelet[2596]: I1213 16:14:58.116934 2596 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w446n\" (UniqueName: \"kubernetes.io/projected/b756a862-45cd-4910-993c-daecb4fbcf09-kube-api-access-w446n\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.119588 kubelet[2596]: I1213 16:14:58.116987 2596 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-bpf-maps\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.121084 kubelet[2596]: I1213 16:14:58.117046 2596 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-lib-modules\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.121084 kubelet[2596]: I1213 16:14:58.117082 2596 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-etc-cni-netd\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.121084 kubelet[2596]: I1213 16:14:58.117116 2596 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-host-proc-sys-net\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.121084 
kubelet[2596]: I1213 16:14:58.117148 2596 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe53fc47-65c9-4783-ba4b-23933ba63b0f-xtables-lock\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.123027 kubelet[2596]: I1213 16:14:58.122942 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe53fc47-65c9-4783-ba4b-23933ba63b0f-kube-api-access-gvjrr" (OuterVolumeSpecName: "kube-api-access-gvjrr") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "kube-api-access-gvjrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 16:14:58.123338 kubelet[2596]: I1213 16:14:58.123054 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe53fc47-65c9-4783-ba4b-23933ba63b0f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 16:14:58.123338 kubelet[2596]: I1213 16:14:58.123203 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe53fc47-65c9-4783-ba4b-23933ba63b0f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 16:14:58.123651 kubelet[2596]: I1213 16:14:58.123327 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fe53fc47-65c9-4783-ba4b-23933ba63b0f" (UID: "fe53fc47-65c9-4783-ba4b-23933ba63b0f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 16:14:58.217898 kubelet[2596]: I1213 16:14:58.217788 2596 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe53fc47-65c9-4783-ba4b-23933ba63b0f-hubble-tls\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.217898 kubelet[2596]: I1213 16:14:58.217870 2596 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe53fc47-65c9-4783-ba4b-23933ba63b0f-clustermesh-secrets\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.217898 kubelet[2596]: I1213 16:14:58.217910 2596 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe53fc47-65c9-4783-ba4b-23933ba63b0f-cilium-config-path\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.218433 kubelet[2596]: I1213 16:14:58.217947 2596 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gvjrr\" (UniqueName: \"kubernetes.io/projected/fe53fc47-65c9-4783-ba4b-23933ba63b0f-kube-api-access-gvjrr\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:14:58.224592 kubelet[2596]: I1213 16:14:58.224535 2596 scope.go:117] "RemoveContainer" containerID="2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9" Dec 13 16:14:58.227387 env[1563]: time="2024-12-13T16:14:58.227298389Z" level=info msg="RemoveContainer for \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\"" Dec 13 16:14:58.233638 env[1563]: time="2024-12-13T16:14:58.233565380Z" level=info msg="RemoveContainer for \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\" returns successfully" Dec 13 16:14:58.234129 kubelet[2596]: I1213 16:14:58.234075 2596 scope.go:117] "RemoveContainer" containerID="2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9" Dec 13 16:14:58.234738 env[1563]: time="2024-12-13T16:14:58.234561542Z" level=error msg="ContainerStatus for \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\": not found" Dec 13 16:14:58.235324 kubelet[2596]: E1213 16:14:58.235263 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\": not found" containerID="2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9" Dec 13 16:14:58.235627 systemd[1]: Removed slice kubepods-besteffort-podb756a862_45cd_4910_993c_daecb4fbcf09.slice. 
Dec 13 16:14:58.236012 kubelet[2596]: I1213 16:14:58.235616 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9"} err="failed to get container status \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e850839432f46c0ce0c2219af7eec63bc5373821d7179e16eabdb4b8b451fe9\": not found" Dec 13 16:14:58.236012 kubelet[2596]: I1213 16:14:58.235686 2596 scope.go:117] "RemoveContainer" containerID="c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573" Dec 13 16:14:58.238601 env[1563]: time="2024-12-13T16:14:58.238479330Z" level=info msg="RemoveContainer for \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\"" Dec 13 16:14:58.241479 systemd[1]: Removed slice kubepods-burstable-podfe53fc47_65c9_4783_ba4b_23933ba63b0f.slice. Dec 13 16:14:58.241810 systemd[1]: kubepods-burstable-podfe53fc47_65c9_4783_ba4b_23933ba63b0f.slice: Consumed 6.543s CPU time. Dec 13 16:14:58.242949 env[1563]: time="2024-12-13T16:14:58.242839760Z" level=info msg="RemoveContainer for \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\" returns successfully" Dec 13 16:14:58.243201 kubelet[2596]: I1213 16:14:58.243170 2596 scope.go:117] "RemoveContainer" containerID="5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f" Dec 13 16:14:58.245683 env[1563]: time="2024-12-13T16:14:58.245610035Z" level=info msg="RemoveContainer for \"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f\"" Dec 13 16:14:58.249980 env[1563]: time="2024-12-13T16:14:58.249876578Z" level=info msg="RemoveContainer for \"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f\" returns successfully" Dec 13 16:14:58.250332 kubelet[2596]: I1213 16:14:58.250277 2596 scope.go:117] "RemoveContainer" containerID="10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d" Dec 13 16:14:58.253004 env[1563]: time="2024-12-13T16:14:58.252933623Z" level=info msg="RemoveContainer for \"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d\"" Dec 13 16:14:58.257621 env[1563]: time="2024-12-13T16:14:58.257517557Z" level=info msg="RemoveContainer for \"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d\" returns successfully" Dec 13 16:14:58.257977 kubelet[2596]: I1213 16:14:58.257897 2596 scope.go:117] "RemoveContainer" containerID="4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88" Dec 13 16:14:58.260585 env[1563]: time="2024-12-13T16:14:58.260512324Z" level=info msg="RemoveContainer for \"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88\"" Dec 13 16:14:58.264368 env[1563]: time="2024-12-13T16:14:58.264295032Z" level=info msg="RemoveContainer for \"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88\" returns successfully" Dec 13 16:14:58.264714 kubelet[2596]: I1213 16:14:58.264658 2596 scope.go:117] "RemoveContainer" containerID="cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3" Dec 13 16:14:58.267180 env[1563]: time="2024-12-13T16:14:58.267062745Z" level=info msg="RemoveContainer for \"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3\"" Dec 13 16:14:58.271858 env[1563]: time="2024-12-13T16:14:58.271779839Z" level=info msg="RemoveContainer for \"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3\" returns successfully" Dec 13 16:14:58.272192 kubelet[2596]: I1213 
16:14:58.272148 2596 scope.go:117] "RemoveContainer" containerID="c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573" Dec 13 16:14:58.272893 env[1563]: time="2024-12-13T16:14:58.272642292Z" level=error msg="ContainerStatus for \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\": not found" Dec 13 16:14:58.273256 kubelet[2596]: E1213 16:14:58.273181 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\": not found" containerID="c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573" Dec 13 16:14:58.273429 kubelet[2596]: I1213 16:14:58.273296 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573"} err="failed to get container status \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\": rpc error: code = NotFound desc = an error occurred when try to find container \"c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573\": not found" Dec 13 16:14:58.273429 kubelet[2596]: I1213 16:14:58.273337 2596 scope.go:117] "RemoveContainer" containerID="5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f" Dec 13 16:14:58.273993 env[1563]: time="2024-12-13T16:14:58.273816387Z" level=error msg="ContainerStatus for \"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f\": not found" Dec 13 16:14:58.274277 kubelet[2596]: E1213 16:14:58.274174 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f\": not found" containerID="5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f" Dec 13 16:14:58.274277 kubelet[2596]: I1213 16:14:58.274250 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f"} err="failed to get container status \"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f\": rpc error: code = NotFound desc = an error occurred when try to find container \"5422d0c1499633d1b70552b777becc86c06892d5d1a53ecf218493e65a54632f\": not found" Dec 13 16:14:58.274277 kubelet[2596]: I1213 16:14:58.274282 2596 scope.go:117] "RemoveContainer" containerID="10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d" Dec 13 16:14:58.275002 env[1563]: time="2024-12-13T16:14:58.274794926Z" level=error msg="ContainerStatus for \"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d\": not found" Dec 13 16:14:58.275279 kubelet[2596]: E1213 16:14:58.275236 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d\": not found" containerID="10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d" Dec 13 16:14:58.275514 kubelet[2596]: I1213 16:14:58.275322 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d"} err="failed to get container status \"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d\": rpc error: code = NotFound desc = an error occurred when try to find container \"10aa8c22d51ed9bc2469b82726b2bf56037780de7754301daf8cb72db355d94d\": not found" Dec 13 16:14:58.275514 kubelet[2596]: I1213 16:14:58.275362 2596 scope.go:117] "RemoveContainer" containerID="4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88" Dec 13 16:14:58.276030 env[1563]: time="2024-12-13T16:14:58.275875322Z" level=error msg="ContainerStatus for \"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88\": not found" Dec 13 16:14:58.276300 kubelet[2596]: E1213 16:14:58.276256 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88\": not found" containerID="4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88" Dec 13 16:14:58.276422 kubelet[2596]: I1213 16:14:58.276362 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88"} err="failed to get container status \"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e5ea44ea670aad8805fee4f76f124687e2e22509cafd8404fa29ede1c7d1e88\": not found" Dec 13 16:14:58.276422 kubelet[2596]: I1213 16:14:58.276405 2596 scope.go:117] "RemoveContainer" containerID="cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3" Dec 13 16:14:58.277070 env[1563]: time="2024-12-13T16:14:58.276896139Z" level=error msg="ContainerStatus for \"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3\": not found" Dec 13 16:14:58.277287 kubelet[2596]: E1213 16:14:58.277225 2596 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3\": not found" containerID="cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3" Dec 13 16:14:58.277435 kubelet[2596]: I1213 16:14:58.277292 2596 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3"} err="failed to get container status \"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"cef2810c14e65337de6553528f7de2fc90d98efbd222ebc34c22cf865ef392e3\": not found" Dec 13 16:14:58.805069 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-c15fdef7da9c8fdab5331e1ecc64dc0b5e7aaa924345de2c912d124049349573-rootfs.mount: Deactivated successfully. Dec 13 16:14:58.805123 systemd[1]: var-lib-kubelet-pods-b756a862\x2d45cd\x2d4910\x2d993c\x2ddaecb4fbcf09-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw446n.mount: Deactivated successfully. Dec 13 16:14:58.805161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88-rootfs.mount: Deactivated successfully. Dec 13 16:14:58.805193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88-shm.mount: Deactivated successfully. Dec 13 16:14:58.805226 systemd[1]: var-lib-kubelet-pods-fe53fc47\x2d65c9\x2d4783\x2dba4b\x2d23933ba63b0f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgvjrr.mount: Deactivated successfully. Dec 13 16:14:58.805257 systemd[1]: var-lib-kubelet-pods-fe53fc47\x2d65c9\x2d4783\x2dba4b\x2d23933ba63b0f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 16:14:58.805286 systemd[1]: var-lib-kubelet-pods-fe53fc47\x2d65c9\x2d4783\x2dba4b\x2d23933ba63b0f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 16:14:59.005801 kubelet[2596]: I1213 16:14:59.005701 2596 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b756a862-45cd-4910-993c-daecb4fbcf09" path="/var/lib/kubelet/pods/b756a862-45cd-4910-993c-daecb4fbcf09/volumes" Dec 13 16:14:59.006994 kubelet[2596]: I1213 16:14:59.006912 2596 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fe53fc47-65c9-4783-ba4b-23933ba63b0f" path="/var/lib/kubelet/pods/fe53fc47-65c9-4783-ba4b-23933ba63b0f/volumes" Dec 13 16:14:59.766354 sshd[4671]: pam_unix(sshd:session): session closed for user core Dec 13 16:14:59.773669 systemd[1]: sshd@25-147.28.180.91:22-139.178.89.65:40754.service: Deactivated successfully. Dec 13 16:14:59.774154 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 16:14:59.774449 systemd-logind[1555]: Session 27 logged out. Waiting for processes to exit. Dec 13 16:14:59.775080 systemd[1]: Started sshd@26-147.28.180.91:22-139.178.89.65:58694.service. Dec 13 16:14:59.775453 systemd-logind[1555]: Removed session 27. Dec 13 16:14:59.808201 sshd[4845]: Accepted publickey for core from 139.178.89.65 port 58694 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:14:59.811585 sshd[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:14:59.822342 systemd-logind[1555]: New session 28 of user core. Dec 13 16:14:59.824784 systemd[1]: Started session-28.scope. Dec 13 16:15:00.368610 sshd[4845]: pam_unix(sshd:session): session closed for user core Dec 13 16:15:00.377630 systemd[1]: sshd@26-147.28.180.91:22-139.178.89.65:58694.service: Deactivated successfully. Dec 13 16:15:00.380298 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 16:15:00.382345 systemd-logind[1555]: Session 28 logged out. Waiting for processes to exit. Dec 13 16:15:00.386766 systemd[1]: Started sshd@27-147.28.180.91:22-139.178.89.65:58702.service. 
Dec 13 16:15:00.388216 kubelet[2596]: I1213 16:15:00.388150 2596 topology_manager.go:215] "Topology Admit Handler" podUID="a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" podNamespace="kube-system" podName="cilium-bhmqk" Dec 13 16:15:00.389243 kubelet[2596]: E1213 16:15:00.388294 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe53fc47-65c9-4783-ba4b-23933ba63b0f" containerName="mount-bpf-fs" Dec 13 16:15:00.389243 kubelet[2596]: E1213 16:15:00.388345 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe53fc47-65c9-4783-ba4b-23933ba63b0f" containerName="clean-cilium-state" Dec 13 16:15:00.389243 kubelet[2596]: E1213 16:15:00.388385 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe53fc47-65c9-4783-ba4b-23933ba63b0f" containerName="cilium-agent" Dec 13 16:15:00.389243 kubelet[2596]: E1213 16:15:00.388423 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b756a862-45cd-4910-993c-daecb4fbcf09" containerName="cilium-operator" Dec 13 16:15:00.389243 kubelet[2596]: E1213 16:15:00.388497 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe53fc47-65c9-4783-ba4b-23933ba63b0f" containerName="mount-cgroup" Dec 13 16:15:00.389243 kubelet[2596]: E1213 16:15:00.388546 2596 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe53fc47-65c9-4783-ba4b-23933ba63b0f" containerName="apply-sysctl-overwrites" Dec 13 16:15:00.389243 kubelet[2596]: I1213 16:15:00.388647 2596 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe53fc47-65c9-4783-ba4b-23933ba63b0f" containerName="cilium-agent" Dec 13 16:15:00.389243 kubelet[2596]: I1213 16:15:00.388687 2596 memory_manager.go:354] "RemoveStaleState removing state" podUID="b756a862-45cd-4910-993c-daecb4fbcf09" containerName="cilium-operator" Dec 13 16:15:00.389857 systemd-logind[1555]: Removed session 28. Dec 13 16:15:00.399340 systemd[1]: Created slice kubepods-burstable-poda9f4c2df_e85f_45ed_bc0f_d2cfb1c6b969.slice. Dec 13 16:15:00.437381 sshd[4869]: Accepted publickey for core from 139.178.89.65 port 58702 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:15:00.440900 sshd[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:15:00.451667 systemd-logind[1555]: New session 29 of user core. Dec 13 16:15:00.454278 systemd[1]: Started session-29.scope. 
Dec 13 16:15:00.533666 kubelet[2596]: I1213 16:15:00.533565 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cni-path\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.533666 kubelet[2596]: I1213 16:15:00.533674 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-config-path\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.534032 kubelet[2596]: I1213 16:15:00.533785 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-etc-cni-netd\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.534032 kubelet[2596]: I1213 16:15:00.534012 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-host-proc-sys-net\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.534268 kubelet[2596]: I1213 16:15:00.534112 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-bpf-maps\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.534378 kubelet[2596]: I1213 16:15:00.534271 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-hostproc\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.534525 kubelet[2596]: I1213 16:15:00.534443 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5s67\" (UniqueName: \"kubernetes.io/projected/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-kube-api-access-f5s67\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.534644 kubelet[2596]: I1213 16:15:00.534608 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-host-proc-sys-kernel\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.534749 kubelet[2596]: I1213 16:15:00.534693 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-cgroup\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.534879 kubelet[2596]: I1213 16:15:00.534853 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-run\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.534997 kubelet[2596]: I1213 16:15:00.534953 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-xtables-lock\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.535105 kubelet[2596]: I1213 16:15:00.535067 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-ipsec-secrets\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.535250 kubelet[2596]: I1213 16:15:00.535163 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-hubble-tls\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.535381 kubelet[2596]: I1213 16:15:00.535253 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-lib-modules\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.535381 kubelet[2596]: I1213 16:15:00.535331 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-clustermesh-secrets\") pod \"cilium-bhmqk\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " pod="kube-system/cilium-bhmqk" Dec 13 16:15:00.604766 sshd[4869]: pam_unix(sshd:session): session closed for user core Dec 13 16:15:00.606518 systemd[1]: sshd@27-147.28.180.91:22-139.178.89.65:58702.service: Deactivated successfully. Dec 13 16:15:00.606894 systemd[1]: session-29.scope: Deactivated successfully. Dec 13 16:15:00.607281 systemd-logind[1555]: Session 29 logged out. Waiting for processes to exit. Dec 13 16:15:00.608025 systemd[1]: Started sshd@28-147.28.180.91:22-139.178.89.65:58716.service. Dec 13 16:15:00.608430 systemd-logind[1555]: Removed session 29. Dec 13 16:15:00.612794 kubelet[2596]: E1213 16:15:00.612778 2596 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-f5s67 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-bhmqk" podUID="a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" Dec 13 16:15:00.641549 sshd[4895]: Accepted publickey for core from 139.178.89.65 port 58716 ssh2: RSA SHA256:izw9nsjJfNoYQ+ckx4ezJW8umjt8Ms6p+cEtPdzXVWQ Dec 13 16:15:00.645420 sshd[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 16:15:00.674883 systemd-logind[1555]: New session 30 of user core. Dec 13 16:15:00.676226 systemd[1]: Started session-30.scope. 
Dec 13 16:15:01.008296 env[1563]: time="2024-12-13T16:15:01.008273778Z" level=info msg="StopPodSandbox for \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\"" Dec 13 16:15:01.008509 env[1563]: time="2024-12-13T16:15:01.008326625Z" level=info msg="TearDown network for sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" successfully" Dec 13 16:15:01.008509 env[1563]: time="2024-12-13T16:15:01.008349428Z" level=info msg="StopPodSandbox for \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" returns successfully" Dec 13 16:15:01.008560 env[1563]: time="2024-12-13T16:15:01.008527979Z" level=info msg="RemovePodSandbox for \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\"" Dec 13 16:15:01.008581 env[1563]: time="2024-12-13T16:15:01.008549595Z" level=info msg="Forcibly stopping sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\"" Dec 13 16:15:01.008604 env[1563]: time="2024-12-13T16:15:01.008589539Z" level=info msg="TearDown network for sandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" successfully" Dec 13 16:15:01.009703 env[1563]: time="2024-12-13T16:15:01.009691024Z" level=info msg="RemovePodSandbox \"0d481a07aec50c12dc106e4e50948f87b8af59108127fe1005c9d08eddc8ac88\" returns successfully" Dec 13 16:15:01.009888 env[1563]: time="2024-12-13T16:15:01.009875375Z" level=info msg="StopPodSandbox for \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\"" Dec 13 16:15:01.009932 env[1563]: time="2024-12-13T16:15:01.009913094Z" level=info msg="TearDown network for sandbox \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\" successfully" Dec 13 16:15:01.009960 env[1563]: time="2024-12-13T16:15:01.009930996Z" level=info msg="StopPodSandbox for \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\" returns successfully" Dec 13 16:15:01.010101 env[1563]: time="2024-12-13T16:15:01.010088095Z" level=info msg="RemovePodSandbox for \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\"" Dec 13 16:15:01.010128 env[1563]: time="2024-12-13T16:15:01.010104508Z" level=info msg="Forcibly stopping sandbox \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\"" Dec 13 16:15:01.010148 env[1563]: time="2024-12-13T16:15:01.010139931Z" level=info msg="TearDown network for sandbox \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\" successfully" Dec 13 16:15:01.011225 env[1563]: time="2024-12-13T16:15:01.011211812Z" level=info msg="RemovePodSandbox \"53b556d65a51bbf55258ed7466ecc4e62b74b32db957bad49c0e4178f78176e4\" returns successfully" Dec 13 16:15:01.137333 kubelet[2596]: E1213 16:15:01.137242 2596 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 16:15:01.341541 kubelet[2596]: I1213 16:15:01.341336 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-config-path\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.341541 kubelet[2596]: I1213 16:15:01.341435 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-xtables-lock\") pod 
\"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.341541 kubelet[2596]: I1213 16:15:01.341548 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-ipsec-secrets\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.342254 kubelet[2596]: I1213 16:15:01.341616 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-clustermesh-secrets\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.342254 kubelet[2596]: I1213 16:15:01.341622 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:15:01.342254 kubelet[2596]: I1213 16:15:01.341673 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-host-proc-sys-net\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.342254 kubelet[2596]: I1213 16:15:01.341739 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-hostproc\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.342254 kubelet[2596]: I1213 16:15:01.341754 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:15:01.343155 kubelet[2596]: I1213 16:15:01.341798 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-host-proc-sys-kernel\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.343155 kubelet[2596]: I1213 16:15:01.341822 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-hostproc" (OuterVolumeSpecName: "hostproc") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:15:01.343155 kubelet[2596]: I1213 16:15:01.341854 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-cgroup\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.343155 kubelet[2596]: I1213 16:15:01.341874 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:15:01.343155 kubelet[2596]: I1213 16:15:01.341919 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cni-path\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.343949 kubelet[2596]: I1213 16:15:01.341933 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:15:01.343949 kubelet[2596]: I1213 16:15:01.342023 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5s67\" (UniqueName: \"kubernetes.io/projected/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-kube-api-access-f5s67\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.343949 kubelet[2596]: I1213 16:15:01.342117 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-bpf-maps\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.343949 kubelet[2596]: I1213 16:15:01.342093 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cni-path" (OuterVolumeSpecName: "cni-path") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:15:01.343949 kubelet[2596]: I1213 16:15:01.342207 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-lib-modules\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.344499 kubelet[2596]: I1213 16:15:01.342224 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:15:01.344499 kubelet[2596]: I1213 16:15:01.342314 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-hubble-tls\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.344499 kubelet[2596]: I1213 16:15:01.342397 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-etc-cni-netd\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.344499 kubelet[2596]: I1213 16:15:01.342362 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:15:01.344499 kubelet[2596]: I1213 16:15:01.342455 2596 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-run\") pod \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\" (UID: \"a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969\") " Dec 13 16:15:01.344499 kubelet[2596]: I1213 16:15:01.342587 2596 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-bpf-maps\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.345123 kubelet[2596]: I1213 16:15:01.342565 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:15:01.345123 kubelet[2596]: I1213 16:15:01.342633 2596 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-lib-modules\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.345123 kubelet[2596]: I1213 16:15:01.342583 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 16:15:01.345123 kubelet[2596]: I1213 16:15:01.342697 2596 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-xtables-lock\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.345123 kubelet[2596]: I1213 16:15:01.342749 2596 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-cgroup\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.345123 kubelet[2596]: I1213 16:15:01.342784 2596 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-host-proc-sys-net\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.345123 kubelet[2596]: I1213 16:15:01.342815 2596 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-hostproc\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.345827 kubelet[2596]: I1213 16:15:01.342850 2596 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.345827 kubelet[2596]: I1213 16:15:01.342880 2596 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cni-path\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.347083 kubelet[2596]: I1213 16:15:01.347043 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 16:15:01.347248 kubelet[2596]: I1213 16:15:01.347190 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 16:15:01.347248 kubelet[2596]: I1213 16:15:01.347205 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-kube-api-access-f5s67" (OuterVolumeSpecName: "kube-api-access-f5s67") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "kube-api-access-f5s67". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 16:15:01.347316 kubelet[2596]: I1213 16:15:01.347262 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 16:15:01.347591 kubelet[2596]: I1213 16:15:01.347550 2596 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" (UID: "a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 16:15:01.348164 systemd[1]: var-lib-kubelet-pods-a9f4c2df\x2de85f\x2d45ed\x2dbc0f\x2dd2cfb1c6b969-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df5s67.mount: Deactivated successfully. Dec 13 16:15:01.348216 systemd[1]: var-lib-kubelet-pods-a9f4c2df\x2de85f\x2d45ed\x2dbc0f\x2dd2cfb1c6b969-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 16:15:01.348252 systemd[1]: var-lib-kubelet-pods-a9f4c2df\x2de85f\x2d45ed\x2dbc0f\x2dd2cfb1c6b969-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 16:15:01.350421 systemd[1]: var-lib-kubelet-pods-a9f4c2df\x2de85f\x2d45ed\x2dbc0f\x2dd2cfb1c6b969-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 16:15:01.443125 kubelet[2596]: I1213 16:15:01.443063 2596 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f5s67\" (UniqueName: \"kubernetes.io/projected/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-kube-api-access-f5s67\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.443125 kubelet[2596]: I1213 16:15:01.443124 2596 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-hubble-tls\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.444368 kubelet[2596]: I1213 16:15:01.443161 2596 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-etc-cni-netd\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.444368 kubelet[2596]: I1213 16:15:01.443196 2596 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-run\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.444368 kubelet[2596]: I1213 16:15:01.443229 2596 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-config-path\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.444368 kubelet[2596]: I1213 16:15:01.443260 2596 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-cilium-ipsec-secrets\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:01.444368 kubelet[2596]: I1213 16:15:01.443294 2596 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969-clustermesh-secrets\") on node \"ci-3510.3.6-a-6bc1e3250f\" DevicePath \"\"" Dec 13 16:15:02.255100 systemd[1]: Removed slice kubepods-burstable-poda9f4c2df_e85f_45ed_bc0f_d2cfb1c6b969.slice. 
Dec 13 16:15:02.314204 kubelet[2596]: I1213 16:15:02.314138 2596 topology_manager.go:215] "Topology Admit Handler" podUID="90f2c999-1c29-4719-9df2-143166ef6081" podNamespace="kube-system" podName="cilium-2t255" Dec 13 16:15:02.328151 systemd[1]: Created slice kubepods-burstable-pod90f2c999_1c29_4719_9df2_143166ef6081.slice. Dec 13 16:15:02.450723 kubelet[2596]: I1213 16:15:02.450600 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90f2c999-1c29-4719-9df2-143166ef6081-hubble-tls\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.450723 kubelet[2596]: I1213 16:15:02.450717 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/90f2c999-1c29-4719-9df2-143166ef6081-cilium-ipsec-secrets\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.451708 kubelet[2596]: I1213 16:15:02.450942 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90f2c999-1c29-4719-9df2-143166ef6081-hostproc\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.451708 kubelet[2596]: I1213 16:15:02.451149 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpbqg\" (UniqueName: \"kubernetes.io/projected/90f2c999-1c29-4719-9df2-143166ef6081-kube-api-access-rpbqg\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.451708 kubelet[2596]: I1213 16:15:02.451336 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90f2c999-1c29-4719-9df2-143166ef6081-bpf-maps\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.451708 kubelet[2596]: I1213 16:15:02.451442 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90f2c999-1c29-4719-9df2-143166ef6081-cilium-cgroup\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.451708 kubelet[2596]: I1213 16:15:02.451576 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90f2c999-1c29-4719-9df2-143166ef6081-etc-cni-netd\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.451708 kubelet[2596]: I1213 16:15:02.451669 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90f2c999-1c29-4719-9df2-143166ef6081-lib-modules\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.452367 kubelet[2596]: I1213 16:15:02.451796 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/90f2c999-1c29-4719-9df2-143166ef6081-clustermesh-secrets\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.452367 kubelet[2596]: I1213 16:15:02.451902 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90f2c999-1c29-4719-9df2-143166ef6081-host-proc-sys-net\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.452367 kubelet[2596]: I1213 16:15:02.451965 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90f2c999-1c29-4719-9df2-143166ef6081-host-proc-sys-kernel\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.452367 kubelet[2596]: I1213 16:15:02.452029 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90f2c999-1c29-4719-9df2-143166ef6081-cni-path\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.452367 kubelet[2596]: I1213 16:15:02.452088 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90f2c999-1c29-4719-9df2-143166ef6081-cilium-run\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.452367 kubelet[2596]: I1213 16:15:02.452220 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90f2c999-1c29-4719-9df2-143166ef6081-xtables-lock\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.452991 kubelet[2596]: I1213 16:15:02.452333 2596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90f2c999-1c29-4719-9df2-143166ef6081-cilium-config-path\") pod \"cilium-2t255\" (UID: \"90f2c999-1c29-4719-9df2-143166ef6081\") " pod="kube-system/cilium-2t255" Dec 13 16:15:02.640166 env[1563]: time="2024-12-13T16:15:02.639869829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2t255,Uid:90f2c999-1c29-4719-9df2-143166ef6081,Namespace:kube-system,Attempt:0,}" Dec 13 16:15:02.662506 env[1563]: time="2024-12-13T16:15:02.662340540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 16:15:02.662506 env[1563]: time="2024-12-13T16:15:02.662437938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 16:15:02.662857 env[1563]: time="2024-12-13T16:15:02.662499589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 16:15:02.663141 env[1563]: time="2024-12-13T16:15:02.663005768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3 pid=4938 runtime=io.containerd.runc.v2 Dec 13 16:15:02.690716 systemd[1]: Started cri-containerd-f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3.scope. Dec 13 16:15:02.721527 env[1563]: time="2024-12-13T16:15:02.721425211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2t255,Uid:90f2c999-1c29-4719-9df2-143166ef6081,Namespace:kube-system,Attempt:0,} returns sandbox id \"f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3\"" Dec 13 16:15:02.724408 env[1563]: time="2024-12-13T16:15:02.724368763Z" level=info msg="CreateContainer within sandbox \"f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 16:15:02.733175 env[1563]: time="2024-12-13T16:15:02.733094857Z" level=info msg="CreateContainer within sandbox \"f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"38906fc37bc36af656cf6c29883351f2d1a402fa653d0740d59aea2c7dfebc4f\"" Dec 13 16:15:02.733645 env[1563]: time="2024-12-13T16:15:02.733572716Z" level=info msg="StartContainer for \"38906fc37bc36af656cf6c29883351f2d1a402fa653d0740d59aea2c7dfebc4f\"" Dec 13 16:15:02.752441 systemd[1]: Started cri-containerd-38906fc37bc36af656cf6c29883351f2d1a402fa653d0740d59aea2c7dfebc4f.scope. Dec 13 16:15:02.782620 env[1563]: time="2024-12-13T16:15:02.782536097Z" level=info msg="StartContainer for \"38906fc37bc36af656cf6c29883351f2d1a402fa653d0740d59aea2c7dfebc4f\" returns successfully" Dec 13 16:15:02.794776 systemd[1]: cri-containerd-38906fc37bc36af656cf6c29883351f2d1a402fa653d0740d59aea2c7dfebc4f.scope: Deactivated successfully. 
Dec 13 16:15:02.847259 env[1563]: time="2024-12-13T16:15:02.847177820Z" level=info msg="shim disconnected" id=38906fc37bc36af656cf6c29883351f2d1a402fa653d0740d59aea2c7dfebc4f Dec 13 16:15:02.847259 env[1563]: time="2024-12-13T16:15:02.847257334Z" level=warning msg="cleaning up after shim disconnected" id=38906fc37bc36af656cf6c29883351f2d1a402fa653d0740d59aea2c7dfebc4f namespace=k8s.io Dec 13 16:15:02.847651 env[1563]: time="2024-12-13T16:15:02.847278869Z" level=info msg="cleaning up dead shim" Dec 13 16:15:02.858715 env[1563]: time="2024-12-13T16:15:02.858624731Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:15:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5025 runtime=io.containerd.runc.v2\n" Dec 13 16:15:03.006637 kubelet[2596]: I1213 16:15:03.006525 2596 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969" path="/var/lib/kubelet/pods/a9f4c2df-e85f-45ed-bc0f-d2cfb1c6b969/volumes" Dec 13 16:15:03.257306 env[1563]: time="2024-12-13T16:15:03.257099059Z" level=info msg="CreateContainer within sandbox \"f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 16:15:03.269714 env[1563]: time="2024-12-13T16:15:03.269622603Z" level=info msg="CreateContainer within sandbox \"f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6e796666bc7193886ce12e2fb0d0f26f1503af7b1acb06cd254b3633c8bdc387\"" Dec 13 16:15:03.270693 env[1563]: time="2024-12-13T16:15:03.270590864Z" level=info msg="StartContainer for \"6e796666bc7193886ce12e2fb0d0f26f1503af7b1acb06cd254b3633c8bdc387\"" Dec 13 16:15:03.305923 systemd[1]: Started cri-containerd-6e796666bc7193886ce12e2fb0d0f26f1503af7b1acb06cd254b3633c8bdc387.scope. Dec 13 16:15:03.345399 env[1563]: time="2024-12-13T16:15:03.345335186Z" level=info msg="StartContainer for \"6e796666bc7193886ce12e2fb0d0f26f1503af7b1acb06cd254b3633c8bdc387\" returns successfully" Dec 13 16:15:03.358253 systemd[1]: cri-containerd-6e796666bc7193886ce12e2fb0d0f26f1503af7b1acb06cd254b3633c8bdc387.scope: Deactivated successfully. 
Dec 13 16:15:03.386417 env[1563]: time="2024-12-13T16:15:03.386326237Z" level=info msg="shim disconnected" id=6e796666bc7193886ce12e2fb0d0f26f1503af7b1acb06cd254b3633c8bdc387 Dec 13 16:15:03.386417 env[1563]: time="2024-12-13T16:15:03.386414418Z" level=warning msg="cleaning up after shim disconnected" id=6e796666bc7193886ce12e2fb0d0f26f1503af7b1acb06cd254b3633c8bdc387 namespace=k8s.io Dec 13 16:15:03.386799 env[1563]: time="2024-12-13T16:15:03.386437249Z" level=info msg="cleaning up dead shim" Dec 13 16:15:03.398786 env[1563]: time="2024-12-13T16:15:03.398721875Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:15:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5086 runtime=io.containerd.runc.v2\n" Dec 13 16:15:04.264181 env[1563]: time="2024-12-13T16:15:04.264085671Z" level=info msg="CreateContainer within sandbox \"f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 16:15:04.283258 env[1563]: time="2024-12-13T16:15:04.283234792Z" level=info msg="CreateContainer within sandbox \"f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"83c8e822f873e816054d467c4d2bfd9295853a618c5dd2eadd567aa7bfe9a399\"" Dec 13 16:15:04.283605 env[1563]: time="2024-12-13T16:15:04.283570585Z" level=info msg="StartContainer for \"83c8e822f873e816054d467c4d2bfd9295853a618c5dd2eadd567aa7bfe9a399\"" Dec 13 16:15:04.284193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount431903992.mount: Deactivated successfully. Dec 13 16:15:04.292909 systemd[1]: Started cri-containerd-83c8e822f873e816054d467c4d2bfd9295853a618c5dd2eadd567aa7bfe9a399.scope. Dec 13 16:15:04.305469 env[1563]: time="2024-12-13T16:15:04.305412942Z" level=info msg="StartContainer for \"83c8e822f873e816054d467c4d2bfd9295853a618c5dd2eadd567aa7bfe9a399\" returns successfully" Dec 13 16:15:04.306886 systemd[1]: cri-containerd-83c8e822f873e816054d467c4d2bfd9295853a618c5dd2eadd567aa7bfe9a399.scope: Deactivated successfully. Dec 13 16:15:04.317057 env[1563]: time="2024-12-13T16:15:04.316998444Z" level=info msg="shim disconnected" id=83c8e822f873e816054d467c4d2bfd9295853a618c5dd2eadd567aa7bfe9a399 Dec 13 16:15:04.317057 env[1563]: time="2024-12-13T16:15:04.317024779Z" level=warning msg="cleaning up after shim disconnected" id=83c8e822f873e816054d467c4d2bfd9295853a618c5dd2eadd567aa7bfe9a399 namespace=k8s.io Dec 13 16:15:04.317057 env[1563]: time="2024-12-13T16:15:04.317030883Z" level=info msg="cleaning up dead shim" Dec 13 16:15:04.320722 env[1563]: time="2024-12-13T16:15:04.320701782Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:15:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5142 runtime=io.containerd.runc.v2\n" Dec 13 16:15:04.566334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83c8e822f873e816054d467c4d2bfd9295853a618c5dd2eadd567aa7bfe9a399-rootfs.mount: Deactivated successfully. 
Dec 13 16:15:05.272332 env[1563]: time="2024-12-13T16:15:05.272231536Z" level=info msg="CreateContainer within sandbox \"f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 16:15:05.281730 env[1563]: time="2024-12-13T16:15:05.281707333Z" level=info msg="CreateContainer within sandbox \"f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"44f0bd65e464814540c88e3a4dcd5a913ba294a15849df72be3814b476f24a53\"" Dec 13 16:15:05.282105 env[1563]: time="2024-12-13T16:15:05.282093525Z" level=info msg="StartContainer for \"44f0bd65e464814540c88e3a4dcd5a913ba294a15849df72be3814b476f24a53\"" Dec 13 16:15:05.283095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2418635781.mount: Deactivated successfully. Dec 13 16:15:05.290919 systemd[1]: Started cri-containerd-44f0bd65e464814540c88e3a4dcd5a913ba294a15849df72be3814b476f24a53.scope. Dec 13 16:15:05.302434 env[1563]: time="2024-12-13T16:15:05.302409996Z" level=info msg="StartContainer for \"44f0bd65e464814540c88e3a4dcd5a913ba294a15849df72be3814b476f24a53\" returns successfully" Dec 13 16:15:05.302592 systemd[1]: cri-containerd-44f0bd65e464814540c88e3a4dcd5a913ba294a15849df72be3814b476f24a53.scope: Deactivated successfully. Dec 13 16:15:05.311757 env[1563]: time="2024-12-13T16:15:05.311687635Z" level=info msg="shim disconnected" id=44f0bd65e464814540c88e3a4dcd5a913ba294a15849df72be3814b476f24a53 Dec 13 16:15:05.311757 env[1563]: time="2024-12-13T16:15:05.311715381Z" level=warning msg="cleaning up after shim disconnected" id=44f0bd65e464814540c88e3a4dcd5a913ba294a15849df72be3814b476f24a53 namespace=k8s.io Dec 13 16:15:05.311757 env[1563]: time="2024-12-13T16:15:05.311721430Z" level=info msg="cleaning up dead shim" Dec 13 16:15:05.315452 env[1563]: time="2024-12-13T16:15:05.315433800Z" level=warning msg="cleanup warnings time=\"2024-12-13T16:15:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5195 runtime=io.containerd.runc.v2\n" Dec 13 16:15:05.565980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44f0bd65e464814540c88e3a4dcd5a913ba294a15849df72be3814b476f24a53-rootfs.mount: Deactivated successfully. Dec 13 16:15:06.138986 kubelet[2596]: E1213 16:15:06.138881 2596 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 16:15:06.288071 env[1563]: time="2024-12-13T16:15:06.287954398Z" level=info msg="CreateContainer within sandbox \"f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 16:15:06.310127 env[1563]: time="2024-12-13T16:15:06.310017519Z" level=info msg="CreateContainer within sandbox \"f78ea981d1f4df2a0155550f5ea1cc1cc9a497f356498f73652d289f23d2a7f3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef45861c304d21505f9389bf32fc1604439355274bd86ea84d0b47b82bc20d7d\"" Dec 13 16:15:06.311101 env[1563]: time="2024-12-13T16:15:06.311022214Z" level=info msg="StartContainer for \"ef45861c304d21505f9389bf32fc1604439355274bd86ea84d0b47b82bc20d7d\"" Dec 13 16:15:06.335379 systemd[1]: Started cri-containerd-ef45861c304d21505f9389bf32fc1604439355274bd86ea84d0b47b82bc20d7d.scope. 
Dec 13 16:15:06.366472 env[1563]: time="2024-12-13T16:15:06.366381370Z" level=info msg="StartContainer for \"ef45861c304d21505f9389bf32fc1604439355274bd86ea84d0b47b82bc20d7d\" returns successfully" Dec 13 16:15:06.569497 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 16:15:07.320062 kubelet[2596]: I1213 16:15:07.319995 2596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2t255" podStartSLOduration=5.319886135 podStartE2EDuration="5.319886135s" podCreationTimestamp="2024-12-13 16:15:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 16:15:07.319582636 +0000 UTC m=+426.373380527" watchObservedRunningTime="2024-12-13 16:15:07.319886135 +0000 UTC m=+426.373684015" Dec 13 16:15:09.822539 systemd-networkd[1311]: lxc_health: Link UP Dec 13 16:15:09.840489 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 16:15:09.840544 systemd-networkd[1311]: lxc_health: Gained carrier Dec 13 16:15:10.860028 kubelet[2596]: I1213 16:15:10.859924 2596 setters.go:568] "Node became not ready" node="ci-3510.3.6-a-6bc1e3250f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T16:15:10Z","lastTransitionTime":"2024-12-13T16:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 16:15:11.816610 systemd-networkd[1311]: lxc_health: Gained IPv6LL Dec 13 16:15:15.214451 sshd[4895]: pam_unix(sshd:session): session closed for user core Dec 13 16:15:15.215953 systemd[1]: sshd@28-147.28.180.91:22-139.178.89.65:58716.service: Deactivated successfully. Dec 13 16:15:15.216377 systemd[1]: session-30.scope: Deactivated successfully. Dec 13 16:15:15.216789 systemd-logind[1555]: Session 30 logged out. Waiting for processes to exit. Dec 13 16:15:15.217266 systemd-logind[1555]: Removed session 30.