Dec 13 02:31:40.569513 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:31:40.569525 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:31:40.569532 kernel: BIOS-provided physical RAM map:
Dec 13 02:31:40.569536 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Dec 13 02:31:40.569539 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Dec 13 02:31:40.569543 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Dec 13 02:31:40.569548 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Dec 13 02:31:40.569552 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Dec 13 02:31:40.569556 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819cbfff] usable
Dec 13 02:31:40.569559 kernel: BIOS-e820: [mem 0x00000000819cc000-0x00000000819ccfff] ACPI NVS
Dec 13 02:31:40.569564 kernel: BIOS-e820: [mem 0x00000000819cd000-0x00000000819cdfff] reserved
Dec 13 02:31:40.569568 kernel: BIOS-e820: [mem 0x00000000819ce000-0x000000008afccfff] usable
Dec 13 02:31:40.569572 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Dec 13 02:31:40.569575 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Dec 13 02:31:40.569580 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Dec 13 02:31:40.569585 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Dec 13 02:31:40.569589 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Dec 13 02:31:40.569594 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Dec 13 02:31:40.569598 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 02:31:40.569602 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Dec 13 02:31:40.569606 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Dec 13 02:31:40.569610 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 02:31:40.569614 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Dec 13 02:31:40.569618 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Dec 13 02:31:40.569622 kernel: NX (Execute Disable) protection: active
Dec 13 02:31:40.569627 kernel: SMBIOS 3.2.1 present.
Dec 13 02:31:40.569631 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022
Dec 13 02:31:40.569636 kernel: tsc: Detected 3400.000 MHz processor
Dec 13 02:31:40.569640 kernel: tsc: Detected 3399.906 MHz TSC
Dec 13 02:31:40.569644 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:31:40.569649 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:31:40.569653 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Dec 13 02:31:40.569657 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:31:40.569662 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Dec 13 02:31:40.569666 kernel: Using GB pages for direct mapping
Dec 13 02:31:40.569670 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:31:40.569675 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Dec 13 02:31:40.569680 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Dec 13 02:31:40.569684 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Dec 13 02:31:40.569688 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Dec 13 02:31:40.569694 kernel: ACPI: FACS 0x000000008C66CF80 000040
Dec 13 02:31:40.569699 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Dec 13 02:31:40.569704 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Dec 13 02:31:40.569709 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Dec 13 02:31:40.569714 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Dec 13 02:31:40.569718 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Dec 13 02:31:40.569723 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Dec 13 02:31:40.569727 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Dec 13 02:31:40.569732 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Dec 13 02:31:40.569737 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 02:31:40.569742 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Dec 13 02:31:40.569747 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Dec 13 02:31:40.569751 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 02:31:40.569756 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 02:31:40.569760 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Dec 13 02:31:40.569765 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Dec 13 02:31:40.569770 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 02:31:40.569774 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Dec 13 02:31:40.569780 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Dec 13 02:31:40.569784 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Dec 13 02:31:40.569789 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Dec 13 02:31:40.569793 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Dec 13 02:31:40.569798 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Dec 13 02:31:40.569803 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Dec 13 02:31:40.569807 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Dec 13 02:31:40.569812 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Dec 13 02:31:40.569816 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Dec 13 02:31:40.569822 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Dec 13 02:31:40.569826 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Dec 13 02:31:40.569831 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Dec 13 02:31:40.569836 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Dec 13 02:31:40.569840 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Dec 13 02:31:40.569845 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Dec 13 02:31:40.569849 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Dec 13 02:31:40.569854 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Dec 13 02:31:40.569859 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Dec 13 02:31:40.569864 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Dec 13 02:31:40.569868 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Dec 13 02:31:40.569873 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Dec 13 02:31:40.569878 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Dec 13 02:31:40.569882 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Dec 13 02:31:40.569887 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Dec 13 02:31:40.569891 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Dec 13 02:31:40.569896 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Dec 13 02:31:40.569901 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Dec 13 02:31:40.569906 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Dec 13 02:31:40.569910 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Dec 13 02:31:40.569915 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Dec 13 02:31:40.569919 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Dec 13 02:31:40.569924 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Dec 13 02:31:40.569928 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Dec 13 02:31:40.569933 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Dec 13 02:31:40.569938 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Dec 13 02:31:40.569943 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Dec 13 02:31:40.569948 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Dec 13 02:31:40.569952 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Dec 13 02:31:40.569957 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Dec 13 02:31:40.569962 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Dec 13 02:31:40.569966 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Dec 13 02:31:40.569971 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Dec 13 02:31:40.569975 kernel: No NUMA configuration found
Dec 13 02:31:40.569980 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Dec 13 02:31:40.569985 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Dec 13 02:31:40.569990 kernel: Zone ranges:
Dec 13 02:31:40.569995 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:31:40.569999 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 02:31:40.570004 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Dec 13 02:31:40.570009 kernel: Movable zone start for each node
Dec 13 02:31:40.570013 kernel: Early memory node ranges
Dec 13 02:31:40.570018 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Dec 13 02:31:40.570022 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Dec 13 02:31:40.570027 kernel: node 0: [mem 0x0000000040400000-0x00000000819cbfff]
Dec 13 02:31:40.570032 kernel: node 0: [mem 0x00000000819ce000-0x000000008afccfff]
Dec 13 02:31:40.570037 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Dec 13 02:31:40.570042 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Dec 13 02:31:40.570046 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Dec 13 02:31:40.570051 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Dec 13 02:31:40.570055 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:31:40.570063 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Dec 13 02:31:40.570069 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 13 02:31:40.570074 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Dec 13 02:31:40.570079 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Dec 13 02:31:40.570085 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Dec 13 02:31:40.570090 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Dec 13 02:31:40.570095 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Dec 13 02:31:40.570100 kernel: ACPI: PM-Timer IO Port: 0x1808
Dec 13 02:31:40.570105 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 02:31:40.570110 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 02:31:40.570115 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 02:31:40.570121 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 02:31:40.570126 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 02:31:40.570131 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 02:31:40.570135 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 02:31:40.570140 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 02:31:40.570145 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 02:31:40.570150 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 02:31:40.570155 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 02:31:40.570160 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 02:31:40.570166 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 02:31:40.570170 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 02:31:40.570175 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 02:31:40.570180 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 02:31:40.570185 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Dec 13 02:31:40.570190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 02:31:40.570195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:31:40.570200 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:31:40.570205 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 02:31:40.570211 kernel: TSC deadline timer available
Dec 13 02:31:40.570215 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Dec 13 02:31:40.570220 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Dec 13 02:31:40.570225 kernel: Booting paravirtualized kernel on bare hardware
Dec 13 02:31:40.570230 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:31:40.570236 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 02:31:40.570240 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 02:31:40.570245 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 02:31:40.570250 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 02:31:40.570256 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Dec 13 02:31:40.570261 kernel: Policy zone: Normal
Dec 13 02:31:40.570266 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:31:40.570272 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:31:40.570277 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Dec 13 02:31:40.570282 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 13 02:31:40.570287 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:31:40.570292 kernel: Memory: 32722604K/33452980K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 730116K reserved, 0K cma-reserved)
Dec 13 02:31:40.570297 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 02:31:40.570302 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:31:40.570307 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:31:40.570312 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:31:40.570337 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:31:40.570342 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 02:31:40.570347 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:31:40.570352 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:31:40.570372 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:31:40.570377 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 02:31:40.570382 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Dec 13 02:31:40.570387 kernel: random: crng init done
Dec 13 02:31:40.570391 kernel: Console: colour dummy device 80x25
Dec 13 02:31:40.570396 kernel: printk: console [tty0] enabled
Dec 13 02:31:40.570401 kernel: printk: console [ttyS1] enabled
Dec 13 02:31:40.570406 kernel: ACPI: Core revision 20210730
Dec 13 02:31:40.570411 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Dec 13 02:31:40.570416 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:31:40.570422 kernel: DMAR: Host address width 39
Dec 13 02:31:40.570427 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Dec 13 02:31:40.570432 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Dec 13 02:31:40.570437 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Dec 13 02:31:40.570442 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Dec 13 02:31:40.570447 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Dec 13 02:31:40.570452 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec 13 02:31:40.570456 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Dec 13 02:31:40.570461 kernel: x2apic enabled
Dec 13 02:31:40.570467 kernel: Switched APIC routing to cluster x2apic.
Dec 13 02:31:40.570472 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Dec 13 02:31:40.570477 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Dec 13 02:31:40.570482 kernel: CPU0: Thermal monitoring enabled (TM1)
Dec 13 02:31:40.570487 kernel: process: using mwait in idle threads
Dec 13 02:31:40.570492 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 02:31:40.570497 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 02:31:40.570502 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:31:40.570506 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 02:31:40.570512 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 02:31:40.570517 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 02:31:40.570522 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 02:31:40.570527 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:31:40.570532 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 02:31:40.570537 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 02:31:40.570541 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 02:31:40.570546 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 02:31:40.570551 kernel: TAA: Mitigation: TSX disabled
Dec 13 02:31:40.570556 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 13 02:31:40.570561 kernel: SRBDS: Mitigation: Microcode
Dec 13 02:31:40.570566 kernel: GDS: Vulnerable: No microcode
Dec 13 02:31:40.570572 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:31:40.570576 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:31:40.570581 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:31:40.570586 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 02:31:40.570591 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 02:31:40.570596 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:31:40.570600 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 02:31:40.570605 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 02:31:40.570610 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Dec 13 02:31:40.570615 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:31:40.570621 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:31:40.570625 kernel: LSM: Security Framework initializing
Dec 13 02:31:40.570630 kernel: SELinux: Initializing.
Dec 13 02:31:40.570635 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 02:31:40.570640 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 02:31:40.570645 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Dec 13 02:31:40.570650 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 02:31:40.570655 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Dec 13 02:31:40.570660 kernel: ... version: 4
Dec 13 02:31:40.570665 kernel: ... bit width: 48
Dec 13 02:31:40.570669 kernel: ... generic registers: 4
Dec 13 02:31:40.570675 kernel: ... value mask: 0000ffffffffffff
Dec 13 02:31:40.570680 kernel: ... max period: 00007fffffffffff
Dec 13 02:31:40.570685 kernel: ... fixed-purpose events: 3
Dec 13 02:31:40.570690 kernel: ... event mask: 000000070000000f
Dec 13 02:31:40.570695 kernel: signal: max sigframe size: 2032
Dec 13 02:31:40.570700 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:31:40.570705 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Dec 13 02:31:40.570710 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:31:40.570714 kernel: x86: Booting SMP configuration:
Dec 13 02:31:40.570720 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Dec 13 02:31:40.570725 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:31:40.570730 kernel: #9 #10 #11 #12 #13 #14 #15
Dec 13 02:31:40.570735 kernel: smp: Brought up 1 node, 16 CPUs
Dec 13 02:31:40.570740 kernel: smpboot: Max logical packages: 1
Dec 13 02:31:40.570745 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Dec 13 02:31:40.570750 kernel: devtmpfs: initialized
Dec 13 02:31:40.570755 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:31:40.570760 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cc000-0x819ccfff] (4096 bytes)
Dec 13 02:31:40.570765 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Dec 13 02:31:40.570771 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:31:40.570775 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 02:31:40.570780 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:31:40.570785 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:31:40.570790 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:31:40.570795 kernel: audit: type=2000 audit(1734057095.041:1): state=initialized audit_enabled=0 res=1
Dec 13 02:31:40.570800 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:31:40.570805 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:31:40.570810 kernel: cpuidle: using governor menu
Dec 13 02:31:40.570815 kernel: ACPI: bus type PCI registered
Dec 13 02:31:40.570820 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:31:40.570825 kernel: dca service started, version 1.12.1
Dec 13 02:31:40.570830 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 02:31:40.570835 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Dec 13 02:31:40.570840 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:31:40.570845 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Dec 13 02:31:40.570850 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:31:40.570855 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:31:40.570860 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:31:40.570865 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:31:40.570870 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:31:40.570875 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:31:40.570880 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:31:40.570885 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:31:40.570890 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:31:40.570895 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:31:40.570900 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Dec 13 02:31:40.570905 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:31:40.570910 kernel: ACPI: SSDT 0xFFFF9DCF00218E00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Dec 13 02:31:40.570915 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Dec 13 02:31:40.570920 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:31:40.570925 kernel: ACPI: SSDT 0xFFFF9DCF01AE5800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Dec 13 02:31:40.570930 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:31:40.570935 kernel: ACPI: SSDT 0xFFFF9DCF01A5E000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Dec 13 02:31:40.570939 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:31:40.570945 kernel: ACPI: SSDT 0xFFFF9DCF01B48800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Dec 13 02:31:40.570950 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:31:40.570955 kernel: ACPI: SSDT 0xFFFF9DCF0014A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Dec 13 02:31:40.570960 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:31:40.570964 kernel: ACPI: SSDT 0xFFFF9DCF01AE5000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Dec 13 02:31:40.570969 kernel: ACPI: Interpreter enabled
Dec 13 02:31:40.570974 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:31:40.570979 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:31:40.570984 kernel: HEST: Enabling Firmware First mode for corrected errors.
Dec 13 02:31:40.570990 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Dec 13 02:31:40.570994 kernel: HEST: Table parsing has been initialized.
Dec 13 02:31:40.570999 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Dec 13 02:31:40.571004 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:31:40.571009 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Dec 13 02:31:40.571014 kernel: ACPI: PM: Power Resource [USBC]
Dec 13 02:31:40.571019 kernel: ACPI: PM: Power Resource [V0PR]
Dec 13 02:31:40.571024 kernel: ACPI: PM: Power Resource [V1PR]
Dec 13 02:31:40.571029 kernel: ACPI: PM: Power Resource [V2PR]
Dec 13 02:31:40.571034 kernel: ACPI: PM: Power Resource [WRST]
Dec 13 02:31:40.571039 kernel: ACPI: PM: Power Resource [FN00]
Dec 13 02:31:40.571044 kernel: ACPI: PM: Power Resource [FN01]
Dec 13 02:31:40.571049 kernel: ACPI: PM: Power Resource [FN02]
Dec 13 02:31:40.571054 kernel: ACPI: PM: Power Resource [FN03]
Dec 13 02:31:40.571058 kernel: ACPI: PM: Power Resource [FN04]
Dec 13 02:31:40.571063 kernel: ACPI: PM: Power Resource [PIN]
Dec 13 02:31:40.571068 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Dec 13 02:31:40.571134 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:31:40.571181 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Dec 13 02:31:40.571223 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Dec 13 02:31:40.571230 kernel: PCI host bridge to bus 0000:00
Dec 13 02:31:40.571273 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:31:40.571311 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:31:40.571384 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:31:40.571421 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Dec 13 02:31:40.571458 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Dec 13 02:31:40.571495 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Dec 13 02:31:40.571546 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Dec 13 02:31:40.571595 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Dec 13 02:31:40.571639 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec 13 02:31:40.571687 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Dec 13 02:31:40.571732 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Dec 13 02:31:40.571777 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Dec 13 02:31:40.571820 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Dec 13 02:31:40.571865 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Dec 13 02:31:40.571908 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Dec 13 02:31:40.571952 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec 13 02:31:40.571999 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Dec 13 02:31:40.572041 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Dec 13 02:31:40.572083 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Dec 13 02:31:40.572129 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Dec 13 02:31:40.572171 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 02:31:40.572219 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Dec 13 02:31:40.572262 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 02:31:40.572308 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Dec 13 02:31:40.572382 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Dec 13 02:31:40.572423 kernel: pci 0000:00:16.0: PME# supported from D3hot
Dec 13 02:31:40.572469 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Dec 13 02:31:40.572512 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Dec 13 02:31:40.572553 kernel: pci 0000:00:16.1: PME# supported from D3hot
Dec 13 02:31:40.572599 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Dec 13 02:31:40.572642 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Dec 13 02:31:40.572683 kernel: pci 0000:00:16.4: PME# supported from D3hot
Dec 13 02:31:40.572728 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Dec 13 02:31:40.572770 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Dec 13 02:31:40.572813 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Dec 13 02:31:40.572860 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Dec 13 02:31:40.572904 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Dec 13 02:31:40.572945 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Dec 13 02:31:40.572987 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Dec 13 02:31:40.573028 kernel: pci 0000:00:17.0: PME# supported from D3hot
Dec 13 02:31:40.573073 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Dec 13 02:31:40.573117 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec 13 02:31:40.573163 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Dec 13 02:31:40.573208 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Dec 13 02:31:40.573257 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Dec 13 02:31:40.573299 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Dec 13 02:31:40.573383 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Dec 13 02:31:40.573425 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec 13 02:31:40.573473 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Dec 13 02:31:40.573515 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Dec 13 02:31:40.573562 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Dec 13 02:31:40.573604 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 02:31:40.573651 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Dec 13 02:31:40.573697 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Dec 13 02:31:40.573739 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Dec 13 02:31:40.573781 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Dec 13 02:31:40.573828 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Dec 13 02:31:40.573871 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Dec 13 02:31:40.573921 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 02:31:40.573966 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Dec 13 02:31:40.574009 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Dec 13 02:31:40.574053 kernel: pci 0000:01:00.0: PME# supported from D3cold
Dec 13 02:31:40.574095 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 02:31:40.574139 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 02:31:40.574186 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Dec 13
02:31:40.574232 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Dec 13 02:31:40.574276 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Dec 13 02:31:40.574341 kernel: pci 0000:01:00.1: PME# supported from D3cold Dec 13 02:31:40.574386 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Dec 13 02:31:40.574430 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Dec 13 02:31:40.574474 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 02:31:40.574516 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Dec 13 02:31:40.574561 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Dec 13 02:31:40.574604 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Dec 13 02:31:40.574652 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Dec 13 02:31:40.574697 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Dec 13 02:31:40.574741 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Dec 13 02:31:40.574785 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Dec 13 02:31:40.574830 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Dec 13 02:31:40.574875 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Dec 13 02:31:40.574919 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Dec 13 02:31:40.574962 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Dec 13 02:31:40.575006 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Dec 13 02:31:40.575054 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Dec 13 02:31:40.575128 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Dec 13 02:31:40.575194 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Dec 13 02:31:40.575237 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Dec 13 02:31:40.575283 kernel: pci 0000:04:00.0: reg 0x1c: [mem 
0x95380000-0x95383fff] Dec 13 02:31:40.575330 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Dec 13 02:31:40.575374 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Dec 13 02:31:40.575417 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Dec 13 02:31:40.575460 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Dec 13 02:31:40.575503 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Dec 13 02:31:40.575550 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Dec 13 02:31:40.575596 kernel: pci 0000:06:00.0: enabling Extended Tags Dec 13 02:31:40.575642 kernel: pci 0000:06:00.0: supports D1 D2 Dec 13 02:31:40.575686 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 02:31:40.575728 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Dec 13 02:31:40.575770 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Dec 13 02:31:40.575813 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Dec 13 02:31:40.575862 kernel: pci_bus 0000:07: extended config space not accessible Dec 13 02:31:40.575912 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Dec 13 02:31:40.575961 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Dec 13 02:31:40.576009 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Dec 13 02:31:40.576055 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Dec 13 02:31:40.576102 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 02:31:40.576147 kernel: pci 0000:07:00.0: supports D1 D2 Dec 13 02:31:40.576193 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 02:31:40.576239 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Dec 13 02:31:40.576285 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Dec 13 02:31:40.576331 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Dec 13 02:31:40.576339 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 
Dec 13 02:31:40.576345 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Dec 13 02:31:40.576350 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Dec 13 02:31:40.576356 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Dec 13 02:31:40.576361 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Dec 13 02:31:40.576366 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Dec 13 02:31:40.576372 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Dec 13 02:31:40.576378 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Dec 13 02:31:40.576384 kernel: iommu: Default domain type: Translated Dec 13 02:31:40.576389 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 02:31:40.576435 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Dec 13 02:31:40.576480 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 02:31:40.576528 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Dec 13 02:31:40.576536 kernel: vgaarb: loaded Dec 13 02:31:40.576543 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 02:31:40.576549 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 02:31:40.576554 kernel: PTP clock support registered Dec 13 02:31:40.576560 kernel: PCI: Using ACPI for IRQ routing Dec 13 02:31:40.576565 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 02:31:40.576571 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Dec 13 02:31:40.576576 kernel: e820: reserve RAM buffer [mem 0x819cc000-0x83ffffff] Dec 13 02:31:40.576581 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Dec 13 02:31:40.576586 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Dec 13 02:31:40.576592 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Dec 13 02:31:40.576598 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Dec 13 02:31:40.576603 kernel: clocksource: Switched to clocksource tsc-early Dec 13 02:31:40.576608 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 02:31:40.576614 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 02:31:40.576619 kernel: pnp: PnP ACPI init Dec 13 02:31:40.576663 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Dec 13 02:31:40.576708 kernel: pnp 00:02: [dma 0 disabled] Dec 13 02:31:40.576749 kernel: pnp 00:03: [dma 0 disabled] Dec 13 02:31:40.576794 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Dec 13 02:31:40.576833 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Dec 13 02:31:40.576873 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Dec 13 02:31:40.576915 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Dec 13 02:31:40.576953 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Dec 13 02:31:40.576992 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Dec 13 02:31:40.577031 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Dec 13 02:31:40.577069 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Dec 13 02:31:40.577107 kernel: system 00:06: [mem 
0xfed90000-0xfed93fff] could not be reserved Dec 13 02:31:40.577145 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Dec 13 02:31:40.577182 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved Dec 13 02:31:40.577224 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Dec 13 02:31:40.577263 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Dec 13 02:31:40.577302 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Dec 13 02:31:40.577361 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Dec 13 02:31:40.577399 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Dec 13 02:31:40.577436 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Dec 13 02:31:40.577473 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Dec 13 02:31:40.577513 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Dec 13 02:31:40.577521 kernel: pnp: PnP ACPI: found 10 devices Dec 13 02:31:40.577527 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 02:31:40.577533 kernel: NET: Registered PF_INET protocol family Dec 13 02:31:40.577538 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 02:31:40.577543 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 02:31:40.577549 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 02:31:40.577554 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 02:31:40.577560 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 02:31:40.577565 kernel: TCP: Hash tables configured (established 262144 bind 65536) Dec 13 02:31:40.577570 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 13 02:31:40.577577 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 
bytes, linear) Dec 13 02:31:40.577582 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 02:31:40.577587 kernel: NET: Registered PF_XDP protocol family Dec 13 02:31:40.577631 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Dec 13 02:31:40.577672 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Dec 13 02:31:40.577715 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Dec 13 02:31:40.577758 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Dec 13 02:31:40.577803 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Dec 13 02:31:40.577847 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Dec 13 02:31:40.577891 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Dec 13 02:31:40.577932 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 13 02:31:40.577975 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Dec 13 02:31:40.578017 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Dec 13 02:31:40.578059 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Dec 13 02:31:40.578103 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Dec 13 02:31:40.578145 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Dec 13 02:31:40.578187 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Dec 13 02:31:40.578230 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Dec 13 02:31:40.578272 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Dec 13 02:31:40.578315 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Dec 13 02:31:40.578379 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Dec 13 02:31:40.578426 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Dec 13 02:31:40.578470 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Dec 13 02:31:40.578514 kernel: pci 0000:06:00.0: bridge window [mem 
0x94000000-0x950fffff] Dec 13 02:31:40.578558 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Dec 13 02:31:40.578601 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Dec 13 02:31:40.578644 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Dec 13 02:31:40.578683 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Dec 13 02:31:40.578721 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 02:31:40.578758 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 02:31:40.578798 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 02:31:40.578835 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Dec 13 02:31:40.578873 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Dec 13 02:31:40.578918 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Dec 13 02:31:40.578958 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Dec 13 02:31:40.579005 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Dec 13 02:31:40.579047 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Dec 13 02:31:40.579091 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Dec 13 02:31:40.579130 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Dec 13 02:31:40.579174 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Dec 13 02:31:40.579213 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Dec 13 02:31:40.579256 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Dec 13 02:31:40.579297 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Dec 13 02:31:40.579306 kernel: PCI: CLS 64 bytes, default 64 Dec 13 02:31:40.579312 kernel: DMAR: No ATSR found Dec 13 02:31:40.579319 kernel: DMAR: No SATC found Dec 13 02:31:40.579325 kernel: DMAR: dmar0: Using Queued invalidation Dec 13 02:31:40.579368 kernel: pci 0000:00:00.0: Adding to iommu group 0 Dec 13 
02:31:40.579413 kernel: pci 0000:00:01.0: Adding to iommu group 1 Dec 13 02:31:40.579456 kernel: pci 0000:00:08.0: Adding to iommu group 2 Dec 13 02:31:40.579500 kernel: pci 0000:00:12.0: Adding to iommu group 3 Dec 13 02:31:40.579544 kernel: pci 0000:00:14.0: Adding to iommu group 4 Dec 13 02:31:40.579588 kernel: pci 0000:00:14.2: Adding to iommu group 4 Dec 13 02:31:40.579629 kernel: pci 0000:00:15.0: Adding to iommu group 5 Dec 13 02:31:40.579672 kernel: pci 0000:00:15.1: Adding to iommu group 5 Dec 13 02:31:40.579714 kernel: pci 0000:00:16.0: Adding to iommu group 6 Dec 13 02:31:40.579756 kernel: pci 0000:00:16.1: Adding to iommu group 6 Dec 13 02:31:40.579799 kernel: pci 0000:00:16.4: Adding to iommu group 6 Dec 13 02:31:40.579841 kernel: pci 0000:00:17.0: Adding to iommu group 7 Dec 13 02:31:40.579885 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Dec 13 02:31:40.579929 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Dec 13 02:31:40.579973 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Dec 13 02:31:40.580016 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Dec 13 02:31:40.580057 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Dec 13 02:31:40.580100 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Dec 13 02:31:40.580142 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Dec 13 02:31:40.580185 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Dec 13 02:31:40.580227 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Dec 13 02:31:40.580274 kernel: pci 0000:01:00.0: Adding to iommu group 1 Dec 13 02:31:40.580321 kernel: pci 0000:01:00.1: Adding to iommu group 1 Dec 13 02:31:40.580367 kernel: pci 0000:03:00.0: Adding to iommu group 15 Dec 13 02:31:40.580411 kernel: pci 0000:04:00.0: Adding to iommu group 16 Dec 13 02:31:40.580456 kernel: pci 0000:06:00.0: Adding to iommu group 17 Dec 13 02:31:40.580502 kernel: pci 0000:07:00.0: Adding to iommu group 17 Dec 13 02:31:40.580510 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Dec 13 
02:31:40.580515 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 02:31:40.580522 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Dec 13 02:31:40.580528 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Dec 13 02:31:40.580533 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Dec 13 02:31:40.580539 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Dec 13 02:31:40.580544 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Dec 13 02:31:40.580608 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Dec 13 02:31:40.580616 kernel: Initialise system trusted keyrings Dec 13 02:31:40.580621 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Dec 13 02:31:40.580628 kernel: Key type asymmetric registered Dec 13 02:31:40.580633 kernel: Asymmetric key parser 'x509' registered Dec 13 02:31:40.580638 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 02:31:40.580644 kernel: io scheduler mq-deadline registered Dec 13 02:31:40.580649 kernel: io scheduler kyber registered Dec 13 02:31:40.580654 kernel: io scheduler bfq registered Dec 13 02:31:40.580697 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Dec 13 02:31:40.580738 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Dec 13 02:31:40.580783 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Dec 13 02:31:40.580828 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Dec 13 02:31:40.580871 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Dec 13 02:31:40.580912 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Dec 13 02:31:40.580959 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Dec 13 02:31:40.580967 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Dec 13 02:31:40.580973 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Dec 13 02:31:40.580978 kernel: pstore: Registered erst as persistent store backend Dec 13 02:31:40.580985 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 02:31:40.580990 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:31:40.580995 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 02:31:40.581001 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 02:31:40.581006 kernel: hpet_acpi_add: no address or irqs in _CRS Dec 13 02:31:40.581050 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Dec 13 02:31:40.581058 kernel: i8042: PNP: No PS/2 controller found. Dec 13 02:31:40.581095 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Dec 13 02:31:40.581138 kernel: rtc_cmos rtc_cmos: registered as rtc0 Dec 13 02:31:40.581176 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-12-13T02:31:39 UTC (1734057099) Dec 13 02:31:40.581216 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Dec 13 02:31:40.581223 kernel: fail to initialize ptp_kvm Dec 13 02:31:40.581228 kernel: intel_pstate: Intel P-state driver initializing Dec 13 02:31:40.581234 kernel: intel_pstate: Disabling energy efficiency optimization Dec 13 02:31:40.581239 kernel: intel_pstate: HWP enabled Dec 13 02:31:40.581244 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Dec 13 02:31:40.581250 kernel: vesafb: scrolling: redraw Dec 13 02:31:40.581256 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Dec 13 02:31:40.581262 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000002279dfe8, using 768k, total 768k Dec 13 02:31:40.581267 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 02:31:40.581272 kernel: fb0: VESA VGA frame buffer device Dec 13 02:31:40.581278 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:31:40.581283 kernel: Segment Routing with IPv6 Dec 13 02:31:40.581288 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 
02:31:40.581294 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:31:40.581299 kernel: Key type dns_resolver registered Dec 13 02:31:40.581305 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Dec 13 02:31:40.581310 kernel: microcode: Microcode Update Driver: v2.2. Dec 13 02:31:40.581340 kernel: IPI shorthand broadcast: enabled Dec 13 02:31:40.581345 kernel: sched_clock: Marking stable (1736597560, 1339496924)->(4518976750, -1442882266) Dec 13 02:31:40.581351 kernel: registered taskstats version 1 Dec 13 02:31:40.581356 kernel: Loading compiled-in X.509 certificates Dec 13 02:31:40.581381 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 02:31:40.581386 kernel: Key type .fscrypt registered Dec 13 02:31:40.581391 kernel: Key type fscrypt-provisioning registered Dec 13 02:31:40.581397 kernel: pstore: Using crash dump compression: deflate Dec 13 02:31:40.581403 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:31:40.581408 kernel: ima: No architecture policies found Dec 13 02:31:40.581413 kernel: clk: Disabling unused clocks Dec 13 02:31:40.581419 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 02:31:40.581424 kernel: Write protecting the kernel read-only data: 28672k Dec 13 02:31:40.581429 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 02:31:40.581435 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 02:31:40.581440 kernel: Run /init as init process Dec 13 02:31:40.581446 kernel: with arguments: Dec 13 02:31:40.581451 kernel: /init Dec 13 02:31:40.581457 kernel: with environment: Dec 13 02:31:40.581462 kernel: HOME=/ Dec 13 02:31:40.581467 kernel: TERM=linux Dec 13 02:31:40.581472 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:31:40.581479 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 
+IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:31:40.581485 systemd[1]: Detected architecture x86-64. Dec 13 02:31:40.581492 systemd[1]: Running in initrd. Dec 13 02:31:40.581498 systemd[1]: No hostname configured, using default hostname. Dec 13 02:31:40.581503 systemd[1]: Hostname set to . Dec 13 02:31:40.581508 systemd[1]: Initializing machine ID from random generator. Dec 13 02:31:40.581514 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:31:40.581520 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:31:40.581525 systemd[1]: Reached target cryptsetup.target. Dec 13 02:31:40.581531 systemd[1]: Reached target paths.target. Dec 13 02:31:40.581537 systemd[1]: Reached target slices.target. Dec 13 02:31:40.581542 systemd[1]: Reached target swap.target. Dec 13 02:31:40.581548 systemd[1]: Reached target timers.target. Dec 13 02:31:40.581553 systemd[1]: Listening on iscsid.socket. Dec 13 02:31:40.581559 systemd[1]: Listening on iscsiuio.socket. Dec 13 02:31:40.581564 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:31:40.581570 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:31:40.581576 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:31:40.581582 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Dec 13 02:31:40.581587 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Dec 13 02:31:40.581593 kernel: clocksource: Switched to clocksource tsc Dec 13 02:31:40.581598 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:31:40.581604 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:31:40.581609 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:31:40.581615 systemd[1]: Reached target sockets.target. 
Dec 13 02:31:40.581621 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:31:40.581627 systemd[1]: Finished network-cleanup.service. Dec 13 02:31:40.581633 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:31:40.581638 systemd[1]: Starting systemd-journald.service... Dec 13 02:31:40.581644 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:31:40.581651 systemd-journald[267]: Journal started Dec 13 02:31:40.581677 systemd-journald[267]: Runtime Journal (/run/log/journal/0e08b90489544d61bce19b16691add83) is 8.0M, max 640.1M, 632.1M free. Dec 13 02:31:40.583550 systemd-modules-load[268]: Inserted module 'overlay' Dec 13 02:31:40.588000 audit: BPF prog-id=6 op=LOAD Dec 13 02:31:40.607367 kernel: audit: type=1334 audit(1734057100.588:2): prog-id=6 op=LOAD Dec 13 02:31:40.607384 systemd[1]: Starting systemd-resolved.service... Dec 13 02:31:40.656355 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 02:31:40.656370 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 02:31:40.689353 kernel: Bridge firewalling registered Dec 13 02:31:40.689369 systemd[1]: Started systemd-journald.service. Dec 13 02:31:40.703388 systemd-modules-load[268]: Inserted module 'br_netfilter' Dec 13 02:31:40.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:40.705949 systemd-resolved[270]: Positive Trust Anchors: Dec 13 02:31:40.809054 kernel: audit: type=1130 audit(1734057100.710:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:31:40.809065 kernel: SCSI subsystem initialized Dec 13 02:31:40.809072 kernel: audit: type=1130 audit(1734057100.763:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:40.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:40.705954 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:31:40.917844 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:31:40.917856 kernel: audit: type=1130 audit(1734057100.834:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:40.917865 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:31:40.917871 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:31:40.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:31:40.705974 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:31:40.999532 kernel: audit: type=1130 audit(1734057100.934:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:40.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:40.707494 systemd-resolved[270]: Defaulting to hostname 'linux'. Dec 13 02:31:41.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:40.711590 systemd[1]: Started systemd-resolved.service. Dec 13 02:31:41.108433 kernel: audit: type=1130 audit(1734057101.006:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:41.108444 kernel: audit: type=1130 audit(1734057101.061:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:31:41.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:40.764489 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:31:40.835489 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 02:31:40.931606 systemd-modules-load[268]: Inserted module 'dm_multipath' Dec 13 02:31:40.935611 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:31:41.028050 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 02:31:41.062567 systemd[1]: Reached target nss-lookup.target. Dec 13 02:31:41.117920 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 02:31:41.137960 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:31:41.138259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:31:41.141119 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:31:41.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:41.141881 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:31:41.190393 kernel: audit: type=1130 audit(1734057101.139:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:41.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:41.203677 systemd[1]: Finished dracut-cmdline-ask.service. 
Dec 13 02:31:41.267422 kernel: audit: type=1130 audit(1734057101.202:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:41.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:41.259995 systemd[1]: Starting dracut-cmdline.service... Dec 13 02:31:41.281385 dracut-cmdline[293]: dracut-dracut-053 Dec 13 02:31:41.281385 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 02:31:41.281385 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:31:41.351397 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:31:41.351409 kernel: iscsi: registered transport (tcp) Dec 13 02:31:41.408348 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:31:41.408392 kernel: QLogic iSCSI HBA Driver Dec 13 02:31:41.424385 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:31:41.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:31:41.424982 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 02:31:41.480351 kernel: raid6: avx2x4 gen() 48092 MB/s
Dec 13 02:31:41.515356 kernel: raid6: avx2x4 xor() 14278 MB/s
Dec 13 02:31:41.550378 kernel: raid6: avx2x2 gen() 51575 MB/s
Dec 13 02:31:41.585349 kernel: raid6: avx2x2 xor() 32141 MB/s
Dec 13 02:31:41.620383 kernel: raid6: avx2x1 gen() 44468 MB/s
Dec 13 02:31:41.654348 kernel: raid6: avx2x1 xor() 27392 MB/s
Dec 13 02:31:41.688380 kernel: raid6: sse2x4 gen() 20907 MB/s
Dec 13 02:31:41.722353 kernel: raid6: sse2x4 xor() 11504 MB/s
Dec 13 02:31:41.756349 kernel: raid6: sse2x2 gen() 21161 MB/s
Dec 13 02:31:41.790348 kernel: raid6: sse2x2 xor() 13109 MB/s
Dec 13 02:31:41.824348 kernel: raid6: sse2x1 gen() 17916 MB/s
Dec 13 02:31:41.876256 kernel: raid6: sse2x1 xor() 8739 MB/s
Dec 13 02:31:41.876272 kernel: raid6: using algorithm avx2x2 gen() 51575 MB/s
Dec 13 02:31:41.876280 kernel: raid6: .... xor() 32141 MB/s, rmw enabled
Dec 13 02:31:41.894554 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 02:31:41.941379 kernel: xor: automatically using best checksumming function avx
Dec 13 02:31:42.020322 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 02:31:42.025280 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 02:31:42.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:42.024000 audit: BPF prog-id=7 op=LOAD
Dec 13 02:31:42.024000 audit: BPF prog-id=8 op=LOAD
Dec 13 02:31:42.026125 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:31:42.033627 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Dec 13 02:31:42.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:42.048752 systemd[1]: Started systemd-udevd.service.
Dec 13 02:31:42.088432 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation
Dec 13 02:31:42.066247 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 02:31:42.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:42.093849 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 02:31:42.106209 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:31:42.175439 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:31:42.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:42.203324 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:31:42.246530 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 02:31:42.246579 kernel: AES CTR mode by8 optimization enabled
Dec 13 02:31:42.247321 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Dec 13 02:31:42.247340 kernel: libata version 3.00 loaded.
Dec 13 02:31:42.247351 kernel: ACPI: bus type USB registered
Dec 13 02:31:42.247359 kernel: usbcore: registered new interface driver usbfs
Dec 13 02:31:42.247367 kernel: usbcore: registered new interface driver hub
Dec 13 02:31:42.247375 kernel: usbcore: registered new device driver usb
Dec 13 02:31:42.265321 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Dec 13 02:31:42.317360 kernel: pps pps0: new PPS source ptp0
Dec 13 02:31:42.381060 kernel: igb 0000:03:00.0: added PHC on eth0
Dec 13 02:31:42.450703 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Dec 13 02:31:42.450765 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:44
Dec 13 02:31:42.450820 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Dec 13 02:31:42.450873 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Dec 13 02:31:42.486696 kernel: mlx5_core 0000:01:00.0: firmware version: 14.29.2002
Dec 13 02:31:43.110175 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Dec 13 02:31:43.110250 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Dec 13 02:31:43.110304 kernel: pps pps1: new PPS source ptp1
Dec 13 02:31:43.110374 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Dec 13 02:31:43.110426 kernel: igb 0000:04:00.0: added PHC on eth1
Dec 13 02:31:43.110479 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Dec 13 02:31:43.110527 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Dec 13 02:31:43.110579 kernel: ahci 0000:00:17.0: version 3.0
Dec 13 02:31:43.110629 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Dec 13 02:31:43.110678 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Dec 13 02:31:43.110725 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Dec 13 02:31:43.110771 kernel: hub 1-0:1.0: USB hub found
Dec 13 02:31:43.110833 kernel: hub 1-0:1.0: 16 ports detected
Dec 13 02:31:43.110885 kernel: hub 2-0:1.0: USB hub found
Dec 13 02:31:43.110945 kernel: hub 2-0:1.0: 10 ports detected
Dec 13 02:31:43.110998 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:45
Dec 13 02:31:43.111048 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Dec 13 02:31:43.111097 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Dec 13 02:31:43.111144 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Dec 13 02:31:43.111193 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Dec 13 02:31:43.111240 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Dec 13 02:31:43.111289 kernel: scsi host0: ahci
Dec 13 02:31:43.111350 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Dec 13 02:31:43.111401 kernel: scsi host1: ahci
Dec 13 02:31:43.111454 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Dec 13 02:31:43.111506 kernel: scsi host2: ahci
Dec 13 02:31:43.111559 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Dec 13 02:31:43.112036 kernel: scsi host3: ahci
Dec 13 02:31:43.112104 kernel: scsi host4: ahci
Dec 13 02:31:43.112161 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Dec 13 02:31:43.112214 kernel: scsi host5: ahci
Dec 13 02:31:43.112267 kernel: scsi host6: ahci
Dec 13 02:31:43.112323 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 138
Dec 13 02:31:43.112331 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 138
Dec 13 02:31:43.112337 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 138
Dec 13 02:31:43.112346 kernel: hub 1-14:1.0: USB hub found
Dec 13 02:31:43.112406 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 138
Dec 13 02:31:43.112414 kernel: hub 1-14:1.0: 4 ports detected
Dec 13 02:31:43.112468 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 138
Dec 13 02:31:43.112475 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 138
Dec 13 02:31:43.112482 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 138
Dec 13 02:31:43.112488 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
Dec 13 02:31:43.112538 kernel: mlx5_core 0000:01:00.1: firmware version: 14.29.2002
Dec 13 02:31:43.658672 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Dec 13 02:31:43.658745 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Dec 13 02:31:43.658863 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dec 13 02:31:43.658872 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 02:31:43.658880 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 02:31:43.658888 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Dec 13 02:31:43.658896 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 02:31:43.658905 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Dec 13 02:31:43.658968 kernel: port_module: 9 callbacks suppressed
Dec 13 02:31:43.658977 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Dec 13 02:31:43.659035 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 02:31:43.659043 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Dec 13 02:31:43.659100 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133
Dec 13 02:31:43.659109 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 02:31:43.659116 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133
Dec 13 02:31:43.659126 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Dec 13 02:31:43.659133 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Dec 13 02:31:43.659141 kernel: ata1.00: Features: NCQ-prio
Dec 13 02:31:43.659149 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
Dec 13 02:31:43.659207 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Dec 13 02:31:43.678950 kernel: ata2.00: Features: NCQ-prio
Dec 13 02:31:43.697318 kernel: ata1.00: configured for UDMA/133
Dec 13 02:31:43.697333 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5
Dec 13 02:31:43.716330 kernel: ata2.00: configured for UDMA/133
Dec 13 02:31:43.731362 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5
Dec 13 02:31:43.769320 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Dec 13 02:31:43.801571 kernel: usbcore: registered new interface driver usbhid
Dec 13 02:31:43.801630 kernel: usbhid: USB HID core driver
Dec 13 02:31:43.837319 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Dec 13 02:31:43.837336 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 02:31:43.852440 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Dec 13 02:31:43.852515 kernel: ata2.00: Enabling discard_zeroes_data
Dec 13 02:31:43.885106 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Dec 13 02:31:44.191449 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Dec 13 02:31:44.350242 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Dec 13 02:31:44.350346 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Dec 13 02:31:44.350359 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Dec 13 02:31:44.350471 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Dec 13 02:31:44.350573 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Dec 13 02:31:44.350644 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 02:31:44.350735 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Dec 13 02:31:44.350828 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 02:31:44.350913 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 02:31:44.350972 kernel: ata2.00: Enabling discard_zeroes_data
Dec 13 02:31:44.350980 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Dec 13 02:31:44.351040 kernel: ata2.00: Enabling discard_zeroes_data
Dec 13 02:31:44.351052 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 02:31:44.351142 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Dec 13 02:31:44.351236 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 02:31:44.351248 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:31:44.351260 kernel: GPT:9289727 != 937703087
Dec 13 02:31:44.351271 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:31:44.351282 kernel: GPT:9289727 != 937703087
Dec 13 02:31:44.351293 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:31:44.351306 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:31:44.351321 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 02:31:44.351333 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 02:31:44.380969 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 02:31:44.431514 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (675)
Dec 13 02:31:44.410575 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 02:31:44.413264 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 02:31:44.449770 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 02:31:44.454500 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:31:44.518405 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 02:31:44.518445 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:31:44.478085 systemd[1]: Starting disk-uuid.service...
Dec 13 02:31:44.532403 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 02:31:44.532571 disk-uuid[688]: Primary Header is updated.
Dec 13 02:31:44.532571 disk-uuid[688]: Secondary Entries is updated.
Dec 13 02:31:44.532571 disk-uuid[688]: Secondary Header is updated.
Dec 13 02:31:44.575375 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:31:45.540187 kernel: ata1.00: Enabling discard_zeroes_data
Dec 13 02:31:45.560019 disk-uuid[689]: The operation has completed successfully.
Dec 13 02:31:45.568662 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:31:45.603984 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:31:45.698920 kernel: audit: type=1130 audit(1734057105.610:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:45.698939 kernel: audit: type=1131 audit(1734057105.610:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:45.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:45.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:45.604027 systemd[1]: Finished disk-uuid.service.
Dec 13 02:31:45.728427 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 02:31:45.612016 systemd[1]: Starting verity-setup.service...
Dec 13 02:31:45.758840 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 02:31:45.768280 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 02:31:45.774522 systemd[1]: Finished verity-setup.service.
Dec 13 02:31:45.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:45.842323 kernel: audit: type=1130 audit(1734057105.792:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:45.898368 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 02:31:45.898329 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 02:31:45.905604 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 02:31:45.982381 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:31:45.982395 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:31:45.982403 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:31:45.906008 systemd[1]: Starting ignition-setup.service...
Dec 13 02:31:46.010416 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:31:45.930092 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 02:31:46.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:45.996550 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 02:31:46.121412 kernel: audit: type=1130 audit(1734057106.018:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.121430 kernel: audit: type=1130 audit(1734057106.073:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.019879 systemd[1]: Finished ignition-setup.service.
Dec 13 02:31:46.128000 audit: BPF prog-id=9 op=LOAD
Dec 13 02:31:46.151336 kernel: audit: type=1334 audit(1734057106.128:24): prog-id=9 op=LOAD
Dec 13 02:31:46.074996 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 02:31:46.130277 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:31:46.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.166076 systemd-networkd[869]: lo: Link UP
Dec 13 02:31:46.236524 kernel: audit: type=1130 audit(1734057106.173:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.166078 systemd-networkd[869]: lo: Gained carrier
Dec 13 02:31:46.166366 systemd-networkd[869]: Enumeration completed
Dec 13 02:31:46.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.166411 systemd[1]: Started systemd-networkd.service.
Dec 13 02:31:46.319415 kernel: audit: type=1130 audit(1734057106.256:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.319427 iscsid[881]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:31:46.319427 iscsid[881]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 02:31:46.319427 iscsid[881]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 02:31:46.319427 iscsid[881]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 02:31:46.319427 iscsid[881]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 02:31:46.319427 iscsid[881]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:31:46.319427 iscsid[881]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 02:31:46.530522 kernel: audit: type=1130 audit(1734057106.325:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.530655 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Dec 13 02:31:46.530860 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready
Dec 13 02:31:46.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.167140 systemd-networkd[869]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:31:46.416597 ignition[863]: Ignition 2.14.0
Dec 13 02:31:46.174642 systemd[1]: Reached target network.target.
Dec 13 02:31:46.416602 ignition[863]: Stage: fetch-offline
Dec 13 02:31:46.229881 systemd[1]: Starting iscsiuio.service...
Dec 13 02:31:46.611511 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Dec 13 02:31:46.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:46.416626 ignition[863]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:31:46.243482 systemd[1]: Started iscsiuio.service.
Dec 13 02:31:46.416638 ignition[863]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 02:31:46.257906 systemd[1]: Starting iscsid.service...
Dec 13 02:31:46.419520 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 02:31:46.312616 systemd[1]: Started iscsid.service.
Dec 13 02:31:46.419584 ignition[863]: parsed url from cmdline: ""
Dec 13 02:31:46.326815 systemd[1]: Starting dracut-initqueue.service...
Dec 13 02:31:46.419586 ignition[863]: no config URL provided
Dec 13 02:31:46.394328 systemd-networkd[869]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:31:46.419589 ignition[863]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:31:46.407531 systemd[1]: Finished dracut-initqueue.service.
Dec 13 02:31:46.419600 ignition[863]: parsing config with SHA512: 5905fe0473eef39049cc0c8222a17b3c546e38d3296e431ebbd8941124c5b0c571481914419cdbf943c50eaa030029441ea3ec3e066e7e0d804c6942301146e5
Dec 13 02:31:46.424758 unknown[863]: fetched base config from "system"
Dec 13 02:31:46.424986 ignition[863]: fetch-offline: fetch-offline passed
Dec 13 02:31:46.424762 unknown[863]: fetched user config from "system"
Dec 13 02:31:46.424989 ignition[863]: POST message to Packet Timeline
Dec 13 02:31:46.468526 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 02:31:46.424993 ignition[863]: POST Status error: resource requires networking
Dec 13 02:31:46.493563 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 02:31:46.425033 ignition[863]: Ignition finished successfully
Dec 13 02:31:46.511463 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:31:46.562253 ignition[900]: Ignition 2.14.0
Dec 13 02:31:46.511513 systemd[1]: Reached target remote-fs.target.
Dec 13 02:31:46.562257 ignition[900]: Stage: kargs
Dec 13 02:31:46.539980 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 02:31:46.562327 ignition[900]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:31:46.556822 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 02:31:46.562338 ignition[900]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 02:31:46.557194 systemd[1]: Starting ignition-kargs.service...
Dec 13 02:31:46.564748 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 02:31:46.574669 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 02:31:46.565354 ignition[900]: kargs: kargs passed
Dec 13 02:31:46.598121 systemd-networkd[869]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:31:46.565358 ignition[900]: POST message to Packet Timeline
Dec 13 02:31:46.626737 systemd-networkd[869]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:31:46.565369 ignition[900]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 02:31:46.655239 systemd-networkd[869]: enp1s0f1np1: Link UP
Dec 13 02:31:46.566797 ignition[900]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34903->[::1]:53: read: connection refused
Dec 13 02:31:46.655468 systemd-networkd[869]: enp1s0f1np1: Gained carrier
Dec 13 02:31:46.767327 ignition[900]: GET https://metadata.packet.net/metadata: attempt #2
Dec 13 02:31:46.667720 systemd-networkd[869]: enp1s0f0np0: Link UP
Dec 13 02:31:46.768649 ignition[900]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55082->[::1]:53: read: connection refused
Dec 13 02:31:46.668008 systemd-networkd[869]: eno2: Link UP
Dec 13 02:31:46.668271 systemd-networkd[869]: eno1: Link UP
Dec 13 02:31:47.169305 ignition[900]: GET https://metadata.packet.net/metadata: attempt #3
Dec 13 02:31:47.170563 ignition[900]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:51287->[::1]:53: read: connection refused
Dec 13 02:31:47.436897 systemd-networkd[869]: enp1s0f0np0: Gained carrier
Dec 13 02:31:47.445532 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready
Dec 13 02:31:47.468505 systemd-networkd[869]: enp1s0f0np0: DHCPv4 address 139.178.70.191/31, gateway 139.178.70.190 acquired from 145.40.83.140
Dec 13 02:31:47.863731 systemd-networkd[869]: enp1s0f1np1: Gained IPv6LL
Dec 13 02:31:47.970983 ignition[900]: GET https://metadata.packet.net/metadata: attempt #4
Dec 13 02:31:47.972342 ignition[900]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44718->[::1]:53: read: connection refused
Dec 13 02:31:48.631928 systemd-networkd[869]: enp1s0f0np0: Gained IPv6LL
Dec 13 02:31:49.573653 ignition[900]: GET https://metadata.packet.net/metadata: attempt #5
Dec 13 02:31:49.574727 ignition[900]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54034->[::1]:53: read: connection refused
Dec 13 02:31:52.778361 ignition[900]: GET https://metadata.packet.net/metadata: attempt #6
Dec 13 02:31:53.763145 ignition[900]: GET result: OK
Dec 13 02:31:54.158306 ignition[900]: Ignition finished successfully
Dec 13 02:31:54.162778 systemd[1]: Finished ignition-kargs.service.
Dec 13 02:31:54.244917 kernel: kauditd_printk_skb: 3 callbacks suppressed
Dec 13 02:31:54.244933 kernel: audit: type=1130 audit(1734057114.173:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:54.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:54.182636 ignition[914]: Ignition 2.14.0
Dec 13 02:31:54.176658 systemd[1]: Starting ignition-disks.service...
Dec 13 02:31:54.182639 ignition[914]: Stage: disks
Dec 13 02:31:54.182694 ignition[914]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:31:54.182703 ignition[914]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 02:31:54.184131 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 02:31:54.185581 ignition[914]: disks: disks passed
Dec 13 02:31:54.185584 ignition[914]: POST message to Packet Timeline
Dec 13 02:31:54.185595 ignition[914]: GET https://metadata.packet.net/metadata: attempt #1
Dec 13 02:31:54.641487 ignition[914]: GET result: OK
Dec 13 02:31:55.252811 ignition[914]: Ignition finished successfully
Dec 13 02:31:55.255798 systemd[1]: Finished ignition-disks.service.
Dec 13 02:31:55.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:55.269851 systemd[1]: Reached target initrd-root-device.target.
Dec 13 02:31:55.344583 kernel: audit: type=1130 audit(1734057115.268:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:55.330535 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:31:55.330571 systemd[1]: Reached target local-fs.target.
Dec 13 02:31:55.353532 systemd[1]: Reached target sysinit.target.
Dec 13 02:31:55.367537 systemd[1]: Reached target basic.target.
Dec 13 02:31:55.382232 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 02:31:55.403221 systemd-fsck[930]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 02:31:55.415837 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 02:31:55.502284 kernel: audit: type=1130 audit(1734057115.423:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:55.502299 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 02:31:55.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:55.430232 systemd[1]: Mounting sysroot.mount...
Dec 13 02:31:55.510960 systemd[1]: Mounted sysroot.mount.
Dec 13 02:31:55.526657 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 02:31:55.535351 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 02:31:55.557345 systemd[1]: Starting flatcar-metadata-hostname.service...
Dec 13 02:31:55.573069 systemd[1]: Starting flatcar-static-network.service...
Dec 13 02:31:55.588563 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:31:55.588616 systemd[1]: Reached target ignition-diskful.target.
Dec 13 02:31:55.607452 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 02:31:55.630809 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:31:55.643616 systemd[1]: Starting initrd-setup-root.service...
Dec 13 02:31:55.768685 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (944)
Dec 13 02:31:55.768706 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:31:55.768715 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:31:55.768722 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:31:55.768730 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:31:55.705288 systemd[1]: Finished initrd-setup-root.service.
Dec 13 02:31:55.830423 kernel: audit: type=1130 audit(1734057115.776:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:55.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:55.830461 coreos-metadata[939]: Dec 13 02:31:55.707 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 02:31:55.851559 coreos-metadata[938]: Dec 13 02:31:55.707 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Dec 13 02:31:55.870414 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:31:55.778611 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:31:55.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:55.920515 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:31:55.953550 kernel: audit: type=1130 audit(1734057115.886:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:55.838909 systemd[1]: Starting ignition-mount.service...
Dec 13 02:31:55.961576 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:31:55.858873 systemd[1]: Starting sysroot-boot.service...
Dec 13 02:31:55.978549 initrd-setup-root[973]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:31:55.878051 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:31:55.998477 ignition[1016]: INFO : Ignition 2.14.0
Dec 13 02:31:55.998477 ignition[1016]: INFO : Stage: mount
Dec 13 02:31:55.998477 ignition[1016]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:31:55.998477 ignition[1016]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 02:31:55.998477 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 02:31:55.998477 ignition[1016]: INFO : mount: mount passed
Dec 13 02:31:55.998477 ignition[1016]: INFO : POST message to Packet Timeline
Dec 13 02:31:55.998477 ignition[1016]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 02:31:55.878090 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:31:55.878675 systemd[1]: Finished sysroot-boot.service.
Dec 13 02:31:56.189634 coreos-metadata[938]: Dec 13 02:31:56.189 INFO Fetch successful
Dec 13 02:31:56.264559 coreos-metadata[938]: Dec 13 02:31:56.264 INFO wrote hostname ci-3510.3.6-a-01e29b9675 to /sysroot/etc/hostname
Dec 13 02:31:56.264978 systemd[1]: Finished flatcar-metadata-hostname.service.
Dec 13 02:31:56.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:56.341359 kernel: audit: type=1130 audit(1734057116.284:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:56.898040 coreos-metadata[939]: Dec 13 02:31:56.897 INFO Fetch successful
Dec 13 02:31:56.970590 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Dec 13 02:31:56.970648 systemd[1]: Finished flatcar-static-network.service.
Dec 13 02:31:56.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:56.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:57.101876 kernel: audit: type=1130 audit(1734057116.986:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:57.101895 kernel: audit: type=1131 audit(1734057116.986:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:57.125243 ignition[1016]: INFO : GET result: OK
Dec 13 02:31:57.577431 ignition[1016]: INFO : Ignition finished successfully
Dec 13 02:31:57.580353 systemd[1]: Finished ignition-mount.service.
Dec 13 02:31:57.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:57.596457 systemd[1]: Starting ignition-files.service...
Dec 13 02:31:57.664519 kernel: audit: type=1130 audit(1734057117.593:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:31:57.659093 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:31:57.720992 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1031)
Dec 13 02:31:57.721007 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:31:57.721015 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:31:57.744612 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:31:57.794324 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:31:57.795432 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:31:57.812480 ignition[1050]: INFO : Ignition 2.14.0
Dec 13 02:31:57.812480 ignition[1050]: INFO : Stage: files
Dec 13 02:31:57.812480 ignition[1050]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:31:57.812480 ignition[1050]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 02:31:57.812480 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 02:31:57.892454 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1054)
Dec 13 02:31:57.815865 unknown[1050]: wrote ssh authorized keys file for user: core
Dec 13 02:31:57.900422 ignition[1050]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 02:31:57.900422 ignition[1050]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:31:57.900422 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1432988953"
Dec 13 02:31:58.165651 ignition[1050]: CRITICAL : files: createFilesystemsFiles: createFiles: op(7): op(8): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1432988953": device or resource busy
Dec 13 02:31:58.165651 ignition[1050]: ERROR : files: createFilesystemsFiles: createFiles: op(7): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1432988953", trying btrfs: device or resource busy
Dec 13 02:31:58.165651 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1432988953"
Dec 13 02:31:58.165651 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(9): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1432988953"
Dec 13 02:31:58.165651 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [started] unmounting "/mnt/oem1432988953"
Dec 13 02:31:58.165651 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): op(a): [finished] unmounting "/mnt/oem1432988953"
Dec 13 02:31:58.165651 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
Dec 13 02:31:58.165651 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:31:58.165651 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 02:31:58.325405 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 02:31:58.470239 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:31:58.470239 ignition[1050]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:31:58.470239 ignition[1050]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:31:58.470239 ignition[1050]: INFO : files: op(d): [started] processing unit "packet-phone-home.service"
Dec 13 02:31:58.470239 ignition[1050]: INFO : files: op(d): [finished] processing unit "packet-phone-home.service"
Dec 13 02:31:58.470239 ignition[1050]: INFO : files: op(e): [started] processing unit "containerd.service"
Dec 13 02:31:58.470239 ignition[1050]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 02:31:58.567648 ignition[1050]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 02:31:58.567648 ignition[1050]: INFO : files: op(e): [finished] processing unit "containerd.service"
Dec 13 02:31:58.567648 ignition[1050]: INFO : files: op(10): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:31:58.567648 ignition[1050]: INFO : files: op(10): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:31:58.567648 ignition[1050]: INFO : files: op(11): [started] setting preset to enabled for "packet-phone-home.service"
Dec 13 02:31:58.567648 ignition[1050]: INFO : files: op(11): [finished] setting preset to enabled for "packet-phone-home.service"
Dec 13 02:31:58.567648 ignition[1050]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:31:58.567648 ignition[1050]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:31:58.567648 ignition[1050]: INFO : files: files passed
Dec 13 02:31:58.567648 ignition[1050]: INFO : POST message to Packet Timeline
Dec 13 02:31:58.567648 ignition[1050]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 02:31:59.561354 ignition[1050]: INFO : GET result: OK
Dec 13 02:32:00.054957 ignition[1050]: INFO : Ignition finished successfully
Dec 13 02:32:00.058064 systemd[1]: Finished ignition-files.service.
Dec 13 02:32:00.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.078380 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 02:32:00.151572 kernel: audit: type=1130 audit(1734057120.071:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.140568 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 02:32:00.175503 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:32:00.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.140958 systemd[1]: Starting ignition-quench.service...
Dec 13 02:32:00.365370 kernel: audit: type=1130 audit(1734057120.185:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.365410 kernel: audit: type=1130 audit(1734057120.251:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.365418 kernel: audit: type=1131 audit(1734057120.251:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.158690 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 02:32:00.186737 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:32:00.186814 systemd[1]: Finished ignition-quench.service.
Dec 13 02:32:00.521839 kernel: audit: type=1130 audit(1734057120.406:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.521855 kernel: audit: type=1131 audit(1734057120.406:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.252583 systemd[1]: Reached target ignition-complete.target.
Dec 13 02:32:00.373924 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 02:32:00.395192 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:32:00.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.395240 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 02:32:00.645404 kernel: audit: type=1130 audit(1734057120.569:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.427695 systemd[1]: Reached target initrd-fs.target.
Dec 13 02:32:00.531536 systemd[1]: Reached target initrd.target.
Dec 13 02:32:00.531669 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 02:32:00.532032 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 02:32:00.554709 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 02:32:00.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.571164 systemd[1]: Starting initrd-cleanup.service...
Dec 13 02:32:00.792547 kernel: audit: type=1131 audit(1734057120.708:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.640453 systemd[1]: Stopped target nss-lookup.target.
Dec 13 02:32:00.653614 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 02:32:00.669614 systemd[1]: Stopped target timers.target.
Dec 13 02:32:00.693658 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:32:00.693783 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 02:32:00.709998 systemd[1]: Stopped target initrd.target.
Dec 13 02:32:00.785711 systemd[1]: Stopped target basic.target.
Dec 13 02:32:00.799639 systemd[1]: Stopped target ignition-complete.target.
Dec 13 02:32:00.821684 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 02:32:00.836694 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 02:32:00.852737 systemd[1]: Stopped target remote-fs.target.
Dec 13 02:32:00.868875 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 02:32:00.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.884013 systemd[1]: Stopped target sysinit.target.
Dec 13 02:32:01.050547 kernel: audit: type=1131 audit(1734057120.962:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.900023 systemd[1]: Stopped target local-fs.target.
Dec 13 02:32:01.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.915981 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 02:32:01.135554 kernel: audit: type=1131 audit(1734057121.058:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:01.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:00.932980 systemd[1]: Stopped target swap.target.
Dec 13 02:32:00.947878 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:32:00.948252 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 02:32:00.964227 systemd[1]: Stopped target cryptsetup.target.
Dec 13 02:32:01.043698 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:32:01.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:01.043776 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 02:32:01.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:01.059757 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:32:01.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:01.059830 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 02:32:01.261489 ignition[1098]: INFO : Ignition 2.14.0
Dec 13 02:32:01.261489 ignition[1098]: INFO : Stage: umount
Dec 13 02:32:01.261489 ignition[1098]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:32:01.261489 ignition[1098]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
Dec 13 02:32:01.261489 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Dec 13 02:32:01.261489 ignition[1098]: INFO : umount: umount passed
Dec 13 02:32:01.261489 ignition[1098]: INFO : POST message to Packet Timeline
Dec 13 02:32:01.261489 ignition[1098]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Dec 13 02:32:01.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:01.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:01.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:01.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:01.128759 systemd[1]: Stopped target paths.target.
Dec 13 02:32:01.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:01.414889 iscsid[881]: iscsid shutting down.
Dec 13 02:32:01.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:01.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:01.143548 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:32:01.148544 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 02:32:01.150706 systemd[1]: Stopped target slices.target.
Dec 13 02:32:01.173705 systemd[1]: Stopped target sockets.target.
Dec 13 02:32:01.188737 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:32:01.188876 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 02:32:01.205850 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:32:01.206032 systemd[1]: Stopped ignition-files.service.
Dec 13 02:32:01.222081 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 02:32:01.222458 systemd[1]: Stopped flatcar-metadata-hostname.service.
Dec 13 02:32:01.239353 systemd[1]: Stopping ignition-mount.service...
Dec 13 02:32:01.253615 systemd[1]: Stopping iscsid.service...
Dec 13 02:32:01.269106 systemd[1]: Stopping sysroot-boot.service...
Dec 13 02:32:01.275546 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:32:01.275681 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 02:32:01.287779 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:32:01.287906 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 02:32:01.323115 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:32:01.324920 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 02:32:01.325159 systemd[1]: Stopped iscsid.service.
Dec 13 02:32:01.342669 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:32:01.342885 systemd[1]: Stopped sysroot-boot.service.
Dec 13 02:32:01.360994 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:32:01.361171 systemd[1]: Closed iscsid.socket.
Dec 13 02:32:01.375826 systemd[1]: Stopping iscsiuio.service...
Dec 13 02:32:01.391047 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 02:32:01.391290 systemd[1]: Stopped iscsiuio.service.
Dec 13 02:32:01.408238 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:32:01.408476 systemd[1]: Finished initrd-cleanup.service.
Dec 13 02:32:01.424773 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:32:01.424866 systemd[1]: Closed iscsiuio.socket.
Dec 13 02:32:02.350564 ignition[1098]: INFO : GET result: OK
Dec 13 02:32:02.719109 ignition[1098]: INFO : Ignition finished successfully
Dec 13 02:32:02.720969 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:32:02.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.721116 systemd[1]: Stopped ignition-mount.service.
Dec 13 02:32:02.735799 systemd[1]: Stopped target network.target.
Dec 13 02:32:02.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.751570 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:32:02.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.751731 systemd[1]: Stopped ignition-disks.service.
Dec 13 02:32:02.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.767664 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:32:02.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.767791 systemd[1]: Stopped ignition-kargs.service.
Dec 13 02:32:02.782835 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:32:02.782987 systemd[1]: Stopped ignition-setup.service.
Dec 13 02:32:02.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.799725 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:32:02.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.878000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 02:32:02.799871 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 02:32:02.815000 systemd[1]: Stopping systemd-networkd.service...
Dec 13 02:32:02.820453 systemd-networkd[869]: enp1s0f0np0: DHCPv6 lease lost
Dec 13 02:32:02.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.828503 systemd-networkd[869]: enp1s0f1np1: DHCPv6 lease lost
Dec 13 02:32:02.932000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 02:32:02.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.829746 systemd[1]: Stopping systemd-resolved.service...
Dec 13 02:32:02.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.845162 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:32:02.845423 systemd[1]: Stopped systemd-resolved.service.
Dec 13 02:32:02.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.862993 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:32:02.863334 systemd[1]: Stopped systemd-networkd.service.
Dec 13 02:32:02.878005 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:32:03.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.878099 systemd[1]: Closed systemd-networkd.socket.
Dec 13 02:32:03.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.897135 systemd[1]: Stopping network-cleanup.service...
Dec 13 02:32:03.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.909528 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:32:02.909676 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 02:32:03.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.925788 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:32:03.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.925943 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:32:03.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.941973 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:32:03.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:03.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:32:02.942116 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 02:32:02.957836 systemd[1]: Stopping systemd-udevd.service...
Dec 13 02:32:02.976232 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 02:32:02.977643 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:32:02.977702 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:32:02.989795 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:32:02.989829 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:32:03.003428 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:32:03.003454 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:32:03.019452 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:32:03.019505 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:32:03.034594 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:32:03.034678 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:32:03.051668 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:32:03.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:03.051796 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:32:03.069293 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:32:03.082395 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 02:32:03.311000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:32:03.311000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:32:03.311000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:32:03.311000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:32:03.311000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:32:03.082427 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 02:32:03.098472 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:32:03.098504 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:32:03.114435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 02:32:03.114480 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:32:03.132299 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 02:32:03.133260 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:32:03.133425 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:32:03.256042 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:32:03.351333 systemd-journald[267]: Failed to send stream file descriptor to service manager: Connection refused Dec 13 02:32:03.351354 systemd-journald[267]: Received SIGTERM from PID 1 (n/a). Dec 13 02:32:03.256283 systemd[1]: Stopped network-cleanup.service. Dec 13 02:32:03.270897 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:32:03.291492 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:32:03.310719 systemd[1]: Switching root. Dec 13 02:32:03.351442 systemd-journald[267]: Journal stopped Dec 13 02:32:07.133466 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:32:07.133479 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 02:32:07.133488 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:32:07.133494 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:32:07.133499 kernel: SELinux: policy capability open_perms=1 Dec 13 02:32:07.133504 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:32:07.133511 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:32:07.133517 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:32:07.133522 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:32:07.133528 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:32:07.133534 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:32:07.133540 systemd[1]: Successfully loaded SELinux policy in 317.489ms. 
Dec 13 02:32:07.133547 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.178ms. Dec 13 02:32:07.133554 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:32:07.133562 systemd[1]: Detected architecture x86-64. Dec 13 02:32:07.133568 systemd[1]: Detected first boot. Dec 13 02:32:07.133573 systemd[1]: Hostname set to . Dec 13 02:32:07.133580 systemd[1]: Initializing machine ID from random generator. Dec 13 02:32:07.133586 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 02:32:07.133591 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:32:07.133598 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:32:07.133605 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:32:07.133612 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:32:07.133618 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:32:07.133625 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 02:32:07.133631 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 02:32:07.133637 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 02:32:07.133645 systemd[1]: Created slice system-getty.slice. Dec 13 02:32:07.133651 systemd[1]: Created slice system-modprobe.slice. 
Dec 13 02:32:07.133657 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 02:32:07.133663 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 02:32:07.133670 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 02:32:07.133676 systemd[1]: Created slice user.slice. Dec 13 02:32:07.133682 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:32:07.133689 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:32:07.133695 systemd[1]: Set up automount boot.automount. Dec 13 02:32:07.133702 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:32:07.133708 systemd[1]: Reached target integritysetup.target. Dec 13 02:32:07.133714 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:32:07.133720 systemd[1]: Reached target remote-fs.target. Dec 13 02:32:07.133728 systemd[1]: Reached target slices.target. Dec 13 02:32:07.133734 systemd[1]: Reached target swap.target. Dec 13 02:32:07.133740 systemd[1]: Reached target torcx.target. Dec 13 02:32:07.133747 systemd[1]: Reached target veritysetup.target. Dec 13 02:32:07.133754 systemd[1]: Listening on systemd-coredump.socket. Dec 13 02:32:07.133760 systemd[1]: Listening on systemd-initctl.socket. Dec 13 02:32:07.133766 kernel: kauditd_printk_skb: 50 callbacks suppressed Dec 13 02:32:07.133772 kernel: audit: type=1400 audit(1734057126.378:93): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:32:07.133779 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:32:07.133786 kernel: audit: type=1335 audit(1734057126.378:94): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 02:32:07.133792 systemd[1]: Listening on systemd-journald-dev-log.socket. 
Dec 13 02:32:07.133798 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:32:07.133805 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:32:07.133812 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:32:07.133818 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:32:07.133825 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 02:32:07.133833 systemd[1]: Mounting dev-hugepages.mount... Dec 13 02:32:07.133839 systemd[1]: Mounting dev-mqueue.mount... Dec 13 02:32:07.133846 systemd[1]: Mounting media.mount... Dec 13 02:32:07.133852 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:07.133859 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 02:32:07.133865 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 02:32:07.133872 systemd[1]: Mounting tmp.mount... Dec 13 02:32:07.133878 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 02:32:07.133885 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:32:07.133892 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:32:07.133898 systemd[1]: Starting modprobe@configfs.service... Dec 13 02:32:07.133905 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:32:07.133912 systemd[1]: Starting modprobe@drm.service... Dec 13 02:32:07.133918 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:32:07.133925 systemd[1]: Starting modprobe@fuse.service... Dec 13 02:32:07.133931 kernel: fuse: init (API version 7.34) Dec 13 02:32:07.133937 systemd[1]: Starting modprobe@loop.service... Dec 13 02:32:07.133943 kernel: loop: module loaded Dec 13 02:32:07.133950 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Dec 13 02:32:07.133957 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 02:32:07.133964 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 02:32:07.133970 systemd[1]: Starting systemd-journald.service... Dec 13 02:32:07.133977 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:32:07.133983 kernel: audit: type=1305 audit(1734057127.130:95): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:32:07.133991 systemd-journald[1292]: Journal started Dec 13 02:32:07.134017 systemd-journald[1292]: Runtime Journal (/run/log/journal/0d41a43625cf4edba694285323f9acf2) is 8.0M, max 640.1M, 632.1M free. Dec 13 02:32:06.378000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:32:06.378000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 02:32:07.130000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:32:07.130000 audit[1292]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd815beee0 a2=4000 a3=7ffd815bef7c items=0 ppid=1 pid=1292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:32:07.130000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:32:07.180356 kernel: audit: type=1300 audit(1734057127.130:95): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd815beee0 a2=4000 a3=7ffd815bef7c items=0 ppid=1 
pid=1292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:32:07.180378 kernel: audit: type=1327 audit(1734057127.130:95): proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:32:07.294515 systemd[1]: Starting systemd-network-generator.service... Dec 13 02:32:07.321348 systemd[1]: Starting systemd-remount-fs.service... Dec 13 02:32:07.347354 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:32:07.391359 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:07.410350 systemd[1]: Started systemd-journald.service. Dec 13 02:32:07.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.420073 systemd[1]: Mounted dev-hugepages.mount. Dec 13 02:32:07.468512 kernel: audit: type=1130 audit(1734057127.418:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.474573 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:32:07.481572 systemd[1]: Mounted media.mount. Dec 13 02:32:07.488566 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:32:07.497597 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:32:07.506542 systemd[1]: Mounted tmp.mount. Dec 13 02:32:07.513666 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:32:07.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:32:07.522725 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:32:07.570357 kernel: audit: type=1130 audit(1734057127.521:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.578635 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:32:07.578712 systemd[1]: Finished modprobe@configfs.service. Dec 13 02:32:07.627351 kernel: audit: type=1130 audit(1734057127.577:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.635650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:32:07.635726 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:32:07.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.686362 kernel: audit: type=1130 audit(1734057127.634:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:32:07.686397 kernel: audit: type=1131 audit(1734057127.634:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.746654 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:32:07.746730 systemd[1]: Finished modprobe@drm.service. Dec 13 02:32:07.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.755648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:32:07.755722 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:32:07.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 02:32:07.764658 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:32:07.764731 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:32:07.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.773640 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:32:07.773717 systemd[1]: Finished modprobe@loop.service. Dec 13 02:32:07.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.782737 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:32:07.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.791675 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:32:07.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.800702 systemd[1]: Finished systemd-remount-fs.service. 
Dec 13 02:32:07.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.808744 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:32:07.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.817917 systemd[1]: Reached target network-pre.target. Dec 13 02:32:07.828254 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:32:07.838935 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 02:32:07.845546 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:32:07.846594 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:32:07.853936 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:32:07.857408 systemd-journald[1292]: Time spent on flushing to /var/log/journal/0d41a43625cf4edba694285323f9acf2 is 14.493ms for 1513 entries. Dec 13 02:32:07.857408 systemd-journald[1292]: System Journal (/var/log/journal/0d41a43625cf4edba694285323f9acf2) is 8.0M, max 195.6M, 187.6M free. Dec 13 02:32:07.906696 systemd-journald[1292]: Received client request to flush runtime journal. Dec 13 02:32:07.870433 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:32:07.870979 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:32:07.888433 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:32:07.889009 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:32:07.896970 systemd[1]: Starting systemd-sysusers.service... 
Dec 13 02:32:07.904036 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:32:07.913669 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:32:07.921512 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:32:07.929585 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:32:07.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.937561 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:32:07.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.945499 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:32:07.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.953544 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:32:07.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:07.962428 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:32:07.971148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:32:07.980510 udevadm[1319]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 02:32:07.989765 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Dec 13 02:32:07.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:08.149975 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:32:08.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:08.159266 systemd[1]: Starting systemd-udevd.service... Dec 13 02:32:08.171048 systemd-udevd[1326]: Using default interface naming scheme 'v252'. Dec 13 02:32:08.187401 systemd[1]: Started systemd-udevd.service. Dec 13 02:32:08.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:08.199054 systemd[1]: Found device dev-ttyS1.device. Dec 13 02:32:08.218594 systemd[1]: Starting systemd-networkd.service... Dec 13 02:32:08.246787 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Dec 13 02:32:08.246855 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 02:32:08.246874 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1400) Dec 13 02:32:08.261266 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:32:08.273319 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 02:32:08.310226 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). 
Dec 13 02:32:08.318337 kernel: IPMI message handler: version 39.2 Dec 13 02:32:08.318381 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:32:08.340321 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:32:08.404336 kernel: ipmi device interface Dec 13 02:32:08.246000 audit[1391]: AVC avc: denied { confidentiality } for pid=1391 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:32:08.411144 systemd[1]: Started systemd-userdbd.service. Dec 13 02:32:08.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:08.246000 audit[1391]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ec4ca00d50 a1=4d98c a2=7fca07ef5bc5 a3=5 items=42 ppid=1326 pid=1391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:32:08.246000 audit: CWD cwd="/" Dec 13 02:32:08.246000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=1 name=(null) inode=17924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=2 name=(null) inode=17924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=3 name=(null) inode=17925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=4 name=(null) inode=17924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=5 name=(null) inode=17926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=6 name=(null) inode=17924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=7 name=(null) inode=17927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=8 name=(null) inode=17927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=9 name=(null) inode=17928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=10 name=(null) inode=17927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=11 name=(null) inode=17929 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=12 name=(null) inode=17927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=13 name=(null) inode=17930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=14 name=(null) inode=17927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=15 name=(null) inode=17931 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=16 name=(null) inode=17927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=17 name=(null) inode=17932 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=18 name=(null) inode=17924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=19 name=(null) inode=17933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=20 name=(null) inode=17933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=21 name=(null) inode=17934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=22 name=(null) inode=17933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=23 name=(null) inode=17935 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=24 name=(null) inode=17933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=25 name=(null) inode=17936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=26 name=(null) inode=17933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=27 name=(null) inode=17937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=28 name=(null) inode=17933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=29 name=(null) inode=17938 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=30 name=(null) inode=17924 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:32:08.246000 audit: PATH item=31 name=(null) inode=17939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=32 name=(null) inode=17939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=33 name=(null) inode=17940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=34 name=(null) inode=17939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=35 name=(null) inode=17941 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=36 name=(null) inode=17939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=37 name=(null) inode=17942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=38 name=(null) inode=17939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=39 name=(null) inode=17943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=40 
name=(null) inode=17939 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PATH item=41 name=(null) inode=17944 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:32:08.246000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:32:08.426322 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Dec 13 02:32:08.447978 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Dec 13 02:32:08.448058 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Dec 13 02:32:08.537252 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Dec 13 02:32:08.537417 kernel: i2c i2c-0: 1/4 memory slots populated (from DMI) Dec 13 02:32:08.584007 kernel: ipmi_si: IPMI System Interface driver Dec 13 02:32:08.584044 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Dec 13 02:32:08.629323 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Dec 13 02:32:08.629362 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Dec 13 02:32:08.629379 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Dec 13 02:32:08.745942 kernel: iTCO_vendor_support: vendor-support=0 Dec 13 02:32:08.745986 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Dec 13 02:32:08.746072 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Dec 13 02:32:08.746132 kernel: ipmi_si: Adding ACPI-specified kcs state machine Dec 13 02:32:08.746144 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Dec 13 02:32:08.741412 systemd-networkd[1404]: bond0: netdev ready Dec 13 02:32:08.743849 systemd-networkd[1404]: lo: Link UP Dec 13 02:32:08.743851 systemd-networkd[1404]: lo: Gained carrier Dec 13 
02:32:08.744392 systemd-networkd[1404]: Enumeration completed Dec 13 02:32:08.744471 systemd[1]: Started systemd-networkd.service. Dec 13 02:32:08.744703 systemd-networkd[1404]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Dec 13 02:32:08.746971 systemd-networkd[1404]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2d.network. Dec 13 02:32:08.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:08.771319 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Dec 13 02:32:08.819178 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Dec 13 02:32:08.819267 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Dec 13 02:32:08.888319 kernel: intel_rapl_common: Found RAPL domain package Dec 13 02:32:08.888361 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Dec 13 02:32:08.888445 kernel: intel_rapl_common: Found RAPL domain core Dec 13 02:32:08.888457 kernel: intel_rapl_common: Found RAPL domain dram Dec 13 02:32:09.052332 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Dec 13 02:32:09.074318 kernel: ipmi_ssif: IPMI SSIF Interface driver Dec 13 02:32:09.079606 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:32:09.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.088209 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:32:09.104467 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 02:32:09.138764 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:32:09.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.146448 systemd[1]: Reached target cryptsetup.target. Dec 13 02:32:09.154982 systemd[1]: Starting lvm2-activation.service... Dec 13 02:32:09.156746 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:32:09.193735 systemd[1]: Finished lvm2-activation.service. Dec 13 02:32:09.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.201504 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:32:09.209422 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:32:09.209436 systemd[1]: Reached target local-fs.target. Dec 13 02:32:09.217417 systemd[1]: Reached target machines.target. Dec 13 02:32:09.226047 systemd[1]: Starting ldconfig.service... Dec 13 02:32:09.233266 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:32:09.233290 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:32:09.233908 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:32:09.240896 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:32:09.251046 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:32:09.251857 systemd[1]: Starting systemd-sysext.service... 
Dec 13 02:32:09.252048 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1440 (bootctl) Dec 13 02:32:09.252676 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:32:09.270837 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:32:09.273593 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:32:09.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.273783 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:32:09.273940 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:32:09.323347 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 02:32:09.352347 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 02:32:09.379359 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Dec 13 02:32:09.379384 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 02:32:09.402355 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Dec 13 02:32:09.422234 systemd-networkd[1404]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. Dec 13 02:32:09.422318 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:32:09.442862 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:32:09.443312 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:32:09.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:32:09.468645 systemd-fsck[1454]: fsck.fat 4.2 (2021-01-31) Dec 13 02:32:09.468645 systemd-fsck[1454]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 02:32:09.469443 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:32:09.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.486184 systemd[1]: Mounting boot.mount... Dec 13 02:32:09.493321 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 02:32:09.505034 systemd[1]: Mounted boot.mount. Dec 13 02:32:09.509572 (sd-sysext)[1459]: Using extensions 'kubernetes'. Dec 13 02:32:09.509751 (sd-sysext)[1459]: Merged extensions into '/usr'. Dec 13 02:32:09.526377 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 02:32:09.570680 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:32:09.587368 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 02:32:09.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.603820 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:09.605731 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:32:09.614334 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Dec 13 02:32:09.616195 systemd-networkd[1404]: bond0: Link UP Dec 13 02:32:09.629702 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:32:09.631063 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 02:32:09.639331 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Dec 13 02:32:09.639396 kernel: bond0: active interface up! Dec 13 02:32:09.672475 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:32:09.676981 systemd-networkd[1404]: enp1s0f1np1: Link UP Dec 13 02:32:09.677230 systemd-networkd[1404]: enp1s0f1np1: Gained carrier Dec 13 02:32:09.677322 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Dec 13 02:32:09.678844 systemd-networkd[1404]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. Dec 13 02:32:09.685105 systemd[1]: Starting modprobe@loop.service... Dec 13 02:32:09.692411 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:32:09.692501 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:32:09.692604 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:09.694454 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:32:09.702517 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:32:09.702599 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:32:09.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.711552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 02:32:09.711633 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:32:09.714831 ldconfig[1438]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:32:09.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.720635 systemd[1]: Finished ldconfig.service. Dec 13 02:32:09.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.728566 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:32:09.728643 systemd[1]: Finished modprobe@loop.service. Dec 13 02:32:09.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.737631 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:32:09.737691 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:32:09.738209 systemd[1]: Finished systemd-sysext.service. 
Dec 13 02:32:09.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.748105 systemd[1]: Starting ensure-sysext.service... Dec 13 02:32:09.755923 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:32:09.761627 systemd-tmpfiles[1478]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:32:09.762715 systemd-tmpfiles[1478]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:32:09.763746 systemd-tmpfiles[1478]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:32:09.763764 systemd-networkd[1404]: bond0: Gained carrier Dec 13 02:32:09.763919 systemd-networkd[1404]: enp1s0f0np0: Link UP Dec 13 02:32:09.764058 systemd-networkd[1404]: enp1s0f0np0: Gained carrier Dec 13 02:32:09.766557 systemd[1]: Reloading. 
Dec 13 02:32:09.777665 systemd-networkd[1404]: enp1s0f1np1: Link DOWN Dec 13 02:32:09.777668 systemd-networkd[1404]: enp1s0f1np1: Lost carrier Dec 13 02:32:09.787529 /usr/lib/systemd/system-generators/torcx-generator[1497]: time="2024-12-13T02:32:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:32:09.787544 /usr/lib/systemd/system-generators/torcx-generator[1497]: time="2024-12-13T02:32:09Z" level=info msg="torcx already run" Dec 13 02:32:09.805324 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 02:32:09.827321 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 02:32:09.849324 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 02:32:09.852922 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:32:09.852930 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:32:09.865369 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:32:09.870365 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 02:32:09.892359 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 02:32:09.905962 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Dec 13 02:32:09.912373 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 02:32:09.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:32:09.929054 systemd[1]: Starting audit-rules.service... Dec 13 02:32:09.934318 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 02:32:09.949944 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:32:09.955319 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 02:32:09.966000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:32:09.966000 audit[1583]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd075bfd30 a2=420 a3=0 items=0 ppid=1566 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:32:09.966000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:32:09.968369 augenrules[1583]: No rules Dec 13 02:32:09.973137 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:32:09.975394 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 02:32:09.975423 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 02:32:09.990333 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 02:32:09.994111 systemd-networkd[1404]: enp1s0f1np1: Link UP Dec 13 02:32:09.994267 systemd-networkd[1404]: enp1s0f1np1: Gained carrier Dec 13 02:32:10.024184 systemd[1]: Starting systemd-resolved.service... 
Dec 13 02:32:10.024735 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Dec 13 02:32:10.032133 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:32:10.038983 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:32:10.045754 systemd[1]: Finished audit-rules.service. Dec 13 02:32:10.052555 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:32:10.060544 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:32:10.073307 systemd[1]: Starting systemd-update-done.service... Dec 13 02:32:10.081360 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:32:10.082224 systemd[1]: Finished systemd-update-done.service. Dec 13 02:32:10.093203 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:32:10.103404 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:32:10.104174 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:32:10.107457 systemd-resolved[1590]: Positive Trust Anchors: Dec 13 02:32:10.107462 systemd-resolved[1590]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:32:10.107481 systemd-resolved[1590]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:32:10.111399 systemd-resolved[1590]: Using system hostname 'ci-3510.3.6-a-01e29b9675'. Dec 13 02:32:10.112023 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:32:10.119967 systemd[1]: Starting modprobe@loop.service... 
Dec 13 02:32:10.126431 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:32:10.126500 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:32:10.126559 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:32:10.126976 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:32:10.143013 systemd[1]: Started systemd-resolved.service. Dec 13 02:32:10.145318 kernel: bond0: (slave enp1s0f1np1): link status up again after 100 ms Dec 13 02:32:10.161633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:32:10.161714 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:32:10.165317 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Dec 13 02:32:10.173642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:32:10.173716 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:32:10.181589 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:32:10.181667 systemd[1]: Finished modprobe@loop.service. Dec 13 02:32:10.190571 systemd[1]: Reached target network.target. Dec 13 02:32:10.199395 systemd[1]: Reached target nss-lookup.target. Dec 13 02:32:10.208392 systemd[1]: Reached target time-set.target. Dec 13 02:32:10.216382 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:32:10.216476 systemd[1]: Reached target sysinit.target. Dec 13 02:32:10.224433 systemd[1]: Started motdgen.path. Dec 13 02:32:10.231409 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:32:10.241469 systemd[1]: Started logrotate.timer. 
Dec 13 02:32:10.248442 systemd[1]: Started mdadm.timer. Dec 13 02:32:10.255394 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:32:10.263371 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:32:10.263454 systemd[1]: Reached target paths.target. Dec 13 02:32:10.270387 systemd[1]: Reached target timers.target. Dec 13 02:32:10.277541 systemd[1]: Listening on dbus.socket. Dec 13 02:32:10.285007 systemd[1]: Starting docker.socket... Dec 13 02:32:10.292137 systemd[1]: Listening on sshd.socket. Dec 13 02:32:10.299420 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:32:10.299509 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:32:10.301391 systemd[1]: Listening on docker.socket. Dec 13 02:32:10.308736 systemd[1]: Reached target sockets.target. Dec 13 02:32:10.316393 systemd[1]: Reached target basic.target. Dec 13 02:32:10.323432 systemd[1]: System is tainted: cgroupsv1 Dec 13 02:32:10.323486 systemd[1]: Stopped target timers.target. Dec 13 02:32:10.330343 systemd[1]: Stopping timers.target... Dec 13 02:32:10.337524 systemd[1]: Reached target timers.target. Dec 13 02:32:10.344375 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:10.344450 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:32:10.344532 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:32:10.345203 systemd[1]: Starting containerd.service... Dec 13 02:32:10.352841 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:32:10.361975 systemd[1]: Starting coreos-metadata.service... 
Dec 13 02:32:10.369003 systemd[1]: Starting dbus.service... Dec 13 02:32:10.374998 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:32:10.379883 jq[1616]: false Dec 13 02:32:10.382024 systemd[1]: Starting extend-filesystems.service... Dec 13 02:32:10.383823 coreos-metadata[1609]: Dec 13 02:32:10.383 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 02:32:10.388951 dbus-daemon[1615]: [system] SELinux support is enabled Dec 13 02:32:10.389336 systemd[1]: Starting motdgen.service... Dec 13 02:32:10.390920 extend-filesystems[1618]: Found loop1 Dec 13 02:32:10.390920 extend-filesystems[1618]: Found sda Dec 13 02:32:10.397607 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:32:10.418216 coreos-metadata[1612]: Dec 13 02:32:10.393 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 02:32:10.418401 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Dec 13 02:32:10.418422 extend-filesystems[1618]: Found sda1 Dec 13 02:32:10.418422 extend-filesystems[1618]: Found sda2 Dec 13 02:32:10.418422 extend-filesystems[1618]: Found sda3 Dec 13 02:32:10.418422 extend-filesystems[1618]: Found usr Dec 13 02:32:10.418422 extend-filesystems[1618]: Found sda4 Dec 13 02:32:10.418422 extend-filesystems[1618]: Found sda6 Dec 13 02:32:10.418422 extend-filesystems[1618]: Found sda7 Dec 13 02:32:10.418422 extend-filesystems[1618]: Found sda9 Dec 13 02:32:10.418422 extend-filesystems[1618]: Checking size of /dev/sda9 Dec 13 02:32:10.418422 extend-filesystems[1618]: Resized partition /dev/sda9 Dec 13 02:32:10.527547 extend-filesystems[1634]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:32:10.431174 systemd[1]: Starting sshd-keygen.service... Dec 13 02:32:10.445533 systemd[1]: Starting systemd-logind.service... 
Dec 13 02:32:10.463355 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:32:10.549642 update_engine[1648]: I1213 02:32:10.513710 1648 main.cc:92] Flatcar Update Engine starting Dec 13 02:32:10.549642 update_engine[1648]: I1213 02:32:10.517125 1648 update_check_scheduler.cc:74] Next update check in 11m16s Dec 13 02:32:10.464078 systemd[1]: Starting tcsd.service... Dec 13 02:32:10.549824 jq[1649]: true Dec 13 02:32:10.471068 systemd[1]: Starting update-engine.service... Dec 13 02:32:10.472405 systemd-logind[1646]: Watching system buttons on /dev/input/event3 (Power Button) Dec 13 02:32:10.472414 systemd-logind[1646]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:32:10.472423 systemd-logind[1646]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Dec 13 02:32:10.472529 systemd-logind[1646]: New seat seat0. Dec 13 02:32:10.483155 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:32:10.505388 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:32:10.506494 systemd[1]: Started dbus.service. Dec 13 02:32:10.521235 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:32:10.521367 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:32:10.521592 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:32:10.521701 systemd[1]: Finished motdgen.service. Dec 13 02:32:10.541598 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:32:10.541714 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 02:32:10.560197 jq[1653]: true
Dec 13 02:32:10.561793 dbus-daemon[1615]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 02:32:10.564940 systemd[1]: tcsd.service: Skipped due to 'exec-condition'.
Dec 13 02:32:10.565077 systemd[1]: Condition check resulted in tcsd.service being skipped.
Dec 13 02:32:10.567948 systemd[1]: Finished ensure-sysext.service.
Dec 13 02:32:10.570174 env[1654]: time="2024-12-13T02:32:10.570128542Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 02:32:10.576523 systemd[1]: Started update-engine.service.
Dec 13 02:32:10.578608 env[1654]: time="2024-12-13T02:32:10.578579471Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 02:32:10.580209 env[1654]: time="2024-12-13T02:32:10.579949059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:32:10.580674 env[1654]: time="2024-12-13T02:32:10.580554446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:32:10.580674 env[1654]: time="2024-12-13T02:32:10.580569069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:32:10.582714 env[1654]: time="2024-12-13T02:32:10.582673139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:32:10.582714 env[1654]: time="2024-12-13T02:32:10.582685690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 02:32:10.582714 env[1654]: time="2024-12-13T02:32:10.582693653Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 02:32:10.582714 env[1654]: time="2024-12-13T02:32:10.582699164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 02:32:10.582796 env[1654]: time="2024-12-13T02:32:10.582738148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:32:10.582907 env[1654]: time="2024-12-13T02:32:10.582869924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:32:10.582989 env[1654]: time="2024-12-13T02:32:10.582951063Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:32:10.582989 env[1654]: time="2024-12-13T02:32:10.582960620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 02:32:10.585134 env[1654]: time="2024-12-13T02:32:10.585081674Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 02:32:10.585134 env[1654]: time="2024-12-13T02:32:10.585104151Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 02:32:10.587803 bash[1683]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:32:10.589582 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 02:32:10.592552 env[1654]: time="2024-12-13T02:32:10.592512744Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 02:32:10.592552 env[1654]: time="2024-12-13T02:32:10.592527877Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 02:32:10.592552 env[1654]: time="2024-12-13T02:32:10.592535717Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 02:32:10.592552 env[1654]: time="2024-12-13T02:32:10.592552992Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 02:32:10.592664 env[1654]: time="2024-12-13T02:32:10.592561265Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 02:32:10.592664 env[1654]: time="2024-12-13T02:32:10.592569245Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 02:32:10.592664 env[1654]: time="2024-12-13T02:32:10.592575912Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 02:32:10.592664 env[1654]: time="2024-12-13T02:32:10.592582736Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 02:32:10.592664 env[1654]: time="2024-12-13T02:32:10.592590071Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 02:32:10.592664 env[1654]: time="2024-12-13T02:32:10.592597370Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 02:32:10.592664 env[1654]: time="2024-12-13T02:32:10.592605265Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 02:32:10.592664 env[1654]: time="2024-12-13T02:32:10.592614805Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 02:32:10.592664 env[1654]: time="2024-12-13T02:32:10.592661285Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 02:32:10.592875 env[1654]: time="2024-12-13T02:32:10.592804887Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 02:32:10.593268 env[1654]: time="2024-12-13T02:32:10.593255817Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 02:32:10.593300 env[1654]: time="2024-12-13T02:32:10.593276064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593300 env[1654]: time="2024-12-13T02:32:10.593284428Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 02:32:10.593349 env[1654]: time="2024-12-13T02:32:10.593311182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593349 env[1654]: time="2024-12-13T02:32:10.593326509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593349 env[1654]: time="2024-12-13T02:32:10.593339936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593349 env[1654]: time="2024-12-13T02:32:10.593347176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593408 env[1654]: time="2024-12-13T02:32:10.593354467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593408 env[1654]: time="2024-12-13T02:32:10.593361356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593408 env[1654]: time="2024-12-13T02:32:10.593367630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593408 env[1654]: time="2024-12-13T02:32:10.593373931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593408 env[1654]: time="2024-12-13T02:32:10.593382987Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 02:32:10.593490 env[1654]: time="2024-12-13T02:32:10.593453450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593490 env[1654]: time="2024-12-13T02:32:10.593462381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593490 env[1654]: time="2024-12-13T02:32:10.593469095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593490 env[1654]: time="2024-12-13T02:32:10.593475382Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 02:32:10.593490 env[1654]: time="2024-12-13T02:32:10.593483215Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 02:32:10.593563 env[1654]: time="2024-12-13T02:32:10.593493195Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 02:32:10.593563 env[1654]: time="2024-12-13T02:32:10.593504084Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 02:32:10.593563 env[1654]: time="2024-12-13T02:32:10.593526033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 02:32:10.593742 env[1654]: time="2024-12-13T02:32:10.593629296Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 02:32:10.593742 env[1654]: time="2024-12-13T02:32:10.593662107Z" level=info msg="Connect containerd service"
Dec 13 02:32:10.593742 env[1654]: time="2024-12-13T02:32:10.593680317Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 02:32:10.596552 env[1654]: time="2024-12-13T02:32:10.593930964Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:32:10.596552 env[1654]: time="2024-12-13T02:32:10.594015658Z" level=info msg="Start subscribing containerd event"
Dec 13 02:32:10.596552 env[1654]: time="2024-12-13T02:32:10.594047520Z" level=info msg="Start recovering state"
Dec 13 02:32:10.596552 env[1654]: time="2024-12-13T02:32:10.594084095Z" level=info msg="Start event monitor"
Dec 13 02:32:10.596552 env[1654]: time="2024-12-13T02:32:10.594088641Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 02:32:10.596552 env[1654]: time="2024-12-13T02:32:10.594096424Z" level=info msg="Start snapshots syncer"
Dec 13 02:32:10.596552 env[1654]: time="2024-12-13T02:32:10.594102171Z" level=info msg="Start cni network conf syncer for default"
Dec 13 02:32:10.596552 env[1654]: time="2024-12-13T02:32:10.594106737Z" level=info msg="Start streaming server"
Dec 13 02:32:10.596552 env[1654]: time="2024-12-13T02:32:10.594121501Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 02:32:10.596552 env[1654]: time="2024-12-13T02:32:10.594161361Z" level=info msg="containerd successfully booted in 0.024376s"
Dec 13 02:32:10.599548 systemd[1]: Started containerd.service.
Dec 13 02:32:10.607238 systemd[1]: Started systemd-logind.service.
Dec 13 02:32:10.616751 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 02:32:10.617608 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 02:32:10.619214 jq[1696]: false
Dec 13 02:32:10.624402 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 02:32:10.625333 systemd[1]: Started locksmithd.service.
Dec 13 02:32:10.633001 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:32:10.639979 systemd[1]: Starting motdgen.service...
Dec 13 02:32:10.647111 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 02:32:10.653375 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 02:32:10.653480 systemd[1]: Reached target system-config.target.
Dec 13 02:32:10.662116 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 02:32:10.670372 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 02:32:10.670463 systemd[1]: Reached target user-config.target.
Dec 13 02:32:10.680303 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 02:32:10.680423 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 02:32:10.680642 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:32:10.680717 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:32:10.685342 locksmithd[1698]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 02:32:10.688572 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 02:32:10.688689 systemd[1]: Finished motdgen.service.
Dec 13 02:32:10.695539 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 02:32:10.695650 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 02:32:10.697796 sshd_keygen[1645]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 02:32:10.709398 systemd[1]: Finished sshd-keygen.service.
Dec 13 02:32:10.718331 systemd[1]: Starting issuegen.service...
Dec 13 02:32:10.726616 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 02:32:10.726723 systemd[1]: Finished issuegen.service.
Dec 13 02:32:10.735222 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 02:32:10.744636 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 02:32:10.753156 systemd[1]: Started getty@tty1.service.
Dec 13 02:32:10.761105 systemd[1]: Started serial-getty@ttyS1.service.
Dec 13 02:32:10.769492 systemd[1]: Reached target getty.target.
Dec 13 02:32:10.839399 systemd-networkd[1404]: bond0: Gained IPv6LL
Dec 13 02:32:10.839619 systemd-timesyncd[1591]: Network configuration changed, trying to establish connection.
Dec 13 02:32:10.901319 kernel: EXT4-fs (sda9): resized filesystem to 116605649
Dec 13 02:32:10.930331 extend-filesystems[1634]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 13 02:32:10.930331 extend-filesystems[1634]: old_desc_blocks = 1, new_desc_blocks = 56
Dec 13 02:32:10.930331 extend-filesystems[1634]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long.
Dec 13 02:32:10.968381 extend-filesystems[1618]: Resized filesystem in /dev/sda9
Dec 13 02:32:10.968381 extend-filesystems[1618]: Found sdb
Dec 13 02:32:10.930772 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 02:32:10.930900 systemd[1]: Finished extend-filesystems.service.
Dec 13 02:32:11.351640 systemd-timesyncd[1591]: Network configuration changed, trying to establish connection.
Dec 13 02:32:11.351772 systemd-timesyncd[1591]: Network configuration changed, trying to establish connection.
Dec 13 02:32:11.352673 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 02:32:11.362720 systemd[1]: Reached target network-online.target.
Dec 13 02:32:11.371481 systemd[1]: Starting kubelet.service...
Dec 13 02:32:12.036504 systemd[1]: Started kubelet.service.
Dec 13 02:32:12.465383 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0
Dec 13 02:32:12.633256 kubelet[1748]: E1213 02:32:12.633168 1748 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:32:12.634488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:32:12.634581 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:32:15.781505 login[1736]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 02:32:15.789865 systemd-logind[1646]: New session 1 of user core.
Dec 13 02:32:15.789950 login[1735]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 02:32:15.790325 systemd[1]: Created slice user-500.slice.
Dec 13 02:32:15.790843 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 02:32:15.793085 systemd-logind[1646]: New session 2 of user core.
Dec 13 02:32:15.796667 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 02:32:15.797304 systemd[1]: Starting user@500.service...
Dec 13 02:32:15.799448 (systemd)[1773]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:32:15.865874 systemd[1773]: Queued start job for default target default.target.
Dec 13 02:32:15.865975 systemd[1773]: Reached target paths.target.
Dec 13 02:32:15.865986 systemd[1773]: Reached target sockets.target.
Dec 13 02:32:15.865995 systemd[1773]: Reached target timers.target.
Dec 13 02:32:15.866002 systemd[1773]: Reached target basic.target.
Dec 13 02:32:15.866021 systemd[1773]: Reached target default.target.
Dec 13 02:32:15.866035 systemd[1773]: Startup finished in 63ms.
Dec 13 02:32:15.866095 systemd[1]: Started user@500.service.
Dec 13 02:32:15.866632 systemd[1]: Started session-1.scope.
Dec 13 02:32:15.866959 systemd[1]: Started session-2.scope.
Dec 13 02:32:16.536341 coreos-metadata[1612]: Dec 13 02:32:16.536 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Dec 13 02:32:16.537159 coreos-metadata[1609]: Dec 13 02:32:16.536 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known
Dec 13 02:32:17.536643 coreos-metadata[1612]: Dec 13 02:32:17.536 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Dec 13 02:32:17.537547 coreos-metadata[1609]: Dec 13 02:32:17.536 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
Dec 13 02:32:17.733564 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2
Dec 13 02:32:17.733713 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2
Dec 13 02:32:18.590061 systemd[1]: Created slice system-sshd.slice.
Dec 13 02:32:18.590744 systemd[1]: Started sshd@0-139.178.70.191:22-139.178.68.195:49134.service.
Dec 13 02:32:18.631688 sshd[1795]: Accepted publickey for core from 139.178.68.195 port 49134 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 02:32:18.632803 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:32:18.636561 systemd-logind[1646]: New session 3 of user core.
Dec 13 02:32:18.637441 systemd[1]: Started session-3.scope.
Dec 13 02:32:18.691778 systemd[1]: Started sshd@1-139.178.70.191:22-139.178.68.195:49140.service.
Dec 13 02:32:18.720233 sshd[1800]: Accepted publickey for core from 139.178.68.195 port 49140 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 02:32:18.720974 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:32:18.723444 systemd-logind[1646]: New session 4 of user core.
Dec 13 02:32:18.723868 systemd[1]: Started session-4.scope.
Dec 13 02:32:18.775643 sshd[1800]: pam_unix(sshd:session): session closed for user core
Dec 13 02:32:18.777239 systemd[1]: Started sshd@2-139.178.70.191:22-139.178.68.195:49142.service.
Dec 13 02:32:18.777517 systemd[1]: sshd@1-139.178.70.191:22-139.178.68.195:49140.service: Deactivated successfully.
Dec 13 02:32:18.778100 systemd-logind[1646]: Session 4 logged out. Waiting for processes to exit.
Dec 13 02:32:18.778113 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 02:32:18.778676 systemd-logind[1646]: Removed session 4.
Dec 13 02:32:18.807749 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 49142 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 02:32:18.808857 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:32:18.812399 systemd-logind[1646]: New session 5 of user core.
Dec 13 02:32:18.813243 systemd[1]: Started session-5.scope.
Dec 13 02:32:18.869592 sshd[1806]: pam_unix(sshd:session): session closed for user core
Dec 13 02:32:18.870884 systemd[1]: sshd@2-139.178.70.191:22-139.178.68.195:49142.service: Deactivated successfully.
Dec 13 02:32:18.871388 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 02:32:18.871417 systemd-logind[1646]: Session 5 logged out. Waiting for processes to exit.
Dec 13 02:32:18.872030 systemd-logind[1646]: Removed session 5.
Dec 13 02:32:19.683488 coreos-metadata[1612]: Dec 13 02:32:19.683 INFO Fetch successful
Dec 13 02:32:19.694841 coreos-metadata[1609]: Dec 13 02:32:19.694 INFO Fetch successful
Dec 13 02:32:19.763125 systemd[1]: Finished coreos-metadata.service.
Dec 13 02:32:19.764059 systemd[1]: Started packet-phone-home.service.
Dec 13 02:32:19.767328 unknown[1609]: wrote ssh authorized keys file for user: core
Dec 13 02:32:19.787132 curl[1819]: % Total % Received % Xferd Average Speed Time Time Time Current
Dec 13 02:32:19.787132 curl[1819]: Dload Upload Total Spent Left Speed
Dec 13 02:32:19.798034 update-ssh-keys[1821]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:32:19.798301 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 02:32:19.798481 systemd[1]: Reached target multi-user.target.
Dec 13 02:32:19.799271 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 02:32:19.803329 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 02:32:19.803444 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 02:32:19.803533 systemd[1]: Startup finished in 25.590s (kernel) + 16.287s (userspace) = 41.878s.
Dec 13 02:32:20.110607 curl[1819]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Dec 13 02:32:20.112954 systemd[1]: packet-phone-home.service: Deactivated successfully.
Dec 13 02:32:22.886610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:32:22.887145 systemd[1]: Stopped kubelet.service.
Dec 13 02:32:22.890450 systemd[1]: Starting kubelet.service...
Dec 13 02:32:23.092460 systemd[1]: Started kubelet.service.
Dec 13 02:32:23.138770 kubelet[1835]: E1213 02:32:23.138662 1835 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:32:23.141252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:32:23.141350 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:32:28.876541 systemd[1]: Started sshd@3-139.178.70.191:22-139.178.68.195:51566.service.
Dec 13 02:32:28.907104 sshd[1853]: Accepted publickey for core from 139.178.68.195 port 51566 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 02:32:28.907786 sshd[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:32:28.910172 systemd-logind[1646]: New session 6 of user core.
Dec 13 02:32:28.910662 systemd[1]: Started session-6.scope.
Dec 13 02:32:28.963268 sshd[1853]: pam_unix(sshd:session): session closed for user core
Dec 13 02:32:28.964718 systemd[1]: Started sshd@4-139.178.70.191:22-139.178.68.195:51572.service.
Dec 13 02:32:28.965134 systemd[1]: sshd@3-139.178.70.191:22-139.178.68.195:51566.service: Deactivated successfully.
Dec 13 02:32:28.965596 systemd-logind[1646]: Session 6 logged out. Waiting for processes to exit.
Dec 13 02:32:28.965636 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 02:32:28.966139 systemd-logind[1646]: Removed session 6.
Dec 13 02:32:29.006593 sshd[1859]: Accepted publickey for core from 139.178.68.195 port 51572 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 02:32:29.007470 sshd[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:32:29.010523 systemd-logind[1646]: New session 7 of user core.
Dec 13 02:32:29.011094 systemd[1]: Started session-7.scope.
Dec 13 02:32:29.064773 sshd[1859]: pam_unix(sshd:session): session closed for user core
Dec 13 02:32:29.071154 systemd[1]: Started sshd@5-139.178.70.191:22-139.178.68.195:51578.service.
Dec 13 02:32:29.072781 systemd[1]: sshd@4-139.178.70.191:22-139.178.68.195:51572.service: Deactivated successfully.
Dec 13 02:32:29.075296 systemd-logind[1646]: Session 7 logged out. Waiting for processes to exit.
Dec 13 02:32:29.075367 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 02:32:29.076576 systemd-logind[1646]: Removed session 7.
Dec 13 02:32:29.104562 sshd[1866]: Accepted publickey for core from 139.178.68.195 port 51578 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 02:32:29.105227 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:32:29.107728 systemd-logind[1646]: New session 8 of user core.
Dec 13 02:32:29.108067 systemd[1]: Started session-8.scope.
Dec 13 02:32:29.159545 sshd[1866]: pam_unix(sshd:session): session closed for user core
Dec 13 02:32:29.162044 systemd[1]: Started sshd@6-139.178.70.191:22-139.178.68.195:51586.service.
Dec 13 02:32:29.162683 systemd[1]: sshd@5-139.178.70.191:22-139.178.68.195:51578.service: Deactivated successfully.
Dec 13 02:32:29.163692 systemd-logind[1646]: Session 8 logged out. Waiting for processes to exit.
Dec 13 02:32:29.163721 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 02:32:29.164689 systemd-logind[1646]: Removed session 8.
Dec 13 02:32:29.216704 sshd[1873]: Accepted publickey for core from 139.178.68.195 port 51586 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U
Dec 13 02:32:29.217974 sshd[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:32:29.222144 systemd-logind[1646]: New session 9 of user core.
Dec 13 02:32:29.222952 systemd[1]: Started session-9.scope.
Dec 13 02:32:29.304086 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 02:32:29.304777 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 02:32:30.283934 systemd[1]: Stopped kubelet.service.
Dec 13 02:32:30.289609 systemd[1]: Starting kubelet.service...
Dec 13 02:32:30.306800 systemd[1]: Reloading.
Dec 13 02:32:30.335875 /usr/lib/systemd/system-generators/torcx-generator[1963]: time="2024-12-13T02:32:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:32:30.335890 /usr/lib/systemd/system-generators/torcx-generator[1963]: time="2024-12-13T02:32:30Z" level=info msg="torcx already run"
Dec 13 02:32:30.399501 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:32:30.399510 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:32:30.412646 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:32:30.467179 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 02:32:30.467357 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 02:32:30.467830 systemd[1]: Stopped kubelet.service.
Dec 13 02:32:30.470883 systemd[1]: Starting kubelet.service...
Dec 13 02:32:30.659035 systemd[1]: Started kubelet.service.
Dec 13 02:32:30.684912 kubelet[2038]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:32:30.684912 kubelet[2038]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 02:32:30.684912 kubelet[2038]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:32:30.685144 kubelet[2038]: I1213 02:32:30.684906 2038 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 02:32:31.013482 kubelet[2038]: I1213 02:32:31.013406 2038 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 02:32:31.013482 kubelet[2038]: I1213 02:32:31.013421 2038 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 02:32:31.013570 kubelet[2038]: I1213 02:32:31.013564 2038 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 02:32:31.062889 kubelet[2038]: I1213 02:32:31.062795 2038 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 02:32:31.129890 kubelet[2038]: I1213 02:32:31.129796 2038 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 02:32:31.134193 kubelet[2038]: I1213 02:32:31.134113 2038 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 02:32:31.134758 kubelet[2038]: I1213 02:32:31.134684 2038 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 02:32:31.134758 kubelet[2038]: I1213 02:32:31.134747 2038 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 02:32:31.135243 kubelet[2038]: I1213 02:32:31.134780 2038 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 02:32:31.135243 kubelet[2038]: I1213 02:32:31.135000 2038 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:32:31.135243 kubelet[2038]: I1213 02:32:31.135207 2038 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 02:32:31.135445 kubelet[2038]: I1213 02:32:31.135257 2038 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 02:32:31.135445 kubelet[2038]: I1213 02:32:31.135299 2038 kubelet.go:312] "Adding apiserver pod source"
Dec 13 02:32:31.135445 kubelet[2038]: I1213 02:32:31.135345 2038 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 02:32:31.135635 kubelet[2038]: E1213 02:32:31.135522 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:32:31.135635 kubelet[2038]: E1213 02:32:31.135618 2038 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:32:31.137814 kubelet[2038]: I1213 02:32:31.137788 2038 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 02:32:31.143411 kubelet[2038]: I1213 02:32:31.143359 2038 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 02:32:31.145084 kubelet[2038]: W1213 02:32:31.145034 2038 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 02:32:31.145682 kubelet[2038]: I1213 02:32:31.145639 2038 server.go:1256] "Started kubelet" Dec 13 02:32:31.145767 kubelet[2038]: I1213 02:32:31.145742 2038 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:32:31.145815 kubelet[2038]: I1213 02:32:31.145762 2038 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:32:31.146093 kubelet[2038]: I1213 02:32:31.146044 2038 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:32:31.155878 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 02:32:31.155946 kubelet[2038]: I1213 02:32:31.155934 2038 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:32:31.156039 kubelet[2038]: I1213 02:32:31.155934 2038 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:32:31.156185 kubelet[2038]: I1213 02:32:31.156141 2038 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:32:31.156258 kubelet[2038]: I1213 02:32:31.156203 2038 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:32:31.156326 kubelet[2038]: I1213 02:32:31.156259 2038 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:32:31.189809 kubelet[2038]: I1213 02:32:31.189758 2038 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:32:31.189946 kubelet[2038]: I1213 02:32:31.189897 2038 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:32:31.190875 kubelet[2038]: E1213 02:32:31.190857 2038 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:32:31.191753 kubelet[2038]: I1213 02:32:31.191739 2038 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:32:31.192676 kubelet[2038]: E1213 02:32:31.192663 2038 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.21\" not found" node="10.67.80.21" Dec 13 02:32:31.204871 kubelet[2038]: I1213 02:32:31.204856 2038 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:32:31.204871 kubelet[2038]: I1213 02:32:31.204869 2038 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:32:31.204990 kubelet[2038]: I1213 02:32:31.204881 2038 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:32:31.205755 kubelet[2038]: I1213 02:32:31.205714 2038 policy_none.go:49] "None policy: Start" Dec 13 02:32:31.206136 kubelet[2038]: I1213 02:32:31.206124 2038 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:32:31.206185 kubelet[2038]: I1213 02:32:31.206142 2038 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:32:31.209006 kubelet[2038]: I1213 02:32:31.208996 2038 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:32:31.209166 kubelet[2038]: I1213 02:32:31.209157 2038 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:32:31.209745 kubelet[2038]: E1213 02:32:31.209732 2038 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.21\" not found" Dec 13 02:32:31.260008 kubelet[2038]: I1213 02:32:31.259117 2038 kubelet_node_status.go:73] "Attempting to register node" node="10.67.80.21" Dec 13 02:32:31.265766 kubelet[2038]: I1213 02:32:31.265578 2038 kubelet_node_status.go:76] "Successfully registered node" node="10.67.80.21" Dec 13 02:32:31.340867 kubelet[2038]: 
I1213 02:32:31.340825 2038 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:32:31.341403 kubelet[2038]: I1213 02:32:31.341393 2038 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:32:31.341456 kubelet[2038]: I1213 02:32:31.341410 2038 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:32:31.341456 kubelet[2038]: I1213 02:32:31.341421 2038 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:32:31.341456 kubelet[2038]: E1213 02:32:31.341445 2038 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 02:32:31.378535 kubelet[2038]: I1213 02:32:31.378482 2038 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 02:32:31.379231 env[1654]: time="2024-12-13T02:32:31.379148454Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 02:32:31.380204 kubelet[2038]: I1213 02:32:31.379670 2038 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 02:32:32.015435 kubelet[2038]: I1213 02:32:32.015346 2038 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 02:32:32.016415 kubelet[2038]: W1213 02:32:32.015747 2038 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:32:32.016415 kubelet[2038]: W1213 02:32:32.015771 2038 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:32:32.016415 kubelet[2038]: W1213 02:32:32.015845 2038 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:32:32.136667 kubelet[2038]: I1213 02:32:32.136556 2038 apiserver.go:52] "Watching apiserver" Dec 13 02:32:32.136930 kubelet[2038]: E1213 02:32:32.136669 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:32.146688 kubelet[2038]: I1213 02:32:32.146626 2038 topology_manager.go:215] "Topology Admit Handler" podUID="210b172a-971e-4c55-8c50-49ed39a42bcb" podNamespace="kube-system" podName="kube-proxy-572c9" Dec 13 02:32:32.146967 kubelet[2038]: I1213 02:32:32.146825 2038 topology_manager.go:215] "Topology Admit Handler" podUID="1a7104ce-d69c-4f97-9d4d-e9fda466ad06" podNamespace="kube-system" podName="cilium-8tv5r" Dec 13 
02:32:32.156685 kubelet[2038]: I1213 02:32:32.156644 2038 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:32:32.162187 kubelet[2038]: I1213 02:32:32.162176 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-run\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162236 kubelet[2038]: I1213 02:32:32.162203 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cni-path\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162236 kubelet[2038]: I1213 02:32:32.162227 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-host-proc-sys-net\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162294 kubelet[2038]: I1213 02:32:32.162259 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/210b172a-971e-4c55-8c50-49ed39a42bcb-kube-proxy\") pod \"kube-proxy-572c9\" (UID: \"210b172a-971e-4c55-8c50-49ed39a42bcb\") " pod="kube-system/kube-proxy-572c9" Dec 13 02:32:32.162294 kubelet[2038]: I1213 02:32:32.162279 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/210b172a-971e-4c55-8c50-49ed39a42bcb-xtables-lock\") pod \"kube-proxy-572c9\" (UID: \"210b172a-971e-4c55-8c50-49ed39a42bcb\") " 
pod="kube-system/kube-proxy-572c9" Dec 13 02:32:32.162338 kubelet[2038]: I1213 02:32:32.162305 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj5z7\" (UniqueName: \"kubernetes.io/projected/210b172a-971e-4c55-8c50-49ed39a42bcb-kube-api-access-xj5z7\") pod \"kube-proxy-572c9\" (UID: \"210b172a-971e-4c55-8c50-49ed39a42bcb\") " pod="kube-system/kube-proxy-572c9" Dec 13 02:32:32.162362 kubelet[2038]: I1213 02:32:32.162337 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-clustermesh-secrets\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162382 kubelet[2038]: I1213 02:32:32.162363 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-config-path\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162402 kubelet[2038]: I1213 02:32:32.162382 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-host-proc-sys-kernel\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162402 kubelet[2038]: I1213 02:32:32.162398 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhs6l\" (UniqueName: \"kubernetes.io/projected/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-kube-api-access-nhs6l\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162439 
kubelet[2038]: I1213 02:32:32.162421 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-bpf-maps\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162462 kubelet[2038]: I1213 02:32:32.162449 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-hostproc\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162548 kubelet[2038]: I1213 02:32:32.162484 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-etc-cni-netd\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162548 kubelet[2038]: I1213 02:32:32.162501 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/210b172a-971e-4c55-8c50-49ed39a42bcb-lib-modules\") pod \"kube-proxy-572c9\" (UID: \"210b172a-971e-4c55-8c50-49ed39a42bcb\") " pod="kube-system/kube-proxy-572c9" Dec 13 02:32:32.162548 kubelet[2038]: I1213 02:32:32.162524 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-cgroup\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162624 kubelet[2038]: I1213 02:32:32.162552 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-lib-modules\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162624 kubelet[2038]: I1213 02:32:32.162569 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-xtables-lock\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.162624 kubelet[2038]: I1213 02:32:32.162585 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-hubble-tls\") pod \"cilium-8tv5r\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " pod="kube-system/cilium-8tv5r" Dec 13 02:32:32.181068 sudo[1878]: pam_unix(sudo:session): session closed for user root Dec 13 02:32:32.182996 sshd[1873]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:32.184834 systemd[1]: sshd@6-139.178.70.191:22-139.178.68.195:51586.service: Deactivated successfully. Dec 13 02:32:32.185668 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:32:32.185721 systemd-logind[1646]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:32:32.186372 systemd-logind[1646]: Removed session 9. 
Dec 13 02:32:32.454222 env[1654]: time="2024-12-13T02:32:32.453992126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8tv5r,Uid:1a7104ce-d69c-4f97-9d4d-e9fda466ad06,Namespace:kube-system,Attempt:0,}" Dec 13 02:32:32.454222 env[1654]: time="2024-12-13T02:32:32.454041066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-572c9,Uid:210b172a-971e-4c55-8c50-49ed39a42bcb,Namespace:kube-system,Attempt:0,}" Dec 13 02:32:33.101143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3856169988.mount: Deactivated successfully. Dec 13 02:32:33.102291 env[1654]: time="2024-12-13T02:32:33.102273434Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:33.103268 env[1654]: time="2024-12-13T02:32:33.103254834Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:33.103865 env[1654]: time="2024-12-13T02:32:33.103855528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:33.104230 env[1654]: time="2024-12-13T02:32:33.104217724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:33.105461 env[1654]: time="2024-12-13T02:32:33.105387662Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:33.105957 env[1654]: time="2024-12-13T02:32:33.105944502Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:33.107266 env[1654]: time="2024-12-13T02:32:33.107227239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:33.108349 env[1654]: time="2024-12-13T02:32:33.108287116Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:33.114711 env[1654]: time="2024-12-13T02:32:33.114651987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:32:33.114711 env[1654]: time="2024-12-13T02:32:33.114676083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:32:33.114711 env[1654]: time="2024-12-13T02:32:33.114683742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:32:33.114812 env[1654]: time="2024-12-13T02:32:33.114759874Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/23edc0dff5d175d4df5f6b6c43b2021577bbaf8faf23b966464a71503a1e69c4 pid=2114 runtime=io.containerd.runc.v2 Dec 13 02:32:33.114887 env[1654]: time="2024-12-13T02:32:33.114867871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:32:33.114915 env[1654]: time="2024-12-13T02:32:33.114886561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:32:33.114915 env[1654]: time="2024-12-13T02:32:33.114894000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:32:33.114973 env[1654]: time="2024-12-13T02:32:33.114960908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d pid=2113 runtime=io.containerd.runc.v2 Dec 13 02:32:33.133158 env[1654]: time="2024-12-13T02:32:33.133126540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-572c9,Uid:210b172a-971e-4c55-8c50-49ed39a42bcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"23edc0dff5d175d4df5f6b6c43b2021577bbaf8faf23b966464a71503a1e69c4\"" Dec 13 02:32:33.133664 env[1654]: time="2024-12-13T02:32:33.133651221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8tv5r,Uid:1a7104ce-d69c-4f97-9d4d-e9fda466ad06,Namespace:kube-system,Attempt:0,} returns sandbox id \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\"" Dec 13 02:32:33.134185 env[1654]: time="2024-12-13T02:32:33.134174754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:32:33.136963 kubelet[2038]: E1213 02:32:33.136929 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:34.025529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4248237645.mount: Deactivated successfully. 
Dec 13 02:32:34.137201 kubelet[2038]: E1213 02:32:34.137151 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:34.377465 env[1654]: time="2024-12-13T02:32:34.377385102Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:34.378052 env[1654]: time="2024-12-13T02:32:34.377992987Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:34.378800 env[1654]: time="2024-12-13T02:32:34.378745032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:34.380132 env[1654]: time="2024-12-13T02:32:34.380117955Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:34.380443 env[1654]: time="2024-12-13T02:32:34.380429267Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:32:34.380779 env[1654]: time="2024-12-13T02:32:34.380764475Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:32:34.381509 env[1654]: time="2024-12-13T02:32:34.381495785Z" level=info msg="CreateContainer within sandbox \"23edc0dff5d175d4df5f6b6c43b2021577bbaf8faf23b966464a71503a1e69c4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:32:34.387200 env[1654]: 
time="2024-12-13T02:32:34.387160427Z" level=info msg="CreateContainer within sandbox \"23edc0dff5d175d4df5f6b6c43b2021577bbaf8faf23b966464a71503a1e69c4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3c45c2bac834522e647fc6812d6ec31eff83bdc9f0577e53312948309be00bc5\"" Dec 13 02:32:34.387588 env[1654]: time="2024-12-13T02:32:34.387522806Z" level=info msg="StartContainer for \"3c45c2bac834522e647fc6812d6ec31eff83bdc9f0577e53312948309be00bc5\"" Dec 13 02:32:34.412706 env[1654]: time="2024-12-13T02:32:34.412660897Z" level=info msg="StartContainer for \"3c45c2bac834522e647fc6812d6ec31eff83bdc9f0577e53312948309be00bc5\" returns successfully" Dec 13 02:32:35.137675 kubelet[2038]: E1213 02:32:35.137550 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:35.376705 kubelet[2038]: I1213 02:32:35.376605 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-572c9" podStartSLOduration=3.129768515 podStartE2EDuration="4.376467569s" podCreationTimestamp="2024-12-13 02:32:31 +0000 UTC" firstStartedPulling="2024-12-13 02:32:33.133959154 +0000 UTC m=+2.471733923" lastFinishedPulling="2024-12-13 02:32:34.380658213 +0000 UTC m=+3.718432977" observedRunningTime="2024-12-13 02:32:35.375924505 +0000 UTC m=+4.713699346" watchObservedRunningTime="2024-12-13 02:32:35.376467569 +0000 UTC m=+4.714242439" Dec 13 02:32:36.138872 kubelet[2038]: E1213 02:32:36.138789 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:37.138979 kubelet[2038]: E1213 02:32:37.138916 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:38.139448 kubelet[2038]: E1213 02:32:38.139399 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 
13 02:32:39.139629 kubelet[2038]: E1213 02:32:39.139609 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:39.536995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139582738.mount: Deactivated successfully. Dec 13 02:32:40.140527 kubelet[2038]: E1213 02:32:40.140465 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:41.140784 kubelet[2038]: E1213 02:32:41.140740 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:41.188566 env[1654]: time="2024-12-13T02:32:41.188512645Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:41.189117 env[1654]: time="2024-12-13T02:32:41.189100336Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:41.190012 env[1654]: time="2024-12-13T02:32:41.189999347Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:41.190842 env[1654]: time="2024-12-13T02:32:41.190826634Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:32:41.191977 env[1654]: time="2024-12-13T02:32:41.191922281Z" level=info msg="CreateContainer within sandbox 
\"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:32:41.217457 env[1654]: time="2024-12-13T02:32:41.217433264Z" level=info msg="CreateContainer within sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603\"" Dec 13 02:32:41.217620 env[1654]: time="2024-12-13T02:32:41.217605591Z" level=info msg="StartContainer for \"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603\"" Dec 13 02:32:41.238731 env[1654]: time="2024-12-13T02:32:41.238671259Z" level=info msg="StartContainer for \"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603\" returns successfully" Dec 13 02:32:41.916723 systemd-timesyncd[1591]: Contacted time server [2600:3c01:e000:7e6::123]:123 (2.flatcar.pool.ntp.org). Dec 13 02:32:41.916837 systemd-timesyncd[1591]: Initial clock synchronization to Fri 2024-12-13 02:32:42.032332 UTC. Dec 13 02:32:42.141012 kubelet[2038]: E1213 02:32:42.140906 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:42.219201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603-rootfs.mount: Deactivated successfully. 
Dec 13 02:32:42.599997 env[1654]: time="2024-12-13T02:32:42.599900886Z" level=info msg="shim disconnected" id=9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603 Dec 13 02:32:42.600942 env[1654]: time="2024-12-13T02:32:42.600004582Z" level=warning msg="cleaning up after shim disconnected" id=9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603 namespace=k8s.io Dec 13 02:32:42.600942 env[1654]: time="2024-12-13T02:32:42.600038291Z" level=info msg="cleaning up dead shim" Dec 13 02:32:42.612502 env[1654]: time="2024-12-13T02:32:42.612454854Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:32:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2398 runtime=io.containerd.runc.v2\n" Dec 13 02:32:43.141690 kubelet[2038]: E1213 02:32:43.141596 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:43.398904 env[1654]: time="2024-12-13T02:32:43.398727456Z" level=info msg="CreateContainer within sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:32:43.412838 env[1654]: time="2024-12-13T02:32:43.412725319Z" level=info msg="CreateContainer within sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af\"" Dec 13 02:32:43.413296 env[1654]: time="2024-12-13T02:32:43.413261623Z" level=info msg="StartContainer for \"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af\"" Dec 13 02:32:43.434518 env[1654]: time="2024-12-13T02:32:43.434461268Z" level=info msg="StartContainer for \"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af\" returns successfully" Dec 13 02:32:43.441652 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Dec 13 02:32:43.441929 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:32:43.442060 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:32:43.443135 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:32:43.444855 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:32:43.447717 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:32:43.453225 env[1654]: time="2024-12-13T02:32:43.453170427Z" level=info msg="shim disconnected" id=89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af Dec 13 02:32:43.453225 env[1654]: time="2024-12-13T02:32:43.453199038Z" level=warning msg="cleaning up after shim disconnected" id=89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af namespace=k8s.io Dec 13 02:32:43.453225 env[1654]: time="2024-12-13T02:32:43.453206075Z" level=info msg="cleaning up dead shim" Dec 13 02:32:43.457033 env[1654]: time="2024-12-13T02:32:43.456983562Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:32:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2467 runtime=io.containerd.runc.v2\n" Dec 13 02:32:44.141884 kubelet[2038]: E1213 02:32:44.141765 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:44.395617 env[1654]: time="2024-12-13T02:32:44.395377703Z" level=info msg="CreateContainer within sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:32:44.405869 env[1654]: time="2024-12-13T02:32:44.405821526Z" level=info msg="CreateContainer within sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7\"" Dec 13 02:32:44.406164 env[1654]: time="2024-12-13T02:32:44.406103139Z" level=info msg="StartContainer for 
\"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7\"" Dec 13 02:32:44.409753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af-rootfs.mount: Deactivated successfully. Dec 13 02:32:44.431079 env[1654]: time="2024-12-13T02:32:44.431020534Z" level=info msg="StartContainer for \"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7\" returns successfully" Dec 13 02:32:44.441779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7-rootfs.mount: Deactivated successfully. Dec 13 02:32:44.443170 env[1654]: time="2024-12-13T02:32:44.443142370Z" level=info msg="shim disconnected" id=c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7 Dec 13 02:32:44.443250 env[1654]: time="2024-12-13T02:32:44.443172928Z" level=warning msg="cleaning up after shim disconnected" id=c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7 namespace=k8s.io Dec 13 02:32:44.443250 env[1654]: time="2024-12-13T02:32:44.443180560Z" level=info msg="cleaning up dead shim" Dec 13 02:32:44.448026 env[1654]: time="2024-12-13T02:32:44.448004384Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:32:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2524 runtime=io.containerd.runc.v2\n" Dec 13 02:32:45.142629 kubelet[2038]: E1213 02:32:45.142511 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:45.403077 env[1654]: time="2024-12-13T02:32:45.402844918Z" level=info msg="CreateContainer within sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:32:45.429209 env[1654]: time="2024-12-13T02:32:45.429092960Z" level=info msg="CreateContainer within sandbox 
\"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f\"" Dec 13 02:32:45.429692 env[1654]: time="2024-12-13T02:32:45.429623061Z" level=info msg="StartContainer for \"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f\"" Dec 13 02:32:45.449851 env[1654]: time="2024-12-13T02:32:45.449822061Z" level=info msg="StartContainer for \"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f\" returns successfully" Dec 13 02:32:45.457161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f-rootfs.mount: Deactivated successfully. Dec 13 02:32:45.457972 env[1654]: time="2024-12-13T02:32:45.457945022Z" level=info msg="shim disconnected" id=145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f Dec 13 02:32:45.458015 env[1654]: time="2024-12-13T02:32:45.457974914Z" level=warning msg="cleaning up after shim disconnected" id=145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f namespace=k8s.io Dec 13 02:32:45.458015 env[1654]: time="2024-12-13T02:32:45.457981268Z" level=info msg="cleaning up dead shim" Dec 13 02:32:45.461671 env[1654]: time="2024-12-13T02:32:45.461626210Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:32:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2577 runtime=io.containerd.runc.v2\n" Dec 13 02:32:46.143731 kubelet[2038]: E1213 02:32:46.143659 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:46.412884 env[1654]: time="2024-12-13T02:32:46.412687822Z" level=info msg="CreateContainer within sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:32:46.427678 env[1654]: 
time="2024-12-13T02:32:46.427534758Z" level=info msg="CreateContainer within sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\"" Dec 13 02:32:46.428648 env[1654]: time="2024-12-13T02:32:46.428527491Z" level=info msg="StartContainer for \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\"" Dec 13 02:32:46.452428 env[1654]: time="2024-12-13T02:32:46.452400415Z" level=info msg="StartContainer for \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\" returns successfully" Dec 13 02:32:46.506327 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Dec 13 02:32:46.517995 kubelet[2038]: I1213 02:32:46.517983 2038 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:32:46.663327 kernel: Initializing XFRM netlink socket Dec 13 02:32:46.676402 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Dec 13 02:32:47.144474 kubelet[2038]: E1213 02:32:47.144401 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:47.862379 systemd-networkd[1404]: cilium_host: Link UP Dec 13 02:32:47.862466 systemd-networkd[1404]: cilium_net: Link UP Dec 13 02:32:47.869683 systemd-networkd[1404]: cilium_net: Gained carrier Dec 13 02:32:47.876878 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:32:47.876950 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:32:47.876987 systemd-networkd[1404]: cilium_host: Gained carrier Dec 13 02:32:47.922815 systemd-networkd[1404]: cilium_vxlan: Link UP Dec 13 02:32:47.922819 systemd-networkd[1404]: cilium_vxlan: Gained carrier Dec 13 02:32:48.043240 kubelet[2038]: I1213 02:32:48.043217 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8tv5r" podStartSLOduration=8.986271618 podStartE2EDuration="17.043176692s" podCreationTimestamp="2024-12-13 02:32:31 +0000 UTC" firstStartedPulling="2024-12-13 02:32:33.134057765 +0000 UTC m=+2.471832534" lastFinishedPulling="2024-12-13 02:32:41.19096284 +0000 UTC m=+10.528737608" observedRunningTime="2024-12-13 02:32:47.45391861 +0000 UTC m=+16.791693451" watchObservedRunningTime="2024-12-13 02:32:48.043176692 +0000 UTC m=+17.380951464" Dec 13 02:32:48.043490 kubelet[2038]: I1213 02:32:48.043452 2038 topology_manager.go:215] "Topology Admit Handler" podUID="2f185352-8418-4d6a-a2c9-1173a6f669bf" podNamespace="default" podName="nginx-deployment-6d5f899847-2mknf" Dec 13 02:32:48.057394 kernel: NET: Registered PF_ALG protocol family Dec 13 02:32:48.067611 kubelet[2038]: I1213 02:32:48.067565 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt9mh\" (UniqueName: \"kubernetes.io/projected/2f185352-8418-4d6a-a2c9-1173a6f669bf-kube-api-access-bt9mh\") pod 
\"nginx-deployment-6d5f899847-2mknf\" (UID: \"2f185352-8418-4d6a-a2c9-1173a6f669bf\") " pod="default/nginx-deployment-6d5f899847-2mknf" Dec 13 02:32:48.145069 kubelet[2038]: E1213 02:32:48.144979 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:48.239461 systemd-networkd[1404]: cilium_net: Gained IPv6LL Dec 13 02:32:48.345905 env[1654]: time="2024-12-13T02:32:48.345878785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2mknf,Uid:2f185352-8418-4d6a-a2c9-1173a6f669bf,Namespace:default,Attempt:0,}" Dec 13 02:32:48.508094 systemd-networkd[1404]: lxc_health: Link UP Dec 13 02:32:48.531251 systemd-networkd[1404]: lxc_health: Gained carrier Dec 13 02:32:48.531404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:32:48.855907 systemd-networkd[1404]: cilium_host: Gained IPv6LL Dec 13 02:32:48.860145 systemd-networkd[1404]: lxc2f101b535dc6: Link UP Dec 13 02:32:48.884410 kernel: eth0: renamed from tmpa4f77 Dec 13 02:32:48.915863 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:32:48.915937 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2f101b535dc6: link becomes ready Dec 13 02:32:48.915992 systemd-networkd[1404]: lxc2f101b535dc6: Gained carrier Dec 13 02:32:49.146013 kubelet[2038]: E1213 02:32:49.145932 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:49.623521 systemd-networkd[1404]: cilium_vxlan: Gained IPv6LL Dec 13 02:32:50.146898 kubelet[2038]: E1213 02:32:50.146871 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:50.455581 systemd-networkd[1404]: lxc2f101b535dc6: Gained IPv6LL Dec 13 02:32:50.519471 systemd-networkd[1404]: lxc_health: Gained IPv6LL Dec 13 02:32:51.135883 kubelet[2038]: E1213 02:32:51.135860 2038 file.go:104] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:51.139362 env[1654]: time="2024-12-13T02:32:51.139324355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:32:51.139362 env[1654]: time="2024-12-13T02:32:51.139346830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:32:51.139362 env[1654]: time="2024-12-13T02:32:51.139354180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:32:51.139580 env[1654]: time="2024-12-13T02:32:51.139458769Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4f774ffdd381bfef3c834d614c32fc672afed279c8421eacc9c42ff36b4bdc7 pid=3227 runtime=io.containerd.runc.v2 Dec 13 02:32:51.147493 kubelet[2038]: E1213 02:32:51.147469 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:51.168489 env[1654]: time="2024-12-13T02:32:51.168433710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-2mknf,Uid:2f185352-8418-4d6a-a2c9-1173a6f669bf,Namespace:default,Attempt:0,} returns sandbox id \"a4f774ffdd381bfef3c834d614c32fc672afed279c8421eacc9c42ff36b4bdc7\"" Dec 13 02:32:51.169159 env[1654]: time="2024-12-13T02:32:51.169147635Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:32:52.148371 kubelet[2038]: E1213 02:32:52.148254 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:53.149381 kubelet[2038]: E1213 02:32:53.149330 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
02:32:53.371780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1059416857.mount: Deactivated successfully. Dec 13 02:32:54.149500 kubelet[2038]: E1213 02:32:54.149426 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:54.214710 env[1654]: time="2024-12-13T02:32:54.214654336Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:54.215206 env[1654]: time="2024-12-13T02:32:54.215167270Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:54.216964 env[1654]: time="2024-12-13T02:32:54.216916413Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:54.218103 env[1654]: time="2024-12-13T02:32:54.218088673Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:32:54.218588 env[1654]: time="2024-12-13T02:32:54.218544023Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:32:54.219757 env[1654]: time="2024-12-13T02:32:54.219712822Z" level=info msg="CreateContainer within sandbox \"a4f774ffdd381bfef3c834d614c32fc672afed279c8421eacc9c42ff36b4bdc7\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 02:32:54.223862 env[1654]: time="2024-12-13T02:32:54.223824435Z" level=info msg="CreateContainer within sandbox 
\"a4f774ffdd381bfef3c834d614c32fc672afed279c8421eacc9c42ff36b4bdc7\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b0cee82b11a4e3079188a8501b7aa7e8aeea6a2f41695b8ef8024d4ff8c20c55\"" Dec 13 02:32:54.224123 env[1654]: time="2024-12-13T02:32:54.224110521Z" level=info msg="StartContainer for \"b0cee82b11a4e3079188a8501b7aa7e8aeea6a2f41695b8ef8024d4ff8c20c55\"" Dec 13 02:32:54.245077 env[1654]: time="2024-12-13T02:32:54.245050933Z" level=info msg="StartContainer for \"b0cee82b11a4e3079188a8501b7aa7e8aeea6a2f41695b8ef8024d4ff8c20c55\" returns successfully" Dec 13 02:32:54.449873 kubelet[2038]: I1213 02:32:54.449655 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-2mknf" podStartSLOduration=3.399892548 podStartE2EDuration="6.449568503s" podCreationTimestamp="2024-12-13 02:32:48 +0000 UTC" firstStartedPulling="2024-12-13 02:32:51.169011023 +0000 UTC m=+20.506785791" lastFinishedPulling="2024-12-13 02:32:54.218686974 +0000 UTC m=+23.556461746" observedRunningTime="2024-12-13 02:32:54.449104823 +0000 UTC m=+23.786879656" watchObservedRunningTime="2024-12-13 02:32:54.449568503 +0000 UTC m=+23.787343336" Dec 13 02:32:55.149829 kubelet[2038]: E1213 02:32:55.149717 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:55.944551 update_engine[1648]: I1213 02:32:55.944424 1648 update_attempter.cc:509] Updating boot flags... 
Dec 13 02:32:56.150778 kubelet[2038]: E1213 02:32:56.150666 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:57.151760 kubelet[2038]: E1213 02:32:57.151647 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:58.152851 kubelet[2038]: E1213 02:32:58.152736 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:59.153783 kubelet[2038]: E1213 02:32:59.153668 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:32:59.809506 kubelet[2038]: I1213 02:32:59.809461 2038 topology_manager.go:215] "Topology Admit Handler" podUID="56db2f50-7db2-40c7-b5d2-be6c3ae43fd6" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 02:32:59.845543 kubelet[2038]: I1213 02:32:59.845481 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnlp4\" (UniqueName: \"kubernetes.io/projected/56db2f50-7db2-40c7-b5d2-be6c3ae43fd6-kube-api-access-hnlp4\") pod \"nfs-server-provisioner-0\" (UID: \"56db2f50-7db2-40c7-b5d2-be6c3ae43fd6\") " pod="default/nfs-server-provisioner-0" Dec 13 02:32:59.845543 kubelet[2038]: I1213 02:32:59.845530 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/56db2f50-7db2-40c7-b5d2-be6c3ae43fd6-data\") pod \"nfs-server-provisioner-0\" (UID: \"56db2f50-7db2-40c7-b5d2-be6c3ae43fd6\") " pod="default/nfs-server-provisioner-0" Dec 13 02:33:00.112955 env[1654]: time="2024-12-13T02:33:00.112825623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:56db2f50-7db2-40c7-b5d2-be6c3ae43fd6,Namespace:default,Attempt:0,}" Dec 13 02:33:00.136041 
systemd-networkd[1404]: lxc359dd9e0fca4: Link UP Dec 13 02:33:00.154470 kubelet[2038]: E1213 02:33:00.154422 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:00.160334 kernel: eth0: renamed from tmp3b6ed Dec 13 02:33:00.191090 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:33:00.191132 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc359dd9e0fca4: link becomes ready Dec 13 02:33:00.191163 systemd-networkd[1404]: lxc359dd9e0fca4: Gained carrier Dec 13 02:33:00.294510 env[1654]: time="2024-12-13T02:33:00.294478630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:33:00.294510 env[1654]: time="2024-12-13T02:33:00.294498793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:33:00.294510 env[1654]: time="2024-12-13T02:33:00.294505324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:33:00.294721 env[1654]: time="2024-12-13T02:33:00.294672059Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b6ed9cae1d1731da2f8ea8641b48ddf1cb526b5dc8ab93f97487b06a93f4a78 pid=3415 runtime=io.containerd.runc.v2 Dec 13 02:33:00.321618 env[1654]: time="2024-12-13T02:33:00.321592934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:56db2f50-7db2-40c7-b5d2-be6c3ae43fd6,Namespace:default,Attempt:0,} returns sandbox id \"3b6ed9cae1d1731da2f8ea8641b48ddf1cb526b5dc8ab93f97487b06a93f4a78\"" Dec 13 02:33:00.322300 env[1654]: time="2024-12-13T02:33:00.322287870Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 02:33:00.436766 systemd[1]: Started sshd@7-139.178.70.191:22-92.255.85.188:42942.service. Dec 13 02:33:01.155098 kubelet[2038]: E1213 02:33:01.155040 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:01.726434 sshd[3449]: Invalid user postgres from 92.255.85.188 port 42942 Dec 13 02:33:01.924802 sshd[3449]: pam_faillock(sshd:auth): User unknown Dec 13 02:33:01.925019 sshd[3449]: pam_unix(sshd:auth): check pass; user unknown Dec 13 02:33:01.925039 sshd[3449]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.188 Dec 13 02:33:01.925225 sshd[3449]: pam_faillock(sshd:auth): User unknown Dec 13 02:33:02.103493 systemd-networkd[1404]: lxc359dd9e0fca4: Gained IPv6LL Dec 13 02:33:02.155250 kubelet[2038]: E1213 02:33:02.155195 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:02.204162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827864696.mount: Deactivated successfully. 
Dec 13 02:33:03.156168 kubelet[2038]: E1213 02:33:03.156060 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:03.447743 env[1654]: time="2024-12-13T02:33:03.447655541Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:33:03.448346 env[1654]: time="2024-12-13T02:33:03.448303277Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:33:03.449105 env[1654]: time="2024-12-13T02:33:03.449064896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:33:03.450193 env[1654]: time="2024-12-13T02:33:03.450152012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:33:03.450504 env[1654]: time="2024-12-13T02:33:03.450463324Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 02:33:03.451885 env[1654]: time="2024-12-13T02:33:03.451822962Z" level=info msg="CreateContainer within sandbox \"3b6ed9cae1d1731da2f8ea8641b48ddf1cb526b5dc8ab93f97487b06a93f4a78\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 02:33:03.456209 env[1654]: time="2024-12-13T02:33:03.456164000Z" level=info msg="CreateContainer within sandbox 
\"3b6ed9cae1d1731da2f8ea8641b48ddf1cb526b5dc8ab93f97487b06a93f4a78\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"841088b95f952c4b656706aaafe1eb89c0f80e605e3dca0af53b52daffcabb40\"" Dec 13 02:33:03.456448 env[1654]: time="2024-12-13T02:33:03.456405240Z" level=info msg="StartContainer for \"841088b95f952c4b656706aaafe1eb89c0f80e605e3dca0af53b52daffcabb40\"" Dec 13 02:33:03.477465 env[1654]: time="2024-12-13T02:33:03.477435268Z" level=info msg="StartContainer for \"841088b95f952c4b656706aaafe1eb89c0f80e605e3dca0af53b52daffcabb40\" returns successfully" Dec 13 02:33:03.706369 sshd[3449]: Failed password for invalid user postgres from 92.255.85.188 port 42942 ssh2 Dec 13 02:33:04.157436 kubelet[2038]: E1213 02:33:04.157296 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:04.221924 sshd[3449]: Connection closed by invalid user postgres 92.255.85.188 port 42942 [preauth] Dec 13 02:33:04.224666 systemd[1]: sshd@7-139.178.70.191:22-92.255.85.188:42942.service: Deactivated successfully. 
Dec 13 02:33:05.158008 kubelet[2038]: E1213 02:33:05.157934 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:06.158554 kubelet[2038]: E1213 02:33:06.158482 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:07.159612 kubelet[2038]: E1213 02:33:07.159510 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:08.160526 kubelet[2038]: E1213 02:33:08.160417 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:09.161765 kubelet[2038]: E1213 02:33:09.161651 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:10.162616 kubelet[2038]: E1213 02:33:10.162504 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:11.136090 kubelet[2038]: E1213 02:33:11.135974 2038 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:11.163213 kubelet[2038]: E1213 02:33:11.163106 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:12.164455 kubelet[2038]: E1213 02:33:12.164331 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:13.165499 kubelet[2038]: E1213 02:33:13.165386 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:13.170581 kubelet[2038]: I1213 02:33:13.170477 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" 
podStartSLOduration=11.041832021 podStartE2EDuration="14.17037326s" podCreationTimestamp="2024-12-13 02:32:59 +0000 UTC" firstStartedPulling="2024-12-13 02:33:00.322152168 +0000 UTC m=+29.659926937" lastFinishedPulling="2024-12-13 02:33:03.450693408 +0000 UTC m=+32.788468176" observedRunningTime="2024-12-13 02:33:04.475202735 +0000 UTC m=+33.812977573" watchObservedRunningTime="2024-12-13 02:33:13.17037326 +0000 UTC m=+42.508148081" Dec 13 02:33:13.170945 kubelet[2038]: I1213 02:33:13.170738 2038 topology_manager.go:215] "Topology Admit Handler" podUID="254261b7-de55-4e92-83cc-c6c9aab51ae1" podNamespace="default" podName="test-pod-1" Dec 13 02:33:13.338742 kubelet[2038]: I1213 02:33:13.338640 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba7440e9-b72e-4cd3-95f2-ad6b1443550a\" (UniqueName: \"kubernetes.io/nfs/254261b7-de55-4e92-83cc-c6c9aab51ae1-pvc-ba7440e9-b72e-4cd3-95f2-ad6b1443550a\") pod \"test-pod-1\" (UID: \"254261b7-de55-4e92-83cc-c6c9aab51ae1\") " pod="default/test-pod-1" Dec 13 02:33:13.339136 kubelet[2038]: I1213 02:33:13.338910 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb9jk\" (UniqueName: \"kubernetes.io/projected/254261b7-de55-4e92-83cc-c6c9aab51ae1-kube-api-access-mb9jk\") pod \"test-pod-1\" (UID: \"254261b7-de55-4e92-83cc-c6c9aab51ae1\") " pod="default/test-pod-1" Dec 13 02:33:13.462368 kernel: FS-Cache: Loaded Dec 13 02:33:13.504189 kernel: RPC: Registered named UNIX socket transport module. Dec 13 02:33:13.504252 kernel: RPC: Registered udp transport module. Dec 13 02:33:13.504266 kernel: RPC: Registered tcp transport module. Dec 13 02:33:13.509111 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 02:33:13.562326 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 02:33:13.690981 kernel: NFS: Registering the id_resolver key type Dec 13 02:33:13.691034 kernel: Key type id_resolver registered Dec 13 02:33:13.691048 kernel: Key type id_legacy registered Dec 13 02:33:13.876135 nfsidmap[3549]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-01e29b9675' Dec 13 02:33:13.883541 nfsidmap[3550]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.6-a-01e29b9675' Dec 13 02:33:14.077014 env[1654]: time="2024-12-13T02:33:14.076919230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:254261b7-de55-4e92-83cc-c6c9aab51ae1,Namespace:default,Attempt:0,}" Dec 13 02:33:14.117258 systemd-networkd[1404]: lxcd244951433ed: Link UP Dec 13 02:33:14.138335 kernel: eth0: renamed from tmp0b469 Dec 13 02:33:14.158844 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:33:14.158889 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd244951433ed: link becomes ready Dec 13 02:33:14.158895 systemd-networkd[1404]: lxcd244951433ed: Gained carrier Dec 13 02:33:14.166270 kubelet[2038]: E1213 02:33:14.166175 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:14.269544 env[1654]: time="2024-12-13T02:33:14.269475663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:33:14.269544 env[1654]: time="2024-12-13T02:33:14.269496414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:33:14.269544 env[1654]: time="2024-12-13T02:33:14.269503038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:33:14.269672 env[1654]: time="2024-12-13T02:33:14.269568292Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b469fd0dac4a6864beb2bc517982dc3b9eed32c334a091c29a07e3845f36df6 pid=3610 runtime=io.containerd.runc.v2 Dec 13 02:33:14.297189 env[1654]: time="2024-12-13T02:33:14.297166797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:254261b7-de55-4e92-83cc-c6c9aab51ae1,Namespace:default,Attempt:0,} returns sandbox id \"0b469fd0dac4a6864beb2bc517982dc3b9eed32c334a091c29a07e3845f36df6\"" Dec 13 02:33:14.297920 env[1654]: time="2024-12-13T02:33:14.297880078Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:33:14.668101 env[1654]: time="2024-12-13T02:33:14.667973225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:33:14.670722 env[1654]: time="2024-12-13T02:33:14.670618618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:33:14.676006 env[1654]: time="2024-12-13T02:33:14.675882976Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:33:14.681211 env[1654]: time="2024-12-13T02:33:14.681109500Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:33:14.683709 env[1654]: time="2024-12-13T02:33:14.683592160Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:33:14.687900 env[1654]: time="2024-12-13T02:33:14.687830774Z" level=info msg="CreateContainer within sandbox \"0b469fd0dac4a6864beb2bc517982dc3b9eed32c334a091c29a07e3845f36df6\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 02:33:14.704141 env[1654]: time="2024-12-13T02:33:14.704012906Z" level=info msg="CreateContainer within sandbox \"0b469fd0dac4a6864beb2bc517982dc3b9eed32c334a091c29a07e3845f36df6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a8aa70858fe2555000d04afc027fbdc9197d24e19c3802dde33ec2a9bb238791\"" Dec 13 02:33:14.704967 env[1654]: time="2024-12-13T02:33:14.704867522Z" level=info msg="StartContainer for \"a8aa70858fe2555000d04afc027fbdc9197d24e19c3802dde33ec2a9bb238791\"" Dec 13 02:33:14.777512 env[1654]: time="2024-12-13T02:33:14.777406280Z" level=info msg="StartContainer for \"a8aa70858fe2555000d04afc027fbdc9197d24e19c3802dde33ec2a9bb238791\" returns successfully" Dec 13 02:33:15.167494 kubelet[2038]: E1213 02:33:15.167371 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:15.510158 kubelet[2038]: I1213 02:33:15.509939 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.123603963 podStartE2EDuration="15.509848008s" podCreationTimestamp="2024-12-13 02:33:00 +0000 UTC" firstStartedPulling="2024-12-13 02:33:14.29774433 +0000 UTC m=+43.635519097" lastFinishedPulling="2024-12-13 02:33:14.683988282 +0000 UTC m=+44.021763142" observedRunningTime="2024-12-13 02:33:15.509121289 +0000 UTC m=+44.846896126" watchObservedRunningTime="2024-12-13 02:33:15.509848008 +0000 UTC m=+44.847622842" Dec 13 02:33:15.863597 systemd-networkd[1404]: lxcd244951433ed: Gained IPv6LL Dec 13 02:33:16.168625 kubelet[2038]: E1213 02:33:16.168396 2038 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:17.028924 env[1654]: time="2024-12-13T02:33:17.028862618Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:33:17.031752 env[1654]: time="2024-12-13T02:33:17.031709552Z" level=info msg="StopContainer for \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\" with timeout 2 (s)" Dec 13 02:33:17.031848 env[1654]: time="2024-12-13T02:33:17.031804311Z" level=info msg="Stop container \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\" with signal terminated" Dec 13 02:33:17.034878 systemd-networkd[1404]: lxc_health: Link DOWN Dec 13 02:33:17.034882 systemd-networkd[1404]: lxc_health: Lost carrier Dec 13 02:33:17.108808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b-rootfs.mount: Deactivated successfully. 
Dec 13 02:33:17.169722 kubelet[2038]: E1213 02:33:17.169612 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:18.170099 kubelet[2038]: E1213 02:33:18.169982 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:18.244771 env[1654]: time="2024-12-13T02:33:18.244621252Z" level=info msg="shim disconnected" id=0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b Dec 13 02:33:18.244771 env[1654]: time="2024-12-13T02:33:18.244723210Z" level=warning msg="cleaning up after shim disconnected" id=0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b namespace=k8s.io Dec 13 02:33:18.244771 env[1654]: time="2024-12-13T02:33:18.244754830Z" level=info msg="cleaning up dead shim" Dec 13 02:33:18.253208 env[1654]: time="2024-12-13T02:33:18.253193982Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3749 runtime=io.containerd.runc.v2\n" Dec 13 02:33:18.254172 env[1654]: time="2024-12-13T02:33:18.254134537Z" level=info msg="StopContainer for \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\" returns successfully" Dec 13 02:33:18.254716 env[1654]: time="2024-12-13T02:33:18.254703788Z" level=info msg="StopPodSandbox for \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\"" Dec 13 02:33:18.254758 env[1654]: time="2024-12-13T02:33:18.254746672Z" level=info msg="Container to stop \"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:33:18.254784 env[1654]: time="2024-12-13T02:33:18.254757252Z" level=info msg="Container to stop \"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:33:18.254784 
env[1654]: time="2024-12-13T02:33:18.254763596Z" level=info msg="Container to stop \"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:33:18.254784 env[1654]: time="2024-12-13T02:33:18.254770005Z" level=info msg="Container to stop \"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:33:18.254784 env[1654]: time="2024-12-13T02:33:18.254775075Z" level=info msg="Container to stop \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:33:18.256292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d-shm.mount: Deactivated successfully. Dec 13 02:33:18.265643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d-rootfs.mount: Deactivated successfully. 
Dec 13 02:33:18.266465 env[1654]: time="2024-12-13T02:33:18.266416349Z" level=info msg="shim disconnected" id=06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d Dec 13 02:33:18.266536 env[1654]: time="2024-12-13T02:33:18.266461314Z" level=warning msg="cleaning up after shim disconnected" id=06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d namespace=k8s.io Dec 13 02:33:18.266536 env[1654]: time="2024-12-13T02:33:18.266473368Z" level=info msg="cleaning up dead shim" Dec 13 02:33:18.270644 env[1654]: time="2024-12-13T02:33:18.270595749Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3781 runtime=io.containerd.runc.v2\n" Dec 13 02:33:18.270830 env[1654]: time="2024-12-13T02:33:18.270783210Z" level=info msg="TearDown network for sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" successfully" Dec 13 02:33:18.270830 env[1654]: time="2024-12-13T02:33:18.270799104Z" level=info msg="StopPodSandbox for \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" returns successfully" Dec 13 02:33:18.379238 kubelet[2038]: I1213 02:33:18.379136 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-host-proc-sys-net\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.379607 kubelet[2038]: I1213 02:33:18.379279 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-host-proc-sys-kernel\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.379607 kubelet[2038]: I1213 02:33:18.379275 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:18.379607 kubelet[2038]: I1213 02:33:18.379423 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-xtables-lock\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.379607 kubelet[2038]: I1213 02:33:18.379411 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:18.379607 kubelet[2038]: I1213 02:33:18.379521 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-hubble-tls\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.380203 kubelet[2038]: I1213 02:33:18.379549 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:18.380203 kubelet[2038]: I1213 02:33:18.379609 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-run\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.380203 kubelet[2038]: I1213 02:33:18.379659 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:18.380203 kubelet[2038]: I1213 02:33:18.379707 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cni-path\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.380203 kubelet[2038]: I1213 02:33:18.379788 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cni-path" (OuterVolumeSpecName: "cni-path") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:18.380771 kubelet[2038]: I1213 02:33:18.379817 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhs6l\" (UniqueName: \"kubernetes.io/projected/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-kube-api-access-nhs6l\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.380771 kubelet[2038]: I1213 02:33:18.379927 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-bpf-maps\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.380771 kubelet[2038]: I1213 02:33:18.380027 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-etc-cni-netd\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.380771 kubelet[2038]: I1213 02:33:18.380067 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:18.380771 kubelet[2038]: I1213 02:33:18.380157 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-cgroup\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.380771 kubelet[2038]: I1213 02:33:18.380191 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:18.381393 kubelet[2038]: I1213 02:33:18.380242 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:18.381393 kubelet[2038]: I1213 02:33:18.380277 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-config-path\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.381393 kubelet[2038]: I1213 02:33:18.380398 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-hostproc\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.381393 kubelet[2038]: I1213 02:33:18.380471 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-lib-modules\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.381393 kubelet[2038]: I1213 02:33:18.380485 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-hostproc" (OuterVolumeSpecName: "hostproc") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:18.381393 kubelet[2038]: I1213 02:33:18.380535 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-clustermesh-secrets\") pod \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\" (UID: \"1a7104ce-d69c-4f97-9d4d-e9fda466ad06\") " Dec 13 02:33:18.382062 kubelet[2038]: I1213 02:33:18.380547 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:18.382062 kubelet[2038]: I1213 02:33:18.380616 2038 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-etc-cni-netd\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.382062 kubelet[2038]: I1213 02:33:18.380664 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-cgroup\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.382062 kubelet[2038]: I1213 02:33:18.380707 2038 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-bpf-maps\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.382062 kubelet[2038]: I1213 02:33:18.380739 2038 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-hostproc\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.382062 kubelet[2038]: I1213 02:33:18.380771 2038 reconciler_common.go:300] 
"Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-host-proc-sys-kernel\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.382062 kubelet[2038]: I1213 02:33:18.380804 2038 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-xtables-lock\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.382062 kubelet[2038]: I1213 02:33:18.380832 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-run\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.382878 kubelet[2038]: I1213 02:33:18.380865 2038 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-host-proc-sys-net\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.382878 kubelet[2038]: I1213 02:33:18.380893 2038 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cni-path\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.385512 kubelet[2038]: I1213 02:33:18.385417 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:33:18.386049 kubelet[2038]: I1213 02:33:18.386035 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-kube-api-access-nhs6l" (OuterVolumeSpecName: "kube-api-access-nhs6l") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "kube-api-access-nhs6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:33:18.386090 kubelet[2038]: I1213 02:33:18.386049 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:33:18.386122 kubelet[2038]: I1213 02:33:18.386102 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1a7104ce-d69c-4f97-9d4d-e9fda466ad06" (UID: "1a7104ce-d69c-4f97-9d4d-e9fda466ad06"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:33:18.387107 systemd[1]: var-lib-kubelet-pods-1a7104ce\x2dd69c\x2d4f97\x2d9d4d\x2de9fda466ad06-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnhs6l.mount: Deactivated successfully. Dec 13 02:33:18.387186 systemd[1]: var-lib-kubelet-pods-1a7104ce\x2dd69c\x2d4f97\x2d9d4d\x2de9fda466ad06-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:33:18.387244 systemd[1]: var-lib-kubelet-pods-1a7104ce\x2dd69c\x2d4f97\x2d9d4d\x2de9fda466ad06-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 02:33:18.482230 kubelet[2038]: I1213 02:33:18.481999 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-cilium-config-path\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.482230 kubelet[2038]: I1213 02:33:18.482075 2038 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nhs6l\" (UniqueName: \"kubernetes.io/projected/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-kube-api-access-nhs6l\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.482230 kubelet[2038]: I1213 02:33:18.482112 2038 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-lib-modules\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.482230 kubelet[2038]: I1213 02:33:18.482150 2038 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-clustermesh-secrets\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.482230 kubelet[2038]: I1213 02:33:18.482180 2038 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a7104ce-d69c-4f97-9d4d-e9fda466ad06-hubble-tls\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:18.509172 kubelet[2038]: I1213 02:33:18.509107 2038 scope.go:117] "RemoveContainer" containerID="0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b" Dec 13 02:33:18.511994 env[1654]: time="2024-12-13T02:33:18.511910501Z" level=info msg="RemoveContainer for \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\"" Dec 13 02:33:18.515595 env[1654]: time="2024-12-13T02:33:18.515113566Z" level=info msg="RemoveContainer for \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\" returns successfully" Dec 13 02:33:18.515673 kubelet[2038]: I1213 02:33:18.515247 2038 
scope.go:117] "RemoveContainer" containerID="145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f" Dec 13 02:33:18.516330 env[1654]: time="2024-12-13T02:33:18.516318023Z" level=info msg="RemoveContainer for \"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f\"" Dec 13 02:33:18.517542 env[1654]: time="2024-12-13T02:33:18.517507037Z" level=info msg="RemoveContainer for \"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f\" returns successfully" Dec 13 02:33:18.517585 kubelet[2038]: I1213 02:33:18.517579 2038 scope.go:117] "RemoveContainer" containerID="c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7" Dec 13 02:33:18.518168 env[1654]: time="2024-12-13T02:33:18.518155571Z" level=info msg="RemoveContainer for \"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7\"" Dec 13 02:33:18.519167 env[1654]: time="2024-12-13T02:33:18.519155426Z" level=info msg="RemoveContainer for \"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7\" returns successfully" Dec 13 02:33:18.519248 kubelet[2038]: I1213 02:33:18.519240 2038 scope.go:117] "RemoveContainer" containerID="89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af" Dec 13 02:33:18.519766 env[1654]: time="2024-12-13T02:33:18.519735410Z" level=info msg="RemoveContainer for \"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af\"" Dec 13 02:33:18.521108 env[1654]: time="2024-12-13T02:33:18.521075241Z" level=info msg="RemoveContainer for \"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af\" returns successfully" Dec 13 02:33:18.521172 kubelet[2038]: I1213 02:33:18.521165 2038 scope.go:117] "RemoveContainer" containerID="9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603" Dec 13 02:33:18.521672 env[1654]: time="2024-12-13T02:33:18.521643446Z" level=info msg="RemoveContainer for \"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603\"" Dec 13 02:33:18.522652 env[1654]: 
time="2024-12-13T02:33:18.522639412Z" level=info msg="RemoveContainer for \"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603\" returns successfully" Dec 13 02:33:18.522698 kubelet[2038]: I1213 02:33:18.522692 2038 scope.go:117] "RemoveContainer" containerID="0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b" Dec 13 02:33:18.522802 env[1654]: time="2024-12-13T02:33:18.522761364Z" level=error msg="ContainerStatus for \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\": not found" Dec 13 02:33:18.522856 kubelet[2038]: E1213 02:33:18.522849 2038 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\": not found" containerID="0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b" Dec 13 02:33:18.522892 kubelet[2038]: I1213 02:33:18.522887 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b"} err="failed to get container status \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f197b6b467e986dc31d4a4e3a05e58147e8c8ee3be6a8b9171436a559c0341b\": not found" Dec 13 02:33:18.522915 kubelet[2038]: I1213 02:33:18.522894 2038 scope.go:117] "RemoveContainer" containerID="145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f" Dec 13 02:33:18.523015 env[1654]: time="2024-12-13T02:33:18.522990258Z" level=error msg="ContainerStatus for \"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f\" failed" error="rpc error: code = NotFound desc = an error occurred when 
try to find container \"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f\": not found" Dec 13 02:33:18.523060 kubelet[2038]: E1213 02:33:18.523055 2038 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f\": not found" containerID="145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f" Dec 13 02:33:18.523082 kubelet[2038]: I1213 02:33:18.523067 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f"} err="failed to get container status \"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f\": rpc error: code = NotFound desc = an error occurred when try to find container \"145b12b7514864adf61aa0bbbe9da011cf4a6b191e6960e401469c4bf9b9d93f\": not found" Dec 13 02:33:18.523082 kubelet[2038]: I1213 02:33:18.523073 2038 scope.go:117] "RemoveContainer" containerID="c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7" Dec 13 02:33:18.523175 env[1654]: time="2024-12-13T02:33:18.523150797Z" level=error msg="ContainerStatus for \"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7\": not found" Dec 13 02:33:18.523213 kubelet[2038]: E1213 02:33:18.523209 2038 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7\": not found" containerID="c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7" Dec 13 02:33:18.523240 kubelet[2038]: I1213 02:33:18.523219 2038 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7"} err="failed to get container status \"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4f13e4ad9439fc5e4b3d10b80c490eec9e2463ae1a979f1446792d31e495be7\": not found" Dec 13 02:33:18.523240 kubelet[2038]: I1213 02:33:18.523224 2038 scope.go:117] "RemoveContainer" containerID="89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af" Dec 13 02:33:18.523297 env[1654]: time="2024-12-13T02:33:18.523278378Z" level=error msg="ContainerStatus for \"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af\": not found" Dec 13 02:33:18.523359 kubelet[2038]: E1213 02:33:18.523353 2038 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af\": not found" containerID="89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af" Dec 13 02:33:18.523396 kubelet[2038]: I1213 02:33:18.523383 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af"} err="failed to get container status \"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af\": rpc error: code = NotFound desc = an error occurred when try to find container \"89c8fdd981417d931a6f644a6c19fb5cdc9b33c3d6435bdb26732f6b6aa629af\": not found" Dec 13 02:33:18.523396 kubelet[2038]: I1213 02:33:18.523390 2038 scope.go:117] "RemoveContainer" containerID="9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603" Dec 13 02:33:18.523513 
env[1654]: time="2024-12-13T02:33:18.523472852Z" level=error msg="ContainerStatus for \"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603\": not found" Dec 13 02:33:18.523583 kubelet[2038]: E1213 02:33:18.523563 2038 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603\": not found" containerID="9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603" Dec 13 02:33:18.523606 kubelet[2038]: I1213 02:33:18.523589 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603"} err="failed to get container status \"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e31ce4b818d0aaca1760990bbdb7a20d3d469afcb95ac47a38d61a6ddbbe603\": not found" Dec 13 02:33:19.170367 kubelet[2038]: E1213 02:33:19.170230 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:19.347603 kubelet[2038]: I1213 02:33:19.347502 2038 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1a7104ce-d69c-4f97-9d4d-e9fda466ad06" path="/var/lib/kubelet/pods/1a7104ce-d69c-4f97-9d4d-e9fda466ad06/volumes" Dec 13 02:33:19.478927 kubelet[2038]: I1213 02:33:19.478705 2038 topology_manager.go:215] "Topology Admit Handler" podUID="609ef7d8-755d-45ca-9299-2a0446e5f09e" podNamespace="kube-system" podName="cilium-operator-5cc964979-74ph2" Dec 13 02:33:19.478927 kubelet[2038]: E1213 02:33:19.478805 2038 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="1a7104ce-d69c-4f97-9d4d-e9fda466ad06" containerName="apply-sysctl-overwrites" Dec 13 02:33:19.478927 kubelet[2038]: E1213 02:33:19.478837 2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a7104ce-d69c-4f97-9d4d-e9fda466ad06" containerName="mount-bpf-fs" Dec 13 02:33:19.478927 kubelet[2038]: E1213 02:33:19.478857 2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a7104ce-d69c-4f97-9d4d-e9fda466ad06" containerName="clean-cilium-state" Dec 13 02:33:19.478927 kubelet[2038]: E1213 02:33:19.478877 2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a7104ce-d69c-4f97-9d4d-e9fda466ad06" containerName="cilium-agent" Dec 13 02:33:19.478927 kubelet[2038]: E1213 02:33:19.478898 2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a7104ce-d69c-4f97-9d4d-e9fda466ad06" containerName="mount-cgroup" Dec 13 02:33:19.479710 kubelet[2038]: I1213 02:33:19.478948 2038 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7104ce-d69c-4f97-9d4d-e9fda466ad06" containerName="cilium-agent" Dec 13 02:33:19.482828 kubelet[2038]: I1213 02:33:19.482736 2038 topology_manager.go:215] "Topology Admit Handler" podUID="4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" podNamespace="kube-system" podName="cilium-6p6lc" Dec 13 02:33:19.591817 kubelet[2038]: I1213 02:33:19.591709 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-host-proc-sys-kernel\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.591817 kubelet[2038]: I1213 02:33:19.591827 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glh9g\" (UniqueName: \"kubernetes.io/projected/609ef7d8-755d-45ca-9299-2a0446e5f09e-kube-api-access-glh9g\") pod 
\"cilium-operator-5cc964979-74ph2\" (UID: \"609ef7d8-755d-45ca-9299-2a0446e5f09e\") " pod="kube-system/cilium-operator-5cc964979-74ph2" Dec 13 02:33:19.592187 kubelet[2038]: I1213 02:33:19.591982 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-bpf-maps\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.592187 kubelet[2038]: I1213 02:33:19.592103 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cni-path\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.592187 kubelet[2038]: I1213 02:33:19.592180 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/609ef7d8-755d-45ca-9299-2a0446e5f09e-cilium-config-path\") pod \"cilium-operator-5cc964979-74ph2\" (UID: \"609ef7d8-755d-45ca-9299-2a0446e5f09e\") " pod="kube-system/cilium-operator-5cc964979-74ph2" Dec 13 02:33:19.592578 kubelet[2038]: I1213 02:33:19.592342 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-ipsec-secrets\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.592578 kubelet[2038]: I1213 02:33:19.592418 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-host-proc-sys-net\") pod \"cilium-6p6lc\" (UID: 
\"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.592796 kubelet[2038]: I1213 02:33:19.592558 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-hubble-tls\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.592796 kubelet[2038]: I1213 02:33:19.592668 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-clustermesh-secrets\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.592796 kubelet[2038]: I1213 02:33:19.592735 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-config-path\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.592796 kubelet[2038]: I1213 02:33:19.592796 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-cgroup\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.593199 kubelet[2038]: I1213 02:33:19.592951 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-etc-cni-netd\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.593199 kubelet[2038]: I1213 
02:33:19.593066 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-hostproc\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.593199 kubelet[2038]: I1213 02:33:19.593179 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-xtables-lock\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.593543 kubelet[2038]: I1213 02:33:19.593257 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmg8s\" (UniqueName: \"kubernetes.io/projected/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-kube-api-access-wmg8s\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.593543 kubelet[2038]: I1213 02:33:19.593349 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-run\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.593543 kubelet[2038]: I1213 02:33:19.593410 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-lib-modules\") pod \"cilium-6p6lc\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " pod="kube-system/cilium-6p6lc" Dec 13 02:33:19.635366 kubelet[2038]: E1213 02:33:19.635264 2038 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path 
cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-wmg8s lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-6p6lc" podUID="4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" Dec 13 02:33:19.785373 env[1654]: time="2024-12-13T02:33:19.785143719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-74ph2,Uid:609ef7d8-755d-45ca-9299-2a0446e5f09e,Namespace:kube-system,Attempt:0,}" Dec 13 02:33:19.808155 env[1654]: time="2024-12-13T02:33:19.808087710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:33:19.808155 env[1654]: time="2024-12-13T02:33:19.808149646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:33:19.808244 env[1654]: time="2024-12-13T02:33:19.808160191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:33:19.808286 env[1654]: time="2024-12-13T02:33:19.808270818Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30afa2a2fe779d0e6dde41ccaed0e106bd332d791ea14beaf40f588178815c08 pid=3808 runtime=io.containerd.runc.v2 Dec 13 02:33:19.837053 env[1654]: time="2024-12-13T02:33:19.837025376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-74ph2,Uid:609ef7d8-755d-45ca-9299-2a0446e5f09e,Namespace:kube-system,Attempt:0,} returns sandbox id \"30afa2a2fe779d0e6dde41ccaed0e106bd332d791ea14beaf40f588178815c08\"" Dec 13 02:33:19.837777 env[1654]: time="2024-12-13T02:33:19.837765523Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:33:20.171209 kubelet[2038]: E1213 02:33:20.171095 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:20.704176 kubelet[2038]: I1213 02:33:20.704067 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-cgroup\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.704176 kubelet[2038]: I1213 02:33:20.704164 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-hostproc\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.704565 kubelet[2038]: I1213 02:33:20.704226 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cni-path\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.704565 kubelet[2038]: I1213 02:33:20.704250 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.704565 kubelet[2038]: I1213 02:33:20.704278 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-etc-cni-netd\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.704565 kubelet[2038]: I1213 02:33:20.704291 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-hostproc" (OuterVolumeSpecName: "hostproc") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.704565 kubelet[2038]: I1213 02:33:20.704353 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.705109 kubelet[2038]: I1213 02:33:20.704387 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cni-path" (OuterVolumeSpecName: "cni-path") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.705109 kubelet[2038]: I1213 02:33:20.704433 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-lib-modules\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.705109 kubelet[2038]: I1213 02:33:20.704513 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-config-path\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.705109 kubelet[2038]: I1213 02:33:20.704514 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.705109 kubelet[2038]: I1213 02:33:20.704574 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-bpf-maps\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.705109 kubelet[2038]: I1213 02:33:20.704629 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-host-proc-sys-net\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.706109 kubelet[2038]: I1213 02:33:20.704693 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-ipsec-secrets\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.706109 kubelet[2038]: I1213 02:33:20.704684 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.706109 kubelet[2038]: I1213 02:33:20.704753 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-run\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.706109 kubelet[2038]: I1213 02:33:20.704762 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.706109 kubelet[2038]: I1213 02:33:20.704818 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-clustermesh-secrets\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.706321 kubelet[2038]: I1213 02:33:20.704889 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.706321 kubelet[2038]: I1213 02:33:20.704919 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmg8s\" (UniqueName: \"kubernetes.io/projected/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-kube-api-access-wmg8s\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.706321 kubelet[2038]: I1213 02:33:20.705038 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-host-proc-sys-kernel\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.706321 kubelet[2038]: I1213 02:33:20.705111 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-hubble-tls\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.706321 kubelet[2038]: I1213 02:33:20.705168 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-xtables-lock\") pod \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\" (UID: \"4428bcfb-4b80-4dbf-8833-c110b6b6fcdb\") " Dec 13 02:33:20.706408 kubelet[2038]: I1213 02:33:20.705206 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.706408 kubelet[2038]: I1213 02:33:20.705257 2038 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-hostproc\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.706408 kubelet[2038]: I1213 02:33:20.705295 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-cgroup\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.706408 kubelet[2038]: I1213 02:33:20.705351 2038 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cni-path\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.706408 kubelet[2038]: I1213 02:33:20.705382 2038 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-etc-cni-netd\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.706408 kubelet[2038]: I1213 02:33:20.705373 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.706408 kubelet[2038]: I1213 02:33:20.705415 2038 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-lib-modules\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.706542 kubelet[2038]: I1213 02:33:20.705447 2038 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-host-proc-sys-net\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.706542 kubelet[2038]: I1213 02:33:20.705480 2038 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-bpf-maps\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.706542 kubelet[2038]: I1213 02:33:20.705512 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-run\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.706889 kubelet[2038]: I1213 02:33:20.706855 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:33:20.707622 kubelet[2038]: I1213 02:33:20.707586 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-kube-api-access-wmg8s" (OuterVolumeSpecName: "kube-api-access-wmg8s") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "kube-api-access-wmg8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:33:20.707770 kubelet[2038]: I1213 02:33:20.707740 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:33:20.707770 kubelet[2038]: I1213 02:33:20.707739 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:33:20.707842 kubelet[2038]: I1213 02:33:20.707816 2038 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" (UID: "4428bcfb-4b80-4dbf-8833-c110b6b6fcdb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:33:20.708794 systemd[1]: var-lib-kubelet-pods-4428bcfb\x2d4b80\x2d4dbf\x2d8833\x2dc110b6b6fcdb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwmg8s.mount: Deactivated successfully. Dec 13 02:33:20.708897 systemd[1]: var-lib-kubelet-pods-4428bcfb\x2d4b80\x2d4dbf\x2d8833\x2dc110b6b6fcdb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:33:20.708945 systemd[1]: var-lib-kubelet-pods-4428bcfb\x2d4b80\x2d4dbf\x2d8833\x2dc110b6b6fcdb-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Dec 13 02:33:20.708990 systemd[1]: var-lib-kubelet-pods-4428bcfb\x2d4b80\x2d4dbf\x2d8833\x2dc110b6b6fcdb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:33:20.806651 kubelet[2038]: I1213 02:33:20.806557 2038 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-xtables-lock\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.806651 kubelet[2038]: I1213 02:33:20.806614 2038 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-host-proc-sys-kernel\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.806651 kubelet[2038]: I1213 02:33:20.806637 2038 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-hubble-tls\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.806651 kubelet[2038]: I1213 02:33:20.806659 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-config-path\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.807070 kubelet[2038]: I1213 02:33:20.806678 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-cilium-ipsec-secrets\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.807070 kubelet[2038]: I1213 02:33:20.806699 2038 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-clustermesh-secrets\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:20.807070 kubelet[2038]: I1213 02:33:20.806718 2038 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wmg8s\" 
(UniqueName: \"kubernetes.io/projected/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb-kube-api-access-wmg8s\") on node \"10.67.80.21\" DevicePath \"\"" Dec 13 02:33:21.172056 kubelet[2038]: E1213 02:33:21.171976 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:33:21.222532 kubelet[2038]: E1213 02:33:21.222460 2038 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:33:21.564691 kubelet[2038]: I1213 02:33:21.564536 2038 topology_manager.go:215] "Topology Admit Handler" podUID="e133eb16-25c4-4c3d-a1e2-5eada1e9dd28" podNamespace="kube-system" podName="cilium-22rfc" Dec 13 02:33:21.624267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345431341.mount: Deactivated successfully. Dec 13 02:33:21.712997 kubelet[2038]: I1213 02:33:21.712954 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-cilium-ipsec-secrets\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.712997 kubelet[2038]: I1213 02:33:21.712989 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-host-proc-sys-net\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.712997 kubelet[2038]: I1213 02:33:21.713002 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-etc-cni-netd\") pod \"cilium-22rfc\" (UID: 
\"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.713111 kubelet[2038]: I1213 02:33:21.713012 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-cilium-config-path\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.713111 kubelet[2038]: I1213 02:33:21.713026 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-lib-modules\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.713111 kubelet[2038]: I1213 02:33:21.713037 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-hubble-tls\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.713111 kubelet[2038]: I1213 02:33:21.713048 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-xtables-lock\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.713111 kubelet[2038]: I1213 02:33:21.713083 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-clustermesh-secrets\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.713111 kubelet[2038]: I1213 
02:33:21.713108 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-cilium-cgroup\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.713215 kubelet[2038]: I1213 02:33:21.713119 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-cni-path\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.713215 kubelet[2038]: I1213 02:33:21.713131 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-bpf-maps\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.713215 kubelet[2038]: I1213 02:33:21.713143 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxlcm\" (UniqueName: \"kubernetes.io/projected/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-kube-api-access-pxlcm\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.713215 kubelet[2038]: I1213 02:33:21.713155 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-cilium-run\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc" Dec 13 02:33:21.713215 kubelet[2038]: I1213 02:33:21.713164 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-hostproc\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc"
Dec 13 02:33:21.713215 kubelet[2038]: I1213 02:33:21.713196 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e133eb16-25c4-4c3d-a1e2-5eada1e9dd28-host-proc-sys-kernel\") pod \"cilium-22rfc\" (UID: \"e133eb16-25c4-4c3d-a1e2-5eada1e9dd28\") " pod="kube-system/cilium-22rfc"
Dec 13 02:33:21.870151 env[1654]: time="2024-12-13T02:33:21.870094050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22rfc,Uid:e133eb16-25c4-4c3d-a1e2-5eada1e9dd28,Namespace:kube-system,Attempt:0,}"
Dec 13 02:33:21.876835 env[1654]: time="2024-12-13T02:33:21.876744776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:33:21.876835 env[1654]: time="2024-12-13T02:33:21.876781535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:33:21.876835 env[1654]: time="2024-12-13T02:33:21.876788082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:33:21.876955 env[1654]: time="2024-12-13T02:33:21.876887536Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e pid=3858 runtime=io.containerd.runc.v2
Dec 13 02:33:21.893988 env[1654]: time="2024-12-13T02:33:21.893942721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22rfc,Uid:e133eb16-25c4-4c3d-a1e2-5eada1e9dd28,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e\""
Dec 13 02:33:21.895277 env[1654]: time="2024-12-13T02:33:21.895260856Z" level=info msg="CreateContainer within sandbox \"0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:33:21.899533 env[1654]: time="2024-12-13T02:33:21.899517595Z" level=info msg="CreateContainer within sandbox \"0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2c17ca047bc92125db2a7e055fd73b9b7e82bfd3821fee1511b68b3b2f6f8de2\""
Dec 13 02:33:21.899917 env[1654]: time="2024-12-13T02:33:21.899828583Z" level=info msg="StartContainer for \"2c17ca047bc92125db2a7e055fd73b9b7e82bfd3821fee1511b68b3b2f6f8de2\""
Dec 13 02:33:21.921776 env[1654]: time="2024-12-13T02:33:21.921723582Z" level=info msg="StartContainer for \"2c17ca047bc92125db2a7e055fd73b9b7e82bfd3821fee1511b68b3b2f6f8de2\" returns successfully"
Dec 13 02:33:22.086003 env[1654]: time="2024-12-13T02:33:22.085841284Z" level=info msg="shim disconnected" id=2c17ca047bc92125db2a7e055fd73b9b7e82bfd3821fee1511b68b3b2f6f8de2
Dec 13 02:33:22.086003 env[1654]: time="2024-12-13T02:33:22.085968993Z" level=warning msg="cleaning up after shim disconnected" id=2c17ca047bc92125db2a7e055fd73b9b7e82bfd3821fee1511b68b3b2f6f8de2 namespace=k8s.io
Dec 13 02:33:22.086003 env[1654]: time="2024-12-13T02:33:22.086001230Z" level=info msg="cleaning up dead shim"
Dec 13 02:33:22.094099 env[1654]: time="2024-12-13T02:33:22.094054988Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3940 runtime=io.containerd.runc.v2\n"
Dec 13 02:33:22.097566 env[1654]: time="2024-12-13T02:33:22.097515413Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:33:22.098088 env[1654]: time="2024-12-13T02:33:22.098074910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:33:22.098684 env[1654]: time="2024-12-13T02:33:22.098640157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:33:22.099007 env[1654]: time="2024-12-13T02:33:22.098961171Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 02:33:22.099901 env[1654]: time="2024-12-13T02:33:22.099858695Z" level=info msg="CreateContainer within sandbox \"30afa2a2fe779d0e6dde41ccaed0e106bd332d791ea14beaf40f588178815c08\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 02:33:22.103059 env[1654]: time="2024-12-13T02:33:22.103017991Z" level=info msg="CreateContainer within sandbox \"30afa2a2fe779d0e6dde41ccaed0e106bd332d791ea14beaf40f588178815c08\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0f2e50bb92eaba30873f323da2ed4b8a7fbccbf6113e14836d316649451e1824\""
Dec 13 02:33:22.103209 env[1654]: time="2024-12-13T02:33:22.103174812Z" level=info msg="StartContainer for \"0f2e50bb92eaba30873f323da2ed4b8a7fbccbf6113e14836d316649451e1824\""
Dec 13 02:33:22.124738 env[1654]: time="2024-12-13T02:33:22.124631816Z" level=info msg="StartContainer for \"0f2e50bb92eaba30873f323da2ed4b8a7fbccbf6113e14836d316649451e1824\" returns successfully"
Dec 13 02:33:22.173002 kubelet[2038]: E1213 02:33:22.172950 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:22.262713 kubelet[2038]: I1213 02:33:22.262697 2038 setters.go:568] "Node became not ready" node="10.67.80.21" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:33:22Z","lastTransitionTime":"2024-12-13T02:33:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 02:33:22.540808 env[1654]: time="2024-12-13T02:33:22.540550940Z" level=info msg="CreateContainer within sandbox \"0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:33:22.544726 kubelet[2038]: I1213 02:33:22.544635 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-74ph2" podStartSLOduration=1.283024656 podStartE2EDuration="3.54452996s" podCreationTimestamp="2024-12-13 02:33:19 +0000 UTC" firstStartedPulling="2024-12-13 02:33:19.83762477 +0000 UTC m=+49.175399538" lastFinishedPulling="2024-12-13 02:33:22.099130073 +0000 UTC m=+51.436904842" observedRunningTime="2024-12-13 02:33:22.543950429 +0000 UTC m=+51.881725266" watchObservedRunningTime="2024-12-13 02:33:22.54452996 +0000 UTC m=+51.882304778"
Dec 13 02:33:22.554486 env[1654]: time="2024-12-13T02:33:22.554354307Z" level=info msg="CreateContainer within sandbox \"0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a2bd01675c69a38fcd0d1fad867395d880aad2a1b558356991ee89f1a1d4a95e\""
Dec 13 02:33:22.555350 env[1654]: time="2024-12-13T02:33:22.555175170Z" level=info msg="StartContainer for \"a2bd01675c69a38fcd0d1fad867395d880aad2a1b558356991ee89f1a1d4a95e\""
Dec 13 02:33:22.581917 env[1654]: time="2024-12-13T02:33:22.581892807Z" level=info msg="StartContainer for \"a2bd01675c69a38fcd0d1fad867395d880aad2a1b558356991ee89f1a1d4a95e\" returns successfully"
Dec 13 02:33:22.601017 env[1654]: time="2024-12-13T02:33:22.600956346Z" level=info msg="shim disconnected" id=a2bd01675c69a38fcd0d1fad867395d880aad2a1b558356991ee89f1a1d4a95e
Dec 13 02:33:22.601017 env[1654]: time="2024-12-13T02:33:22.600989199Z" level=warning msg="cleaning up after shim disconnected" id=a2bd01675c69a38fcd0d1fad867395d880aad2a1b558356991ee89f1a1d4a95e namespace=k8s.io
Dec 13 02:33:22.601017 env[1654]: time="2024-12-13T02:33:22.600997072Z" level=info msg="cleaning up dead shim"
Dec 13 02:33:22.606484 env[1654]: time="2024-12-13T02:33:22.606436752Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4051 runtime=io.containerd.runc.v2\n"
Dec 13 02:33:23.173501 kubelet[2038]: E1213 02:33:23.173388 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:23.347640 kubelet[2038]: I1213 02:33:23.347534 2038 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4428bcfb-4b80-4dbf-8833-c110b6b6fcdb" path="/var/lib/kubelet/pods/4428bcfb-4b80-4dbf-8833-c110b6b6fcdb/volumes"
Dec 13 02:33:23.546892 env[1654]: time="2024-12-13T02:33:23.546652380Z" level=info msg="CreateContainer within sandbox \"0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:33:23.557285 env[1654]: time="2024-12-13T02:33:23.557241459Z" level=info msg="CreateContainer within sandbox \"0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd3464f9dfb29e9bc2d328cd94149751d793fd53e9f7b6544853af0c582b172d\""
Dec 13 02:33:23.557541 env[1654]: time="2024-12-13T02:33:23.557485603Z" level=info msg="StartContainer for \"fd3464f9dfb29e9bc2d328cd94149751d793fd53e9f7b6544853af0c582b172d\""
Dec 13 02:33:23.558901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219964649.mount: Deactivated successfully.
Dec 13 02:33:23.582813 env[1654]: time="2024-12-13T02:33:23.582748716Z" level=info msg="StartContainer for \"fd3464f9dfb29e9bc2d328cd94149751d793fd53e9f7b6544853af0c582b172d\" returns successfully"
Dec 13 02:33:23.594218 env[1654]: time="2024-12-13T02:33:23.594188667Z" level=info msg="shim disconnected" id=fd3464f9dfb29e9bc2d328cd94149751d793fd53e9f7b6544853af0c582b172d
Dec 13 02:33:23.594218 env[1654]: time="2024-12-13T02:33:23.594218405Z" level=warning msg="cleaning up after shim disconnected" id=fd3464f9dfb29e9bc2d328cd94149751d793fd53e9f7b6544853af0c582b172d namespace=k8s.io
Dec 13 02:33:23.594362 env[1654]: time="2024-12-13T02:33:23.594225217Z" level=info msg="cleaning up dead shim"
Dec 13 02:33:23.598408 env[1654]: time="2024-12-13T02:33:23.598389836Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4108 runtime=io.containerd.runc.v2\n"
Dec 13 02:33:23.821222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd3464f9dfb29e9bc2d328cd94149751d793fd53e9f7b6544853af0c582b172d-rootfs.mount: Deactivated successfully.
Dec 13 02:33:24.174040 kubelet[2038]: E1213 02:33:24.173930 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:24.555051 env[1654]: time="2024-12-13T02:33:24.554734137Z" level=info msg="CreateContainer within sandbox \"0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:33:24.570439 env[1654]: time="2024-12-13T02:33:24.570363078Z" level=info msg="CreateContainer within sandbox \"0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f696362fd316aa33df672eee0dad89eb59b98810e9bce81d9f14f5bc7a8e04a1\""
Dec 13 02:33:24.570730 env[1654]: time="2024-12-13T02:33:24.570693309Z" level=info msg="StartContainer for \"f696362fd316aa33df672eee0dad89eb59b98810e9bce81d9f14f5bc7a8e04a1\""
Dec 13 02:33:24.591123 env[1654]: time="2024-12-13T02:33:24.591057196Z" level=info msg="StartContainer for \"f696362fd316aa33df672eee0dad89eb59b98810e9bce81d9f14f5bc7a8e04a1\" returns successfully"
Dec 13 02:33:24.599452 env[1654]: time="2024-12-13T02:33:24.599422334Z" level=info msg="shim disconnected" id=f696362fd316aa33df672eee0dad89eb59b98810e9bce81d9f14f5bc7a8e04a1
Dec 13 02:33:24.599452 env[1654]: time="2024-12-13T02:33:24.599452356Z" level=warning msg="cleaning up after shim disconnected" id=f696362fd316aa33df672eee0dad89eb59b98810e9bce81d9f14f5bc7a8e04a1 namespace=k8s.io
Dec 13 02:33:24.599568 env[1654]: time="2024-12-13T02:33:24.599458726Z" level=info msg="cleaning up dead shim"
Dec 13 02:33:24.602895 env[1654]: time="2024-12-13T02:33:24.602880008Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4162 runtime=io.containerd.runc.v2\n"
Dec 13 02:33:24.822992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f696362fd316aa33df672eee0dad89eb59b98810e9bce81d9f14f5bc7a8e04a1-rootfs.mount: Deactivated successfully.
Dec 13 02:33:25.175347 kubelet[2038]: E1213 02:33:25.175210 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:25.564275 env[1654]: time="2024-12-13T02:33:25.564051794Z" level=info msg="CreateContainer within sandbox \"0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:33:25.573214 env[1654]: time="2024-12-13T02:33:25.573167051Z" level=info msg="CreateContainer within sandbox \"0b78288b128ca11bca3a5dd1d1a4e2302626fe0eaf3f92a184cdff596628095e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"55dcd9c673d85b117a2436513346cf78a00c3376e8fb7c896e8eb938c706aceb\""
Dec 13 02:33:25.573476 env[1654]: time="2024-12-13T02:33:25.573431224Z" level=info msg="StartContainer for \"55dcd9c673d85b117a2436513346cf78a00c3376e8fb7c896e8eb938c706aceb\""
Dec 13 02:33:25.596318 env[1654]: time="2024-12-13T02:33:25.596261667Z" level=info msg="StartContainer for \"55dcd9c673d85b117a2436513346cf78a00c3376e8fb7c896e8eb938c706aceb\" returns successfully"
Dec 13 02:33:25.746324 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:33:26.176255 kubelet[2038]: E1213 02:33:26.176139 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:26.586516 kubelet[2038]: I1213 02:33:26.586285 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-22rfc" podStartSLOduration=5.586189576 podStartE2EDuration="5.586189576s" podCreationTimestamp="2024-12-13 02:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:33:26.586162944 +0000 UTC m=+55.923937785" watchObservedRunningTime="2024-12-13 02:33:26.586189576 +0000 UTC m=+55.923964413"
Dec 13 02:33:27.176995 kubelet[2038]: E1213 02:33:27.176943 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:28.177933 kubelet[2038]: E1213 02:33:28.177820 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:28.631792 systemd-networkd[1404]: lxc_health: Link UP
Dec 13 02:33:28.662201 systemd-networkd[1404]: lxc_health: Gained carrier
Dec 13 02:33:28.662338 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:33:29.179016 kubelet[2038]: E1213 02:33:29.178960 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:30.071525 systemd-networkd[1404]: lxc_health: Gained IPv6LL
Dec 13 02:33:30.179844 kubelet[2038]: E1213 02:33:30.179826 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:31.136638 kubelet[2038]: E1213 02:33:31.136524 2038 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:31.160824 env[1654]: time="2024-12-13T02:33:31.160693212Z" level=info msg="StopPodSandbox for \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\""
Dec 13 02:33:31.161699 env[1654]: time="2024-12-13T02:33:31.160927737Z" level=info msg="TearDown network for sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" successfully"
Dec 13 02:33:31.161699 env[1654]: time="2024-12-13T02:33:31.161037027Z" level=info msg="StopPodSandbox for \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" returns successfully"
Dec 13 02:33:31.161967 env[1654]: time="2024-12-13T02:33:31.161885698Z" level=info msg="RemovePodSandbox for \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\""
Dec 13 02:33:31.162070 env[1654]: time="2024-12-13T02:33:31.161983912Z" level=info msg="Forcibly stopping sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\""
Dec 13 02:33:31.162210 env[1654]: time="2024-12-13T02:33:31.162165785Z" level=info msg="TearDown network for sandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" successfully"
Dec 13 02:33:31.167033 env[1654]: time="2024-12-13T02:33:31.166931529Z" level=info msg="RemovePodSandbox \"06e5b0cf234f3561cb05a0924c0a08c7dde22e3580aca8bf47920f983de95e0d\" returns successfully"
Dec 13 02:33:31.181057 kubelet[2038]: E1213 02:33:31.180971 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:32.181849 kubelet[2038]: E1213 02:33:32.181739 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:33.182392 kubelet[2038]: E1213 02:33:33.182270 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:34.182679 kubelet[2038]: E1213 02:33:34.182564 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:33:35.182890 kubelet[2038]: E1213 02:33:35.182809 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"