Feb 13 07:35:39.550607 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024 Feb 13 07:35:39.550620 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 07:35:39.550627 kernel: BIOS-provided physical RAM map: Feb 13 07:35:39.550631 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Feb 13 07:35:39.550634 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Feb 13 07:35:39.550638 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Feb 13 07:35:39.550642 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Feb 13 07:35:39.550646 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Feb 13 07:35:39.550650 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000082589fff] usable Feb 13 07:35:39.550654 kernel: BIOS-e820: [mem 0x000000008258a000-0x000000008258afff] ACPI NVS Feb 13 07:35:39.550658 kernel: BIOS-e820: [mem 0x000000008258b000-0x000000008258bfff] reserved Feb 13 07:35:39.550662 kernel: BIOS-e820: [mem 0x000000008258c000-0x000000008afccfff] usable Feb 13 07:35:39.550666 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Feb 13 07:35:39.550669 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Feb 13 07:35:39.550674 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Feb 13 07:35:39.550679 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Feb 13 07:35:39.550683 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Feb 13 07:35:39.550687 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Feb 13 07:35:39.550691 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 07:35:39.550695 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Feb 13 07:35:39.550699 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Feb 13 07:35:39.550703 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Feb 13 07:35:39.550708 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Feb 13 07:35:39.550712 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Feb 13 07:35:39.550716 kernel: NX (Execute Disable) protection: active Feb 13 07:35:39.550720 kernel: SMBIOS 3.2.1 present. 
Feb 13 07:35:39.550725 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Feb 13 07:35:39.550729 kernel: tsc: Detected 3400.000 MHz processor Feb 13 07:35:39.550733 kernel: tsc: Detected 3399.906 MHz TSC Feb 13 07:35:39.550737 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 07:35:39.550742 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 07:35:39.550746 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Feb 13 07:35:39.550750 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 07:35:39.550755 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Feb 13 07:35:39.550759 kernel: Using GB pages for direct mapping Feb 13 07:35:39.550763 kernel: ACPI: Early table checksum verification disabled Feb 13 07:35:39.550768 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Feb 13 07:35:39.550772 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Feb 13 07:35:39.550776 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Feb 13 07:35:39.550781 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Feb 13 07:35:39.550787 kernel: ACPI: FACS 0x000000008C66CF80 000040 Feb 13 07:35:39.550791 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Feb 13 07:35:39.550796 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Feb 13 07:35:39.550801 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Feb 13 07:35:39.550806 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Feb 13 07:35:39.550810 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Feb 13 07:35:39.550815 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Feb 13 07:35:39.550819 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Feb 13 07:35:39.550824 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Feb 13 07:35:39.550828 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 07:35:39.550833 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Feb 13 07:35:39.550838 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Feb 13 07:35:39.550843 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 07:35:39.550847 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 07:35:39.550852 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Feb 13 07:35:39.550856 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Feb 13 07:35:39.550861 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Feb 13 07:35:39.550865 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Feb 13 07:35:39.550871 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Feb 13 07:35:39.550875 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Feb 13 07:35:39.550880 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Feb 13 07:35:39.550884 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Feb 13 
07:35:39.550889 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Feb 13 07:35:39.550893 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Feb 13 07:35:39.550898 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Feb 13 07:35:39.550902 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Feb 13 07:35:39.550907 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Feb 13 07:35:39.550912 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000) Feb 13 07:35:39.550917 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Feb 13 07:35:39.550921 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Feb 13 07:35:39.550926 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Feb 13 07:35:39.550930 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Feb 13 07:35:39.550935 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Feb 13 07:35:39.550939 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Feb 13 07:35:39.550944 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Feb 13 07:35:39.550949 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Feb 13 07:35:39.550954 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Feb 13 07:35:39.550958 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Feb 13 07:35:39.550963 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Feb 13 07:35:39.550967 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Feb 13 07:35:39.550972 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Feb 13 07:35:39.550976 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Feb 13 07:35:39.550981 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Feb 13 07:35:39.550985 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Feb 13 07:35:39.550991 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Feb 13 07:35:39.550995 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Feb 13 07:35:39.551000 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Feb 13 07:35:39.551004 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Feb 13 07:35:39.551009 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Feb 13 07:35:39.551013 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Feb 13 07:35:39.551018 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Feb 13 07:35:39.551022 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Feb 13 07:35:39.551027 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Feb 13 07:35:39.551032 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Feb 13 07:35:39.551037 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Feb 13 07:35:39.551041 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Feb 13 07:35:39.551046 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Feb 13 07:35:39.551050 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Feb 13 
07:35:39.551055 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Feb 13 07:35:39.551059 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Feb 13 07:35:39.551064 kernel: No NUMA configuration found Feb 13 07:35:39.551068 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Feb 13 07:35:39.551074 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Feb 13 07:35:39.551078 kernel: Zone ranges: Feb 13 07:35:39.551083 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 07:35:39.551088 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 07:35:39.551092 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Feb 13 07:35:39.551096 kernel: Movable zone start for each node Feb 13 07:35:39.551101 kernel: Early memory node ranges Feb 13 07:35:39.551106 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Feb 13 07:35:39.551110 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Feb 13 07:35:39.551115 kernel: node 0: [mem 0x0000000040400000-0x0000000082589fff] Feb 13 07:35:39.551120 kernel: node 0: [mem 0x000000008258c000-0x000000008afccfff] Feb 13 07:35:39.551124 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Feb 13 07:35:39.551129 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Feb 13 07:35:39.551134 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Feb 13 07:35:39.551138 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Feb 13 07:35:39.551143 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 07:35:39.551150 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Feb 13 07:35:39.551156 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Feb 13 07:35:39.551161 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Feb 13 07:35:39.551166 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Feb 13 07:35:39.551172 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Feb 13 07:35:39.551177 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Feb 13 07:35:39.551182 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Feb 13 07:35:39.551187 kernel: ACPI: PM-Timer IO Port: 0x1808 Feb 13 07:35:39.551191 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Feb 13 07:35:39.551196 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Feb 13 07:35:39.551201 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Feb 13 07:35:39.551207 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Feb 13 07:35:39.551212 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Feb 13 07:35:39.551217 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Feb 13 07:35:39.551221 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Feb 13 07:35:39.551226 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Feb 13 07:35:39.551231 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Feb 13 07:35:39.551236 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Feb 13 07:35:39.551241 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Feb 13 07:35:39.551246 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Feb 13 07:35:39.551251 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Feb 13 07:35:39.551256 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Feb 13 07:35:39.551261 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Feb 13 07:35:39.551266 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x10] high edge lint[0x1]) Feb 13 07:35:39.551271 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Feb 13 07:35:39.551276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 07:35:39.551280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 07:35:39.551285 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 07:35:39.551290 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 07:35:39.551296 kernel: TSC deadline timer available Feb 13 07:35:39.551301 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Feb 13 07:35:39.551306 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Feb 13 07:35:39.551311 kernel: Booting paravirtualized kernel on bare hardware Feb 13 07:35:39.551316 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 07:35:39.551321 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Feb 13 07:35:39.551325 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144 Feb 13 07:35:39.551330 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152 Feb 13 07:35:39.551335 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 13 07:35:39.551340 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Feb 13 07:35:39.551345 kernel: Policy zone: Normal Feb 13 07:35:39.551351 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 07:35:39.551374 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 07:35:39.551379 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Feb 13 07:35:39.551384 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Feb 13 07:35:39.551389 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 07:35:39.551395 kernel: Memory: 32724720K/33452980K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 728000K reserved, 0K cma-reserved) Feb 13 07:35:39.551414 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 13 07:35:39.551419 kernel: ftrace: allocating 34475 entries in 135 pages Feb 13 07:35:39.551424 kernel: ftrace: allocated 135 pages with 4 groups Feb 13 07:35:39.551429 kernel: rcu: Hierarchical RCU implementation. Feb 13 07:35:39.551434 kernel: rcu: RCU event tracing is enabled. Feb 13 07:35:39.551439 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 13 07:35:39.551444 kernel: Rude variant of Tasks RCU enabled. Feb 13 07:35:39.551449 kernel: Tracing variant of Tasks RCU enabled. Feb 13 07:35:39.551454 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 07:35:39.551459 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 13 07:35:39.551464 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Feb 13 07:35:39.551469 kernel: random: crng init done Feb 13 07:35:39.551474 kernel: Console: colour dummy device 80x25 Feb 13 07:35:39.551479 kernel: printk: console [tty0] enabled Feb 13 07:35:39.551484 kernel: printk: console [ttyS1] enabled Feb 13 07:35:39.551489 kernel: ACPI: Core revision 20210730 Feb 13 07:35:39.551493 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Feb 13 07:35:39.551498 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 07:35:39.551504 kernel: DMAR: Host address width 39 Feb 13 07:35:39.551509 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Feb 13 07:35:39.551514 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Feb 13 07:35:39.551519 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Feb 13 07:35:39.551524 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Feb 13 07:35:39.551528 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Feb 13 07:35:39.551533 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Feb 13 07:35:39.551538 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Feb 13 07:35:39.551543 kernel: x2apic enabled Feb 13 07:35:39.551549 kernel: Switched APIC routing to cluster x2apic. Feb 13 07:35:39.551554 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Feb 13 07:35:39.551559 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Feb 13 07:35:39.551564 kernel: CPU0: Thermal monitoring enabled (TM1) Feb 13 07:35:39.551568 kernel: process: using mwait in idle threads Feb 13 07:35:39.551573 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 13 07:35:39.551578 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 13 07:35:39.551583 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 07:35:39.551588 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Feb 13 07:35:39.551593 kernel: Spectre V2 : Mitigation: Enhanced IBRS Feb 13 07:35:39.551598 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 07:35:39.551603 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 07:35:39.551608 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 07:35:39.551613 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 07:35:39.551617 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Feb 13 07:35:39.551622 kernel: TAA: Mitigation: TSX disabled Feb 13 07:35:39.551627 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Feb 13 07:35:39.551632 kernel: SRBDS: Mitigation: Microcode Feb 13 07:35:39.551637 kernel: GDS: Vulnerable: No microcode Feb 13 07:35:39.551642 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 07:35:39.551647 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 07:35:39.551652 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 07:35:39.551657 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 13 07:35:39.551662 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 13 07:35:39.551666 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 07:35:39.551671 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 13 07:35:39.551676 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 13 07:35:39.551681 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Feb 13 07:35:39.551686 kernel: Freeing SMP alternatives memory: 32K Feb 13 07:35:39.551690 kernel: pid_max: default: 32768 minimum: 301 Feb 13 07:35:39.551695 kernel: LSM: Security Framework initializing Feb 13 07:35:39.551700 kernel: SELinux: Initializing. Feb 13 07:35:39.551706 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 07:35:39.551710 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 07:35:39.551715 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Feb 13 07:35:39.551720 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Feb 13 07:35:39.551725 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Feb 13 07:35:39.551730 kernel: ... version: 4 Feb 13 07:35:39.551735 kernel: ... bit width: 48 Feb 13 07:35:39.551740 kernel: ... generic registers: 4 Feb 13 07:35:39.551744 kernel: ... value mask: 0000ffffffffffff Feb 13 07:35:39.551749 kernel: ... max period: 00007fffffffffff Feb 13 07:35:39.551755 kernel: ... fixed-purpose events: 3 Feb 13 07:35:39.551760 kernel: ... event mask: 000000070000000f Feb 13 07:35:39.551765 kernel: signal: max sigframe size: 2032 Feb 13 07:35:39.551769 kernel: rcu: Hierarchical SRCU implementation. Feb 13 07:35:39.551774 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Feb 13 07:35:39.551779 kernel: smp: Bringing up secondary CPUs ... Feb 13 07:35:39.551784 kernel: x86: Booting SMP configuration: Feb 13 07:35:39.551789 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 Feb 13 07:35:39.551794 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 07:35:39.551800 kernel: #9 #10 #11 #12 #13 #14 #15 Feb 13 07:35:39.551805 kernel: smp: Brought up 1 node, 16 CPUs Feb 13 07:35:39.551809 kernel: smpboot: Max logical packages: 1 Feb 13 07:35:39.551814 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Feb 13 07:35:39.551819 kernel: devtmpfs: initialized Feb 13 07:35:39.551824 kernel: x86/mm: Memory block size: 128MB Feb 13 07:35:39.551829 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8258a000-0x8258afff] (4096 bytes) Feb 13 07:35:39.551834 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Feb 13 07:35:39.551840 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 07:35:39.551844 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 07:35:39.551849 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 07:35:39.551854 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 07:35:39.551859 kernel: audit: initializing netlink subsys (disabled) Feb 13 07:35:39.551864 kernel: audit: type=2000 audit(1707809734.040:1): state=initialized audit_enabled=0 res=1 Feb 13 07:35:39.551869 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 07:35:39.551874 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 07:35:39.551878 kernel: cpuidle: using governor menu Feb 13 07:35:39.551884 kernel: ACPI: bus type PCI registered Feb 13 07:35:39.551889 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 07:35:39.551894 kernel: dca service started, version 1.12.1 Feb 13 07:35:39.551899 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 07:35:39.551903 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820 Feb 13 07:35:39.551908 kernel: PCI: Using configuration type 1 for base access Feb 13 07:35:39.551913 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Feb 13 07:35:39.551918 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 07:35:39.551923 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 07:35:39.551928 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 07:35:39.551933 kernel: ACPI: Added _OSI(Module Device) Feb 13 07:35:39.551938 kernel: ACPI: Added _OSI(Processor Device) Feb 13 07:35:39.551943 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 07:35:39.551948 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 07:35:39.551953 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 13 07:35:39.551957 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 13 07:35:39.551962 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 13 07:35:39.551967 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Feb 13 07:35:39.551973 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:35:39.551978 kernel: ACPI: SSDT 0xFFFF8E08C0212A00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Feb 13 07:35:39.551983 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked Feb 13 07:35:39.551988 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:35:39.551992 kernel: ACPI: SSDT 0xFFFF8E08C1AE0C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Feb 13 07:35:39.551997 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:35:39.552002 kernel: ACPI: SSDT 0xFFFF8E08C1A5B000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Feb 13 07:35:39.552007 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:35:39.552011 kernel: ACPI: SSDT 0xFFFF8E08C1A5F800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Feb 13 07:35:39.552016 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:35:39.552022 kernel: ACPI: SSDT 0xFFFF8E08C014B000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Feb 13 07:35:39.552027 kernel: ACPI: Dynamic OEM Table Load: Feb 13 07:35:39.552032 kernel: ACPI: SSDT 0xFFFF8E08C1AE3C00 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Feb 13 07:35:39.552037 kernel: ACPI: Interpreter enabled Feb 13 07:35:39.552041 kernel: ACPI: PM: (supports S0 S5) Feb 13 07:35:39.552046 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 07:35:39.552051 kernel: HEST: Enabling Firmware First mode for corrected errors. Feb 13 07:35:39.552056 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Feb 13 07:35:39.552061 kernel: HEST: Table parsing has been initialized. Feb 13 07:35:39.552067 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Feb 13 07:35:39.552072 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 07:35:39.552076 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Feb 13 07:35:39.552081 kernel: ACPI: PM: Power Resource [USBC] Feb 13 07:35:39.552086 kernel: ACPI: PM: Power Resource [V0PR] Feb 13 07:35:39.552091 kernel: ACPI: PM: Power Resource [V1PR] Feb 13 07:35:39.552096 kernel: ACPI: PM: Power Resource [V2PR] Feb 13 07:35:39.552101 kernel: ACPI: PM: Power Resource [WRST] Feb 13 07:35:39.552105 kernel: ACPI: PM: Power Resource [FN00] Feb 13 07:35:39.552111 kernel: ACPI: PM: Power Resource [FN01] Feb 13 07:35:39.552116 kernel: ACPI: PM: Power Resource [FN02] Feb 13 07:35:39.552121 kernel: ACPI: PM: Power Resource [FN03] Feb 13 07:35:39.552125 kernel: ACPI: PM: Power Resource [FN04] Feb 13 07:35:39.552130 kernel: ACPI: PM: Power Resource [PIN] Feb 13 07:35:39.552135 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Feb 13 07:35:39.552198 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 07:35:39.552242 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Feb 13 07:35:39.552284 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Feb 13 07:35:39.552291 kernel: PCI host bridge to bus 0000:00 Feb 13 07:35:39.552334 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 07:35:39.552389 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 07:35:39.552443 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 07:35:39.552478 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Feb 13 07:35:39.552514 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Feb 13 07:35:39.552551 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Feb 13 07:35:39.552601 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Feb 13 07:35:39.552650 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Feb 13 07:35:39.552693 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Feb 13 07:35:39.552738 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Feb 13 07:35:39.552781 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Feb 13 07:35:39.552827 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Feb 13 07:35:39.552869 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Feb 13 07:35:39.552914 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Feb 13 07:35:39.552956 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Feb 13 07:35:39.552997 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Feb 13 07:35:39.553042 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Feb 13 07:35:39.553085 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Feb 13 07:35:39.553126 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Feb 13 07:35:39.553169 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Feb 13 07:35:39.553211 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 07:35:39.553257 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Feb 13 07:35:39.553298 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 07:35:39.553343 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Feb 13 07:35:39.553421 kernel: pci 0000:00:16.0: reg 0x10: [mem 
0x9551a000-0x9551afff 64bit] Feb 13 07:35:39.553461 kernel: pci 0000:00:16.0: PME# supported from D3hot Feb 13 07:35:39.553505 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Feb 13 07:35:39.553546 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Feb 13 07:35:39.553586 kernel: pci 0000:00:16.1: PME# supported from D3hot Feb 13 07:35:39.553628 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Feb 13 07:35:39.553671 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Feb 13 07:35:39.553711 kernel: pci 0000:00:16.4: PME# supported from D3hot Feb 13 07:35:39.553754 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Feb 13 07:35:39.553795 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Feb 13 07:35:39.553835 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Feb 13 07:35:39.553874 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Feb 13 07:35:39.553915 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Feb 13 07:35:39.553961 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Feb 13 07:35:39.554003 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Feb 13 07:35:39.554044 kernel: pci 0000:00:17.0: PME# supported from D3hot Feb 13 07:35:39.554088 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Feb 13 07:35:39.554130 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Feb 13 07:35:39.554174 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Feb 13 07:35:39.554217 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Feb 13 07:35:39.554265 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Feb 13 07:35:39.554306 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Feb 13 07:35:39.554352 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Feb 13 07:35:39.554396 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Feb 13 07:35:39.554442 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Feb 13 07:35:39.554484 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Feb 13 07:35:39.554529 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Feb 13 07:35:39.554570 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Feb 13 07:35:39.554617 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Feb 13 07:35:39.554662 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Feb 13 07:35:39.554704 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Feb 13 07:35:39.554745 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Feb 13 07:35:39.554791 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Feb 13 07:35:39.554834 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Feb 13 07:35:39.554880 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Feb 13 07:35:39.554926 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Feb 13 07:35:39.554967 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Feb 13 07:35:39.555009 kernel: pci 0000:01:00.0: PME# supported from D3cold Feb 13 07:35:39.555051 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 07:35:39.555093 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 07:35:39.555139 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Feb 13 07:35:39.555181 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit 
pref] Feb 13 07:35:39.555226 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Feb 13 07:35:39.555267 kernel: pci 0000:01:00.1: PME# supported from D3cold Feb 13 07:35:39.555310 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Feb 13 07:35:39.555351 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Feb 13 07:35:39.555395 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 07:35:39.555437 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 07:35:39.555477 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 07:35:39.555517 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 07:35:39.555566 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Feb 13 07:35:39.555609 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Feb 13 07:35:39.555652 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Feb 13 07:35:39.555694 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Feb 13 07:35:39.555735 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 07:35:39.555776 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 07:35:39.555817 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 07:35:39.555860 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 07:35:39.555909 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Feb 13 07:35:39.555953 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Feb 13 07:35:39.556031 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Feb 13 07:35:39.556094 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Feb 13 07:35:39.556135 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Feb 13 07:35:39.556176 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 07:35:39.556217 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 07:35:39.556260 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 07:35:39.556302 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Feb 13 07:35:39.556348 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Feb 13 07:35:39.556426 kernel: pci 0000:06:00.0: enabling Extended Tags Feb 13 07:35:39.556469 kernel: pci 0000:06:00.0: supports D1 D2 Feb 13 07:35:39.556512 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 07:35:39.556553 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 07:35:39.556595 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 07:35:39.556638 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 07:35:39.556685 kernel: pci_bus 0000:07: extended config space not accessible Feb 13 07:35:39.556734 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Feb 13 07:35:39.556779 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Feb 13 07:35:39.556824 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Feb 13 07:35:39.556868 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Feb 13 07:35:39.556911 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 07:35:39.556958 kernel: pci 0000:07:00.0: supports D1 D2 Feb 13 07:35:39.557004 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 07:35:39.557046 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 07:35:39.557089 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 
07:35:39.557132 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 07:35:39.557139 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Feb 13 07:35:39.557145 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Feb 13 07:35:39.557151 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Feb 13 07:35:39.557156 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Feb 13 07:35:39.557161 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Feb 13 07:35:39.557167 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Feb 13 07:35:39.557172 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Feb 13 07:35:39.557177 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Feb 13 07:35:39.557182 kernel: iommu: Default domain type: Translated Feb 13 07:35:39.557187 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 07:35:39.557231 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Feb 13 07:35:39.557276 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 07:35:39.557321 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Feb 13 07:35:39.557329 kernel: vgaarb: loaded Feb 13 07:35:39.557334 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 13 07:35:39.557339 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 13 07:35:39.557344 kernel: PTP clock support registered Feb 13 07:35:39.557350 kernel: PCI: Using ACPI for IRQ routing Feb 13 07:35:39.557373 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 07:35:39.557378 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Feb 13 07:35:39.557385 kernel: e820: reserve RAM buffer [mem 0x8258a000-0x83ffffff] Feb 13 07:35:39.557390 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Feb 13 07:35:39.557395 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Feb 13 07:35:39.557420 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Feb 13 07:35:39.557425 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Feb 13 07:35:39.557430 kernel: clocksource: Switched to clocksource tsc-early Feb 13 07:35:39.557435 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 07:35:39.557440 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 07:35:39.557445 kernel: pnp: PnP ACPI init Feb 13 07:35:39.557489 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Feb 13 07:35:39.557531 kernel: pnp 00:02: [dma 0 disabled] Feb 13 07:35:39.557571 kernel: pnp 00:03: [dma 0 disabled] Feb 13 07:35:39.557613 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Feb 13 07:35:39.557651 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Feb 13 07:35:39.557691 kernel: system 00:05: [io 0x1854-0x1857] has been reserved Feb 13 07:35:39.557733 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved Feb 13 07:35:39.557769 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved Feb 13 07:35:39.557807 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved Feb 13 07:35:39.557844 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved Feb 13 07:35:39.557880 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved Feb 13 07:35:39.557918 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved Feb 13 07:35:39.557956 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved Feb 13 07:35:39.557994 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] 
could not be reserved Feb 13 07:35:39.558034 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved Feb 13 07:35:39.558070 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved Feb 13 07:35:39.558107 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved Feb 13 07:35:39.558144 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved Feb 13 07:35:39.558180 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved Feb 13 07:35:39.558217 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved Feb 13 07:35:39.558255 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved Feb 13 07:35:39.558296 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved Feb 13 07:35:39.558303 kernel: pnp: PnP ACPI: found 10 devices Feb 13 07:35:39.558309 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 07:35:39.558314 kernel: NET: Registered PF_INET protocol family Feb 13 07:35:39.558319 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 07:35:39.558324 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 13 07:35:39.558330 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 07:35:39.558336 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 07:35:39.558341 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 13 07:35:39.558347 kernel: TCP: Hash tables configured (established 262144 bind 65536) Feb 13 07:35:39.558352 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 07:35:39.558377 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 07:35:39.558382 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 07:35:39.558387 kernel: NET: Registered PF_XDP protocol family Feb 13 07:35:39.558448 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Feb 13 07:35:39.558491 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Feb 13 07:35:39.558534 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Feb 13 07:35:39.558577 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 07:35:39.558620 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 07:35:39.558663 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Feb 13 07:35:39.558705 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Feb 13 07:35:39.558747 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 07:35:39.558789 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Feb 13 07:35:39.558831 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 07:35:39.558872 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Feb 13 07:35:39.558914 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Feb 13 07:35:39.558955 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Feb 13 07:35:39.558995 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Feb 13 07:35:39.559037 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Feb 13 07:35:39.559079 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Feb 13 07:35:39.559119 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Feb 13 07:35:39.559161 kernel: pci 0000:00:1c.0: PCI bridge 
to [bus 05] Feb 13 07:35:39.559203 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Feb 13 07:35:39.559245 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Feb 13 07:35:39.559288 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Feb 13 07:35:39.559329 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Feb 13 07:35:39.559391 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Feb 13 07:35:39.559454 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Feb 13 07:35:39.559491 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Feb 13 07:35:39.559527 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 07:35:39.559562 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 07:35:39.559598 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 07:35:39.559633 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Feb 13 07:35:39.559669 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Feb 13 07:35:39.559710 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Feb 13 07:35:39.559751 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Feb 13 07:35:39.559794 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Feb 13 07:35:39.559833 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Feb 13 07:35:39.559875 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 07:35:39.559914 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Feb 13 07:35:39.559957 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Feb 13 07:35:39.559996 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Feb 13 07:35:39.560037 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Feb 13 07:35:39.560076 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Feb 13 07:35:39.560083 kernel: PCI: CLS 64 bytes, default 64 Feb 13 07:35:39.560089 kernel: DMAR: No ATSR found Feb 13 07:35:39.560094 kernel: DMAR: No SATC found Feb 13 07:35:39.560099 kernel: DMAR: dmar0: Using Queued invalidation Feb 13 07:35:39.560140 kernel: pci 0000:00:00.0: Adding to iommu group 0 Feb 13 07:35:39.560183 kernel: pci 0000:00:01.0: Adding to iommu group 1 Feb 13 07:35:39.560226 kernel: pci 0000:00:08.0: Adding to iommu group 2 Feb 13 07:35:39.560267 kernel: pci 0000:00:12.0: Adding to iommu group 3 Feb 13 07:35:39.560308 kernel: pci 0000:00:14.0: Adding to iommu group 4 Feb 13 07:35:39.560349 kernel: pci 0000:00:14.2: Adding to iommu group 4 Feb 13 07:35:39.560392 kernel: pci 0000:00:15.0: Adding to iommu group 5 Feb 13 07:35:39.560433 kernel: pci 0000:00:15.1: Adding to iommu group 5 Feb 13 07:35:39.560473 kernel: pci 0000:00:16.0: Adding to iommu group 6 Feb 13 07:35:39.560516 kernel: pci 0000:00:16.1: Adding to iommu group 6 Feb 13 07:35:39.560556 kernel: pci 0000:00:16.4: Adding to iommu group 6 Feb 13 07:35:39.560597 kernel: pci 0000:00:17.0: Adding to iommu group 7 Feb 13 07:35:39.560638 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Feb 13 07:35:39.560678 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Feb 13 07:35:39.560719 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Feb 13 07:35:39.560760 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Feb 13 07:35:39.560801 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Feb 13 07:35:39.560843 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Feb 13 07:35:39.560883 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Feb 13 
07:35:39.560924 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Feb 13 07:35:39.560965 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Feb 13 07:35:39.561009 kernel: pci 0000:01:00.0: Adding to iommu group 1 Feb 13 07:35:39.561051 kernel: pci 0000:01:00.1: Adding to iommu group 1 Feb 13 07:35:39.561094 kernel: pci 0000:03:00.0: Adding to iommu group 15 Feb 13 07:35:39.561137 kernel: pci 0000:04:00.0: Adding to iommu group 16 Feb 13 07:35:39.561182 kernel: pci 0000:06:00.0: Adding to iommu group 17 Feb 13 07:35:39.561226 kernel: pci 0000:07:00.0: Adding to iommu group 17 Feb 13 07:35:39.561234 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Feb 13 07:35:39.561239 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 07:35:39.561244 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Feb 13 07:35:39.561250 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Feb 13 07:35:39.561255 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Feb 13 07:35:39.561260 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Feb 13 07:35:39.561267 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Feb 13 07:35:39.561311 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Feb 13 07:35:39.561319 kernel: Initialise system trusted keyrings Feb 13 07:35:39.561324 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Feb 13 07:35:39.561329 kernel: Key type asymmetric registered Feb 13 07:35:39.561334 kernel: Asymmetric key parser 'x509' registered Feb 13 07:35:39.561340 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 13 07:35:39.561345 kernel: io scheduler mq-deadline registered Feb 13 07:35:39.561351 kernel: io scheduler kyber registered Feb 13 07:35:39.561359 kernel: io scheduler bfq registered Feb 13 07:35:39.561400 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Feb 13 07:35:39.561441 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Feb 13 07:35:39.561483 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Feb 13 07:35:39.561523 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Feb 13 07:35:39.561565 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Feb 13 07:35:39.561605 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Feb 13 07:35:39.561653 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Feb 13 07:35:39.561661 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Feb 13 07:35:39.561667 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Feb 13 07:35:39.561672 kernel: pstore: Registered erst as persistent store backend Feb 13 07:35:39.561677 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 07:35:39.561682 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 07:35:39.561688 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 07:35:39.561693 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 07:35:39.561699 kernel: hpet_acpi_add: no address or irqs in _CRS Feb 13 07:35:39.561743 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Feb 13 07:35:39.561751 kernel: i8042: PNP: No PS/2 controller found. 
Feb 13 07:35:39.561788 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Feb 13 07:35:39.561826 kernel: rtc_cmos rtc_cmos: registered as rtc0 Feb 13 07:35:39.561863 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-02-13T07:35:38 UTC (1707809738) Feb 13 07:35:39.561900 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Feb 13 07:35:39.561907 kernel: fail to initialize ptp_kvm Feb 13 07:35:39.561914 kernel: intel_pstate: Intel P-state driver initializing Feb 13 07:35:39.561919 kernel: intel_pstate: Disabling energy efficiency optimization Feb 13 07:35:39.561924 kernel: intel_pstate: HWP enabled Feb 13 07:35:39.561929 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Feb 13 07:35:39.561935 kernel: vesafb: scrolling: redraw Feb 13 07:35:39.561940 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Feb 13 07:35:39.561945 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000041677ddc, using 768k, total 768k Feb 13 07:35:39.561950 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 07:35:39.561955 kernel: fb0: VESA VGA frame buffer device Feb 13 07:35:39.561961 kernel: NET: Registered PF_INET6 protocol family Feb 13 07:35:39.561967 kernel: Segment Routing with IPv6 Feb 13 07:35:39.561972 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 07:35:39.561977 kernel: NET: Registered PF_PACKET protocol family Feb 13 07:35:39.561982 kernel: Key type dns_resolver registered Feb 13 07:35:39.561987 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Feb 13 07:35:39.561992 kernel: microcode: Microcode Update Driver: v2.2. Feb 13 07:35:39.561997 kernel: IPI shorthand broadcast: enabled Feb 13 07:35:39.562003 kernel: sched_clock: Marking stable (1679471892, 1339759749)->(4439702422, -1420470781) Feb 13 07:35:39.562009 kernel: registered taskstats version 1 Feb 13 07:35:39.562014 kernel: Loading compiled-in X.509 certificates Feb 13 07:35:39.562019 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8' Feb 13 07:35:39.562024 kernel: Key type .fscrypt registered Feb 13 07:35:39.562029 kernel: Key type fscrypt-provisioning registered Feb 13 07:35:39.562034 kernel: pstore: Using crash dump compression: deflate Feb 13 07:35:39.562040 kernel: ima: Allocated hash algorithm: sha1 Feb 13 07:35:39.562045 kernel: ima: No architecture policies found Feb 13 07:35:39.562050 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 13 07:35:39.562056 kernel: Write protecting the kernel read-only data: 28672k Feb 13 07:35:39.562061 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 13 07:35:39.562067 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 13 07:35:39.562072 kernel: Run /init as init process Feb 13 07:35:39.562077 kernel: with arguments: Feb 13 07:35:39.562083 kernel: /init Feb 13 07:35:39.562088 kernel: with environment: Feb 13 07:35:39.562093 kernel: HOME=/ Feb 13 07:35:39.562098 kernel: TERM=linux Feb 13 07:35:39.562103 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 07:35:39.562110 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 07:35:39.562116 systemd[1]: Detected architecture x86-64. 
Feb 13 07:35:39.562122 systemd[1]: Running in initrd. Feb 13 07:35:39.562127 systemd[1]: No hostname configured, using default hostname. Feb 13 07:35:39.562132 systemd[1]: Hostname set to . Feb 13 07:35:39.562138 systemd[1]: Initializing machine ID from random generator. Feb 13 07:35:39.562144 systemd[1]: Queued start job for default target initrd.target. Feb 13 07:35:39.562149 systemd[1]: Started systemd-ask-password-console.path. Feb 13 07:35:39.562155 systemd[1]: Reached target cryptsetup.target. Feb 13 07:35:39.562160 systemd[1]: Reached target paths.target. Feb 13 07:35:39.562165 systemd[1]: Reached target slices.target. Feb 13 07:35:39.562170 systemd[1]: Reached target swap.target. Feb 13 07:35:39.562176 systemd[1]: Reached target timers.target. Feb 13 07:35:39.562181 systemd[1]: Listening on iscsid.socket. Feb 13 07:35:39.562187 systemd[1]: Listening on iscsiuio.socket. Feb 13 07:35:39.562193 systemd[1]: Listening on systemd-journald-audit.socket. Feb 13 07:35:39.562198 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 13 07:35:39.562204 systemd[1]: Listening on systemd-journald.socket. Feb 13 07:35:39.562209 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Feb 13 07:35:39.562214 systemd[1]: Listening on systemd-networkd.socket. Feb 13 07:35:39.562220 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Feb 13 07:35:39.562225 kernel: clocksource: Switched to clocksource tsc Feb 13 07:35:39.562231 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 07:35:39.562237 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 07:35:39.562242 systemd[1]: Reached target sockets.target. Feb 13 07:35:39.562247 systemd[1]: Starting kmod-static-nodes.service... Feb 13 07:35:39.562253 systemd[1]: Finished network-cleanup.service. Feb 13 07:35:39.562258 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 07:35:39.562263 systemd[1]: Starting systemd-journald.service... Feb 13 07:35:39.562269 systemd[1]: Starting systemd-modules-load.service... Feb 13 07:35:39.562276 systemd-journald[268]: Journal started Feb 13 07:35:39.562301 systemd-journald[268]: Runtime Journal (/run/log/journal/0b57e640f5884c5d95b95563cfe562c7) is 8.0M, max 640.1M, 632.1M free. Feb 13 07:35:39.564727 systemd-modules-load[269]: Inserted module 'overlay' Feb 13 07:35:39.571000 audit: BPF prog-id=6 op=LOAD Feb 13 07:35:39.589579 kernel: audit: type=1334 audit(1707809739.571:2): prog-id=6 op=LOAD Feb 13 07:35:39.589594 systemd[1]: Starting systemd-resolved.service... Feb 13 07:35:39.638402 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 07:35:39.638417 systemd[1]: Starting systemd-vconsole-setup.service... Feb 13 07:35:39.670361 kernel: Bridge firewalling registered Feb 13 07:35:39.670378 systemd[1]: Started systemd-journald.service. Feb 13 07:35:39.685486 systemd-modules-load[269]: Inserted module 'br_netfilter' Feb 13 07:35:39.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:39.691132 systemd-resolved[271]: Positive Trust Anchors: Feb 13 07:35:39.811501 kernel: audit: type=1130 audit(1707809739.693:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:35:39.811514 kernel: SCSI subsystem initialized Feb 13 07:35:39.811521 kernel: audit: type=1130 audit(1707809739.747:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:39.811530 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 07:35:39.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:39.691139 systemd-resolved[271]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 07:35:39.914000 kernel: device-mapper: uevent: version 1.0.3 Feb 13 07:35:39.914040 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 13 07:35:39.914049 kernel: audit: type=1130 audit(1707809739.869:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:39.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:39.691157 systemd-resolved[271]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 07:35:39.989597 kernel: audit: type=1130 audit(1707809739.923:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:39.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:39.692732 systemd-resolved[271]: Defaulting to hostname 'linux'. Feb 13 07:35:39.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:39.693634 systemd[1]: Started systemd-resolved.service. Feb 13 07:35:40.097463 kernel: audit: type=1130 audit(1707809739.998:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:40.097475 kernel: audit: type=1130 audit(1707809740.051:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:40.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:35:39.747522 systemd[1]: Finished kmod-static-nodes.service. Feb 13 07:35:39.869760 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 07:35:39.914598 systemd-modules-load[269]: Inserted module 'dm_multipath' Feb 13 07:35:39.923663 systemd[1]: Finished systemd-modules-load.service. Feb 13 07:35:39.998719 systemd[1]: Finished systemd-vconsole-setup.service. Feb 13 07:35:40.051646 systemd[1]: Reached target nss-lookup.target. Feb 13 07:35:40.105949 systemd[1]: Starting dracut-cmdline-ask.service... Feb 13 07:35:40.126911 systemd[1]: Starting systemd-sysctl.service... Feb 13 07:35:40.127198 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 07:35:40.130100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 07:35:40.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:40.179365 kernel: audit: type=1130 audit(1707809740.129:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:40.130861 systemd[1]: Finished systemd-sysctl.service. Feb 13 07:35:40.248588 kernel: audit: type=1130 audit(1707809740.193:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:40.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:40.193705 systemd[1]: Finished dracut-cmdline-ask.service. Feb 13 07:35:40.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:40.257962 systemd[1]: Starting dracut-cmdline.service... Feb 13 07:35:40.265529 dracut-cmdline[292]: dracut-dracut-053 Feb 13 07:35:40.265529 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 13 07:35:40.265529 dracut-cmdline[292]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 13 07:35:40.347464 kernel: Loading iSCSI transport class v2.0-870. Feb 13 07:35:40.347477 kernel: iscsi: registered transport (tcp) Feb 13 07:35:40.397361 kernel: iscsi: registered transport (qla4xxx) Feb 13 07:35:40.397380 kernel: QLogic iSCSI HBA Driver Feb 13 07:35:40.413496 systemd[1]: Finished dracut-cmdline.service. Feb 13 07:35:40.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:40.423158 systemd[1]: Starting dracut-pre-udev.service... 
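
The dracut-cmdline entries above echo the kernel command line the initrd works from. As a rough illustration only (not dracut's or Ignition's actual parser), a minimal Python sketch of splitting such a command line into key/value arguments, where a bare flag like flatcar.autologin becomes a boolean and only the first '=' in a token is significant:

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into a {key: value} dict (last duplicate wins)."""
        args = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            args[key] = value if sep else True
        return args

    args = parse_cmdline(
        "rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
        "console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.autologin"
    )
    assert args["root"] == "LABEL=ROOT"          # the value keeps its own '='
    assert args["flatcar.autologin"] is True     # bare flag
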
Feb 13 07:35:40.479436 kernel: raid6: avx2x4 gen() 42360 MB/s Feb 13 07:35:40.514436 kernel: raid6: avx2x4 xor() 22286 MB/s Feb 13 07:35:40.549388 kernel: raid6: avx2x2 gen() 53837 MB/s Feb 13 07:35:40.584387 kernel: raid6: avx2x2 xor() 32145 MB/s Feb 13 07:35:40.619432 kernel: raid6: avx2x1 gen() 45140 MB/s Feb 13 07:35:40.654433 kernel: raid6: avx2x1 xor() 27947 MB/s Feb 13 07:35:40.687432 kernel: raid6: sse2x4 gen() 21351 MB/s Feb 13 07:35:40.721434 kernel: raid6: sse2x4 xor() 11983 MB/s Feb 13 07:35:40.755434 kernel: raid6: sse2x2 gen() 21667 MB/s Feb 13 07:35:40.789430 kernel: raid6: sse2x2 xor() 13455 MB/s Feb 13 07:35:40.823431 kernel: raid6: sse2x1 gen() 18304 MB/s Feb 13 07:35:40.874981 kernel: raid6: sse2x1 xor() 8932 MB/s Feb 13 07:35:40.874996 kernel: raid6: using algorithm avx2x2 gen() 53837 MB/s Feb 13 07:35:40.875004 kernel: raid6: .... xor() 32145 MB/s, rmw enabled Feb 13 07:35:40.893036 kernel: raid6: using avx2x2 recovery algorithm Feb 13 07:35:40.938383 kernel: xor: automatically using best checksumming function avx Feb 13 07:35:41.017390 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 13 07:35:41.022243 systemd[1]: Finished dracut-pre-udev.service. Feb 13 07:35:41.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:41.030000 audit: BPF prog-id=7 op=LOAD Feb 13 07:35:41.030000 audit: BPF prog-id=8 op=LOAD Feb 13 07:35:41.031453 systemd[1]: Starting systemd-udevd.service... Feb 13 07:35:41.039504 systemd-udevd[473]: Using default interface naming scheme 'v252'. Feb 13 07:35:41.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:41.045712 systemd[1]: Started systemd-udevd.service. Feb 13 07:35:41.086478 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation Feb 13 07:35:41.062014 systemd[1]: Starting dracut-pre-trigger.service... Feb 13 07:35:41.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:41.089257 systemd[1]: Finished dracut-pre-trigger.service. Feb 13 07:35:41.104628 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 07:35:41.184175 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 07:35:41.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:41.211363 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 07:35:41.213361 kernel: libata version 3.00 loaded. Feb 13 07:35:41.248238 kernel: ACPI: bus type USB registered Feb 13 07:35:41.248272 kernel: usbcore: registered new interface driver usbfs Feb 13 07:35:41.248284 kernel: usbcore: registered new interface driver hub Feb 13 07:35:41.265897 kernel: usbcore: registered new device driver usb Feb 13 07:35:41.283361 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 07:35:41.316420 kernel: AES CTR mode by8 optimization enabled Feb 13 07:35:41.317360 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Feb 13 07:35:41.317380 kernel: ahci 0000:00:17.0: version 3.0 Feb 13 07:35:41.350421 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
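
The raid6 benchmark above times several gen()/xor() implementations and then settles on avx2x2. The kernel's real selection logic (lib/raid6/algos.c) weighs a little more than the raw numbers, but its core is simply picking the fastest measured generator:

    # gen() throughputs reported by the raid6 benchmark above (MB/s)
    gen_speeds = {
        "avx2x4": 42360, "avx2x2": 53837, "avx2x1": 45140,
        "sse2x4": 21351, "sse2x2": 21667, "sse2x1": 18304,
    }
    best = max(gen_speeds, key=gen_speeds.get)
    assert best == "avx2x2"   # matches "raid6: using algorithm avx2x2 gen() 53837 MB/s"
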
Feb 13 07:35:41.350437 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Feb 13 07:35:41.389808 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Feb 13 07:35:41.395362 kernel: pps pps0: new PPS source ptp0 Feb 13 07:35:41.395463 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 07:35:41.395522 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Feb 13 07:35:41.395577 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 07:35:41.408360 kernel: scsi host0: ahci Feb 13 07:35:41.408468 kernel: scsi host1: ahci Feb 13 07:35:41.408634 kernel: scsi host2: ahci Feb 13 07:35:41.408693 kernel: scsi host3: ahci Feb 13 07:35:41.408745 kernel: scsi host4: ahci Feb 13 07:35:41.408802 kernel: scsi host5: ahci Feb 13 07:35:41.408882 kernel: scsi host6: ahci Feb 13 07:35:41.409006 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Feb 13 07:35:41.409015 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Feb 13 07:35:41.409021 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Feb 13 07:35:41.409029 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Feb 13 07:35:41.409036 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Feb 13 07:35:41.409042 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Feb 13 07:35:41.409048 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Feb 13 07:35:41.409359 kernel: igb 0000:03:00.0: added PHC on eth0 Feb 13 07:35:41.425075 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Feb 13 07:35:41.461159 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 07:35:41.474415 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Feb 13 07:35:41.474486 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b6 Feb 13 07:35:41.474542 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Feb 13 07:35:41.487413 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Feb 13 07:35:41.499400 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Feb 13 07:35:41.558257 kernel: pps pps1: new PPS source ptp1 Feb 13 07:35:41.558327 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Feb 13 07:35:41.558388 kernel: igb 0000:04:00.0: added PHC on eth1 Feb 13 07:35:41.572765 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Feb 13 07:35:41.600410 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Feb 13 07:35:41.613698 kernel: hub 1-0:1.0: USB hub found Feb 13 07:35:41.613777 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) ac:1f:6b:7b:e7:b7 Feb 13 07:35:41.613833 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Feb 13 07:35:41.639946 kernel: hub 1-0:1.0: 16 ports detected Feb 13 07:35:41.640023 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Feb 13 07:35:41.663416 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 07:35:41.676359 kernel: hub 2-0:1.0: USB hub found Feb 13 07:35:41.705348 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 07:35:41.705428 kernel: hub 2-0:1.0: 10 ports detected Feb 13 07:35:41.743529 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 07:35:41.743547 kernel: usb: port power management may be unreliable Feb 13 07:35:41.744385 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 07:35:41.896360 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 07:35:41.896447 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 07:35:41.901390 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Feb 13 07:35:41.925400 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Feb 13 07:35:41.925474 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 07:35:41.951670 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Feb 13 07:35:41.951730 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 07:35:42.094403 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Feb 13 07:35:42.109428 kernel: hub 1-14:1.0: USB hub found Feb 13 07:35:42.109547 kernel: ata7: SATA link down (SStatus 0 SControl 300) Feb 13 07:35:42.137292 kernel: hub 1-14:1.0: 4 ports detected Feb 13 07:35:42.137451 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Feb 13 07:35:42.168404 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 07:35:42.217931 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 07:35:42.217948 kernel: ata1.00: Features: NCQ-prio Feb 13 07:35:42.217956 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Feb 13 07:35:42.249368 kernel: ata2.00: Features: NCQ-prio Feb 13 07:35:42.249384 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Feb 13 07:35:42.270367 kernel: ata1.00: configured for UDMA/133 Feb 13 07:35:42.285359 kernel: port_module: 9 callbacks suppressed Feb 13 07:35:42.285374 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Feb 13 07:35:42.285436 kernel: ata2.00: configured for UDMA/133 Feb 13 07:35:42.285444 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 07:35:42.319410 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 13 07:35:42.319479 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Feb 13 07:35:42.414359 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Feb 13 07:35:42.435620 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:35:42.435635 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:35:42.451978 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Feb 13 07:35:42.452047 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 07:35:42.452291 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Feb 13 07:35:42.452601 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Feb 13 07:35:42.452756 kernel: sd 1:0:0:0: [sdb] Write Protect is off Feb 13 07:35:42.452906 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Feb 13 07:35:42.453047 kernel: sd 1:0:0:0: [sdb] Write cache: 
enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 07:35:42.453194 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:35:42.453209 kernel: ata2.00: Enabling discard_zeroes_data Feb 13 07:35:42.453219 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Feb 13 07:35:42.540790 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Feb 13 07:35:42.540865 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 13 07:35:42.540928 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 13 07:35:42.576589 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Feb 13 07:35:42.576664 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Feb 13 07:35:42.576724 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 07:35:42.695398 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 07:35:42.695414 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:35:42.760373 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 07:35:42.760390 kernel: GPT:9289727 != 937703087 Feb 13 07:35:42.760399 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 07:35:42.777051 kernel: GPT:9289727 != 937703087 Feb 13 07:35:42.791041 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 07:35:42.806469 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:35:42.822425 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:35:42.837031 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 13 07:35:42.870360 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Feb 13 07:35:42.882306 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 13 07:35:42.988448 kernel: usbcore: registered new interface driver usbhid Feb 13 07:35:42.988470 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (525) Feb 13 07:35:42.988483 kernel: usbhid: USB HID core driver Feb 13 07:35:42.988494 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Feb 13 07:35:42.988506 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Feb 13 07:35:42.940463 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 13 07:35:42.974118 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 13 07:35:43.074942 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Feb 13 07:35:43.075028 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Feb 13 07:35:43.075037 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Feb 13 07:35:43.021689 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 13 07:35:43.113368 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 13 07:35:43.140223 systemd[1]: Starting disk-uuid.service... Feb 13 07:35:43.172494 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:35:43.172507 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:35:43.172558 disk-uuid[689]: Primary Header is updated. Feb 13 07:35:43.172558 disk-uuid[689]: Secondary Entries is updated. Feb 13 07:35:43.172558 disk-uuid[689]: Secondary Header is updated. 
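
The "GPT:9289727 != 937703087" warning above is plain arithmetic: the backup GPT header is expected on the disk's last logical block, but the primary header still records a much smaller alternate-header LBA (typical when an image prepared for a smaller disk is written to a larger one; the disk-uuid run above then rewrites the headers). A sketch of the check:

    total_sectors = 937703088        # "[sda] 937703088 512-byte logical blocks"
    recorded_alt_lba = 9289727       # alternate-header LBA stored in the primary GPT

    expected_alt_lba = total_sectors - 1   # backup header belongs on the last LBA
    assert expected_alt_lba == 937703087
    assert recorded_alt_lba != expected_alt_lba   # hence the kernel's warning
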
Feb 13 07:35:43.228462 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:35:43.228471 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:35:43.228478 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:35:43.254358 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:35:44.234091 kernel: ata1.00: Enabling discard_zeroes_data Feb 13 07:35:44.252359 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 07:35:44.252793 disk-uuid[690]: The operation has completed successfully. Feb 13 07:35:44.286216 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 07:35:44.381166 kernel: audit: type=1130 audit(1707809744.293:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.381181 kernel: audit: type=1131 audit(1707809744.293:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.286273 systemd[1]: Finished disk-uuid.service. Feb 13 07:35:44.410438 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 07:35:44.299129 systemd[1]: Starting verity-setup.service... Feb 13 07:35:44.472370 systemd[1]: Found device dev-mapper-usr.device. Feb 13 07:35:44.483680 systemd[1]: Mounting sysusr-usr.mount... Feb 13 07:35:44.494991 systemd[1]: Finished verity-setup.service. Feb 13 07:35:44.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.562361 kernel: audit: type=1130 audit(1707809744.510:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.619358 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 13 07:35:44.619462 systemd[1]: Mounted sysusr-usr.mount. Feb 13 07:35:44.626653 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 13 07:35:44.729663 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 07:35:44.729725 kernel: BTRFS info (device sda6): using free space tree Feb 13 07:35:44.729761 kernel: BTRFS info (device sda6): has skinny extents Feb 13 07:35:44.729795 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 07:35:44.627021 systemd[1]: Starting ignition-setup.service... Feb 13 07:35:44.649791 systemd[1]: Starting parse-ip-for-networkd.service... Feb 13 07:35:44.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.739120 systemd[1]: Finished ignition-setup.service. 
Feb 13 07:35:44.866838 kernel: audit: type=1130 audit(1707809744.752:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.866851 kernel: audit: type=1130 audit(1707809744.817:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.753077 systemd[1]: Finished parse-ip-for-networkd.service. Feb 13 07:35:44.898206 kernel: audit: type=1334 audit(1707809744.876:24): prog-id=9 op=LOAD Feb 13 07:35:44.876000 audit: BPF prog-id=9 op=LOAD Feb 13 07:35:44.818052 systemd[1]: Starting ignition-fetch-offline.service... Feb 13 07:35:44.877240 systemd[1]: Starting systemd-networkd.service... Feb 13 07:35:44.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.971411 ignition[868]: Ignition 2.14.0 Feb 13 07:35:44.987482 kernel: audit: type=1130 audit(1707809744.922:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.913367 systemd-networkd[874]: lo: Link UP Feb 13 07:35:44.971416 ignition[868]: Stage: fetch-offline Feb 13 07:35:44.913369 systemd-networkd[874]: lo: Gained carrier Feb 13 07:35:45.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.971462 ignition[868]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:35:45.145180 kernel: audit: type=1130 audit(1707809745.012:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:45.145192 kernel: audit: type=1130 audit(1707809745.071:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:45.145200 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 07:35:45.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.913686 systemd-networkd[874]: Enumeration completed Feb 13 07:35:45.170409 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Feb 13 07:35:44.971475 ignition[868]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:35:44.913755 systemd[1]: Started systemd-networkd.service. 
Feb 13 07:35:44.974558 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:35:45.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.914489 systemd-networkd[874]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:35:45.225465 iscsid[902]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 13 07:35:45.225465 iscsid[902]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 13 07:35:45.225465 iscsid[902]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 13 07:35:45.225465 iscsid[902]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 13 07:35:45.225465 iscsid[902]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 13 07:35:45.225465 iscsid[902]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 13 07:35:45.225465 iscsid[902]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 13 07:35:45.388556 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 07:35:45.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:45.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:44.974625 ignition[868]: parsed url from cmdline: "" Feb 13 07:35:44.922501 systemd[1]: Reached target network.target. Feb 13 07:35:44.974627 ignition[868]: no config URL provided Feb 13 07:35:44.981908 systemd[1]: Starting iscsiuio.service... Feb 13 07:35:44.974629 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 07:35:44.990743 unknown[868]: fetched base config from "system" Feb 13 07:35:44.979174 ignition[868]: parsing config with SHA512: 6ff3c6443bb6f32fd562055488f67db413e1e555b63472d99f09767114ad11caae0e40d7ca2dad65b0e1900e2011986a9dba6a4042c340851b0311b479318560 Feb 13 07:35:44.990746 unknown[868]: fetched user config from "system" Feb 13 07:35:44.991003 ignition[868]: fetch-offline: fetch-offline passed Feb 13 07:35:44.994693 systemd[1]: Started iscsiuio.service. Feb 13 07:35:44.991006 ignition[868]: POST message to Packet Timeline Feb 13 07:35:45.012662 systemd[1]: Finished ignition-fetch-offline.service. Feb 13 07:35:44.991010 ignition[868]: POST Status error: resource requires networking Feb 13 07:35:45.071608 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 07:35:44.991038 ignition[868]: Ignition finished successfully Feb 13 07:35:45.072052 systemd[1]: Starting ignition-kargs.service... Feb 13 07:35:45.149567 ignition[891]: Ignition 2.14.0 Feb 13 07:35:45.146575 systemd-networkd[874]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 07:35:45.149571 ignition[891]: Stage: kargs Feb 13 07:35:45.158882 systemd[1]: Starting iscsid.service... Feb 13 07:35:45.149625 ignition[891]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:35:45.181697 systemd[1]: Started iscsid.service. Feb 13 07:35:45.149634 ignition[891]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:35:45.195820 systemd[1]: Starting dracut-initqueue.service... Feb 13 07:35:45.150931 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:35:45.213515 systemd[1]: Finished dracut-initqueue.service. Feb 13 07:35:45.152913 ignition[891]: kargs: kargs passed Feb 13 07:35:45.233662 systemd[1]: Reached target remote-fs-pre.target. Feb 13 07:35:45.152922 ignition[891]: POST message to Packet Timeline Feb 13 07:35:45.233714 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 07:35:45.152943 ignition[891]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:35:45.251619 systemd[1]: Reached target remote-fs.target. Feb 13 07:35:45.156555 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56466->[::1]:53: read: connection refused Feb 13 07:35:45.295511 systemd[1]: Starting dracut-pre-mount.service... Feb 13 07:35:45.356960 ignition[891]: GET https://metadata.packet.net/metadata: attempt #2 Feb 13 07:35:45.314687 systemd[1]: Finished dracut-pre-mount.service. Feb 13 07:35:45.357593 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42549->[::1]:53: read: connection refused Feb 13 07:35:45.378558 systemd-networkd[874]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 07:35:45.407946 systemd-networkd[874]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 07:35:45.438488 systemd-networkd[874]: enp1s0f1np1: Link UP Feb 13 07:35:45.438947 systemd-networkd[874]: enp1s0f1np1: Gained carrier Feb 13 07:35:45.454875 systemd-networkd[874]: enp1s0f0np0: Link UP Feb 13 07:35:45.455230 systemd-networkd[874]: eno2: Link UP Feb 13 07:35:45.455604 systemd-networkd[874]: eno1: Link UP Feb 13 07:35:45.758111 ignition[891]: GET https://metadata.packet.net/metadata: attempt #3 Feb 13 07:35:45.759147 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:56560->[::1]:53: read: connection refused Feb 13 07:35:46.186440 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Feb 13 07:35:46.186477 systemd-networkd[874]: enp1s0f0np0: Gained carrier Feb 13 07:35:46.220559 systemd-networkd[874]: enp1s0f0np0: DHCPv4 address 139.178.90.101/31, gateway 139.178.90.100 acquired from 145.40.83.140 Feb 13 07:35:46.559662 ignition[891]: GET https://metadata.packet.net/metadata: attempt #4 Feb 13 07:35:46.561058 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47784->[::1]:53: read: connection refused Feb 13 07:35:47.314936 systemd-networkd[874]: enp1s0f1np1: Gained IPv6LL Feb 13 07:35:47.506882 systemd-networkd[874]: enp1s0f0np0: Gained IPv6LL Feb 13 07:35:48.162629 ignition[891]: GET https://metadata.packet.net/metadata: attempt #5 Feb 13 07:35:48.163920 ignition[891]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:53523->[::1]:53: read: connection refused Feb 13 07:35:51.367389 ignition[891]: GET https://metadata.packet.net/metadata: attempt #6 Feb 13 07:35:51.406851 ignition[891]: GET result: OK Feb 13 07:35:51.625247 ignition[891]: Ignition finished successfully Feb 13 07:35:51.629771 systemd[1]: Finished ignition-kargs.service. Feb 13 07:35:51.719942 kernel: kauditd_printk_skb: 3 callbacks suppressed Feb 13 07:35:51.719957 kernel: audit: type=1130 audit(1707809751.641:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:51.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:51.650805 ignition[920]: Ignition 2.14.0 Feb 13 07:35:51.643615 systemd[1]: Starting ignition-disks.service... Feb 13 07:35:51.650809 ignition[920]: Stage: disks Feb 13 07:35:51.650904 ignition[920]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:35:51.650913 ignition[920]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:35:51.652313 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:35:51.654026 ignition[920]: disks: disks passed Feb 13 07:35:51.654029 ignition[920]: POST message to Packet Timeline Feb 13 07:35:51.654041 ignition[920]: GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:35:51.762604 ignition[920]: GET result: OK Feb 13 07:35:52.041181 ignition[920]: Ignition finished successfully Feb 13 07:35:52.044014 systemd[1]: Finished ignition-disks.service. 
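
Ignition's repeated "GET https://metadata.packet.net/metadata: attempt #N" entries fail with DNS errors until the interfaces above come up and DHCP completes, after which the GET succeeds. Purely to illustrate the retry pattern (the attempt count and delays below are invented, not Ignition's), a minimal Python sketch:

    import time
    import urllib.request

    def fetch_with_retry(url: str, attempts: int = 6, base_delay: float = 1.0) -> bytes:
        """GET a URL, retrying with a roughly doubling delay on network errors."""
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except OSError:                # DNS failures and refused connections land here
                if attempt == attempts:
                    raise
                time.sleep(base_delay * 2 ** (attempt - 1))

    # data = fetch_with_retry("https://metadata.packet.net/metadata")
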
Feb 13 07:35:52.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.056885 systemd[1]: Reached target initrd-root-device.target. Feb 13 07:35:52.134615 kernel: audit: type=1130 audit(1707809752.056:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.120563 systemd[1]: Reached target local-fs-pre.target. Feb 13 07:35:52.120609 systemd[1]: Reached target local-fs.target. Feb 13 07:35:52.143595 systemd[1]: Reached target sysinit.target. Feb 13 07:35:52.157581 systemd[1]: Reached target basic.target. Feb 13 07:35:52.171312 systemd[1]: Starting systemd-fsck-root.service... Feb 13 07:35:52.189148 systemd-fsck[935]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 13 07:35:52.206172 systemd[1]: Finished systemd-fsck-root.service. Feb 13 07:35:52.297558 kernel: audit: type=1130 audit(1707809752.214:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.297573 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 13 07:35:52.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.216563 systemd[1]: Mounting sysroot.mount... Feb 13 07:35:52.305052 systemd[1]: Mounted sysroot.mount. Feb 13 07:35:52.318622 systemd[1]: Reached target initrd-root-fs.target. Feb 13 07:35:52.326260 systemd[1]: Mounting sysroot-usr.mount... Feb 13 07:35:52.351220 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 13 07:35:52.359864 systemd[1]: Starting flatcar-static-network.service... Feb 13 07:35:52.376585 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 07:35:52.376675 systemd[1]: Reached target ignition-diskful.target. Feb 13 07:35:52.395411 systemd[1]: Mounted sysroot-usr.mount. Feb 13 07:35:52.420026 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 13 07:35:52.432234 systemd[1]: Starting initrd-setup-root.service... Feb 13 07:35:52.553947 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (946) Feb 13 07:35:52.553965 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 07:35:52.554057 kernel: BTRFS info (device sda6): using free space tree Feb 13 07:35:52.554065 kernel: BTRFS info (device sda6): has skinny extents Feb 13 07:35:52.554072 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 07:35:52.554083 initrd-setup-root[953]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 07:35:52.616568 kernel: audit: type=1130 audit(1707809752.563:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:35:52.616682 coreos-metadata[943]: Feb 13 07:35:52.494 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:35:52.616682 coreos-metadata[943]: Feb 13 07:35:52.517 INFO Fetch successful Feb 13 07:35:52.801093 kernel: audit: type=1130 audit(1707809752.624:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.801105 kernel: audit: type=1130 audit(1707809752.688:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.801114 kernel: audit: type=1131 audit(1707809752.688:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.801168 coreos-metadata[942]: Feb 13 07:35:52.494 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:35:52.801168 coreos-metadata[942]: Feb 13 07:35:52.517 INFO Fetch successful Feb 13 07:35:52.801168 coreos-metadata[942]: Feb 13 07:35:52.534 INFO wrote hostname ci-3510.3.2-a-9e65c995fd to /sysroot/etc/hostname Feb 13 07:35:52.484626 systemd[1]: Finished initrd-setup-root.service. Feb 13 07:35:52.863457 initrd-setup-root[961]: cut: /sysroot/etc/group: No such file or directory Feb 13 07:35:52.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.564697 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 13 07:35:52.937592 kernel: audit: type=1130 audit(1707809752.872:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:52.937607 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 07:35:52.624680 systemd[1]: flatcar-static-network.service: Deactivated successfully. Feb 13 07:35:52.957435 initrd-setup-root[977]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 07:35:52.624719 systemd[1]: Finished flatcar-static-network.service. 
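
coreos-metadata above fetches https://metadata.packet.net/metadata and reports writing the hostname ci-3510.3.2-a-9e65c995fd to /sysroot/etc/hostname. A rough sketch of that step, assuming (this log does not confirm it) that the metadata document is JSON with a top-level "hostname" field:

    import json
    import urllib.request

    # "hostname" is an assumed field name; the real metadata layout may differ.
    with urllib.request.urlopen("https://metadata.packet.net/metadata", timeout=10) as resp:
        meta = json.load(resp)

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(meta["hostname"] + "\n")
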
Feb 13 07:35:52.975542 ignition[1019]: INFO : Ignition 2.14.0 Feb 13 07:35:52.975542 ignition[1019]: INFO : Stage: mount Feb 13 07:35:52.975542 ignition[1019]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:35:52.975542 ignition[1019]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:35:52.975542 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:35:52.975542 ignition[1019]: INFO : mount: mount passed Feb 13 07:35:52.975542 ignition[1019]: INFO : POST message to Packet Timeline Feb 13 07:35:52.975542 ignition[1019]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:35:52.975542 ignition[1019]: INFO : GET result: OK Feb 13 07:35:52.688645 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 13 07:35:52.810007 systemd[1]: Starting ignition-mount.service... Feb 13 07:35:52.836999 systemd[1]: Starting sysroot-boot.service... Feb 13 07:35:52.856172 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 13 07:35:52.856212 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 13 07:35:52.856862 systemd[1]: Finished sysroot-boot.service. Feb 13 07:35:53.113659 ignition[1019]: INFO : Ignition finished successfully Feb 13 07:35:53.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:53.107402 systemd[1]: Finished ignition-mount.service. Feb 13 07:35:53.193447 kernel: audit: type=1130 audit(1707809753.121:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:35:53.123497 systemd[1]: Starting ignition-files.service... Feb 13 07:35:53.188171 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 13 07:35:53.286439 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1033) Feb 13 07:35:53.286451 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 07:35:53.286458 kernel: BTRFS info (device sda6): using free space tree Feb 13 07:35:53.286467 kernel: BTRFS info (device sda6): has skinny extents Feb 13 07:35:53.286474 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 07:35:53.321051 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 13 07:35:53.337496 ignition[1052]: INFO : Ignition 2.14.0 Feb 13 07:35:53.337496 ignition[1052]: INFO : Stage: files Feb 13 07:35:53.337496 ignition[1052]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:35:53.337496 ignition[1052]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:35:53.337496 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:35:53.337496 ignition[1052]: DEBUG : files: compiled without relabeling support, skipping Feb 13 07:35:53.340271 unknown[1052]: wrote ssh authorized keys file for user: core Feb 13 07:35:53.413572 ignition[1052]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 07:35:53.413572 ignition[1052]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 07:35:53.413572 ignition[1052]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 07:35:53.413572 ignition[1052]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 07:35:53.413572 ignition[1052]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 07:35:53.413572 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 13 07:35:53.413572 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 13 07:35:53.796798 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 07:35:53.890633 ignition[1052]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 13 07:35:53.890633 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 13 07:35:53.934558 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 13 07:35:53.934558 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 13 07:35:54.311663 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 07:35:54.362545 ignition[1052]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 13 07:35:54.387612 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 13 07:35:54.387612 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 13 07:35:54.387612 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 13 07:35:54.573028 ignition[1052]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 13 07:35:59.908658 ignition[1052]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 13 07:35:59.933580 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 13 07:35:59.933580 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Feb 13 07:35:59.933580 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 13 07:36:00.082209 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 07:36:15.512427 ignition[1052]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 13 07:36:15.547457 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 13 07:36:15.547457 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Feb 13 07:36:15.547457 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 07:36:15.547457 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 13 07:36:15.547457 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 13 07:36:15.547457 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 07:36:15.547457 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 07:36:15.547457 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 13 07:36:15.547457 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition Feb 13 07:36:15.547457 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem290649231" Feb 13 07:36:15.547457 ignition[1052]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem290649231": device or resource busy Feb 13 07:36:15.547457 ignition[1052]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem290649231", trying btrfs: device or resource busy Feb 13 07:36:15.547457 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem290649231" Feb 13 07:36:15.773673 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1069) Feb 13 07:36:15.773772 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem290649231" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem290649231" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem290649231" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(e): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(f): [started] processing unit "packet-phone-home.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(f): [finished] processing unit "packet-phone-home.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(12): [started] processing unit "prepare-critools.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(12): [finished] processing unit "prepare-critools.service" Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(14): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(14): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(15): [started] setting preset to enabled for "packet-phone-home.service" Feb 13 07:36:16.485673 kernel: audit: type=1130 audit(1707809775.798:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.485774 kernel: audit: type=1130 audit(1707809775.917:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.485822 kernel: audit: type=1130 audit(1707809775.985:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:36:16.485863 kernel: audit: type=1131 audit(1707809775.985:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.485902 kernel: audit: type=1130 audit(1707809776.143:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.485941 kernel: audit: type=1131 audit(1707809776.143:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.485980 kernel: audit: type=1130 audit(1707809776.327:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:15.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:15.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:15.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:15.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:36:16.486704 ignition[1052]: INFO : files: op(15): [finished] setting preset to enabled for "packet-phone-home.service" Feb 13 07:36:16.486704 ignition[1052]: INFO : files: op(16): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 13 07:36:16.486704 ignition[1052]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 13 07:36:16.486704 ignition[1052]: INFO : files: op(17): [started] setting preset to enabled for "prepare-critools.service" Feb 13 07:36:16.486704 ignition[1052]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-critools.service" Feb 13 07:36:16.486704 ignition[1052]: INFO : files: createResultFile: createFiles: op(18): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 07:36:16.486704 ignition[1052]: INFO : files: createResultFile: createFiles: op(18): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 07:36:16.486704 ignition[1052]: INFO : files: files passed Feb 13 07:36:16.486704 ignition[1052]: INFO : POST message to Packet Timeline Feb 13 07:36:16.486704 ignition[1052]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:36:16.486704 ignition[1052]: INFO : GET result: OK Feb 13 07:36:16.486704 ignition[1052]: INFO : Ignition finished successfully Feb 13 07:36:16.773655 kernel: audit: type=1131 audit(1707809776.493:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:15.786899 systemd[1]: Finished ignition-files.service. Feb 13 07:36:16.853467 kernel: audit: type=1131 audit(1707809776.782:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:15.803969 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 13 07:36:16.871580 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 07:36:16.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:15.865622 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 13 07:36:16.966614 kernel: audit: type=1131 audit(1707809776.879:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:15.865923 systemd[1]: Starting ignition-quench.service... 
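The ignition[1052] entries above all share one shape: a stage prefix (here "files:"), a chain of nested op(N) markers, a [started]/[finished] tag, and a human-readable description. A minimal, illustrative Python sketch (not part of Ignition) for pulling those fields out of a captured console log; the regexes and function name are assumptions made for the example.

    import re

    # Shape of the Ignition console lines above, e.g.:
    #   ignition[1052]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" ...
    ENTRY = re.compile(r'ignition\[(?P<pid>\d+)\]: (?P<level>\w+) : (?P<rest>.*)$')
    OPS = re.compile(r'op\((?P<op>[0-9a-f]+)\): ')

    def parse_ignition_entry(line):
        """Return (pid, level, op chain, message) for an Ignition log line, or None."""
        m = ENTRY.search(line)
        if not m:
            return None
        rest = m.group('rest')
        ops = OPS.findall(rest)               # e.g. ['10', '11'] for nested operations
        message = OPS.sub('', rest)           # stage prefix plus the human-readable text
        return m.group('pid'), m.group('level'), ops, message

    sample = ('Feb 13 07:36:15.773772 ignition[1052]: INFO : files: op(10): op(11): '
              '[started] writing unit "prepare-cni-plugins.service" at '
              '"/sysroot/etc/systemd/system/prepare-cni-plugins.service"')
    print(parse_ignition_entry(sample))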
Feb 13 07:36:15.903731 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 13 07:36:15.917810 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 07:36:15.917879 systemd[1]: Finished ignition-quench.service. Feb 13 07:36:15.985615 systemd[1]: Reached target ignition-complete.target. Feb 13 07:36:17.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.107961 systemd[1]: Starting initrd-parse-etc.service... Feb 13 07:36:17.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.124156 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 07:36:17.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.124200 systemd[1]: Finished initrd-parse-etc.service. Feb 13 07:36:17.095421 ignition[1100]: INFO : Ignition 2.14.0 Feb 13 07:36:17.095421 ignition[1100]: INFO : Stage: umount Feb 13 07:36:17.095421 ignition[1100]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 13 07:36:17.095421 ignition[1100]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Feb 13 07:36:17.095421 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Feb 13 07:36:17.095421 ignition[1100]: INFO : umount: umount passed Feb 13 07:36:17.095421 ignition[1100]: INFO : POST message to Packet Timeline Feb 13 07:36:17.095421 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Feb 13 07:36:17.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:17.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:17.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:17.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.143870 systemd[1]: Reached target initrd-fs.target. Feb 13 07:36:17.236721 iscsid[902]: iscsid shutting down. Feb 13 07:36:17.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:17.250907 ignition[1100]: INFO : GET result: OK Feb 13 07:36:17.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:36:17.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.275581 systemd[1]: Reached target initrd.target. Feb 13 07:36:16.275716 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 13 07:36:16.276071 systemd[1]: Starting dracut-pre-pivot.service... Feb 13 07:36:16.316794 systemd[1]: Finished dracut-pre-pivot.service. Feb 13 07:36:16.328331 systemd[1]: Starting initrd-cleanup.service... Feb 13 07:36:17.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:17.342758 ignition[1100]: INFO : Ignition finished successfully Feb 13 07:36:17.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.396592 systemd[1]: Stopped target nss-lookup.target. Feb 13 07:36:17.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:17.366000 audit: BPF prog-id=6 op=UNLOAD Feb 13 07:36:16.429729 systemd[1]: Stopped target remote-cryptsetup.target. Feb 13 07:36:16.443811 systemd[1]: Stopped target timers.target. Feb 13 07:36:17.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.472834 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 07:36:17.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.473032 systemd[1]: Stopped dracut-pre-pivot.service. Feb 13 07:36:17.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.494344 systemd[1]: Stopped target initrd.target. Feb 13 07:36:17.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.569731 systemd[1]: Stopped target basic.target. Feb 13 07:36:16.584797 systemd[1]: Stopped target ignition-complete.target. Feb 13 07:36:17.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.603791 systemd[1]: Stopped target ignition-diskful.target. Feb 13 07:36:17.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.632759 systemd[1]: Stopped target initrd-root-device.target. 
Feb 13 07:36:17.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.654086 systemd[1]: Stopped target remote-fs.target. Feb 13 07:36:16.678031 systemd[1]: Stopped target remote-fs-pre.target. Feb 13 07:36:17.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.702059 systemd[1]: Stopped target sysinit.target. Feb 13 07:36:16.718042 systemd[1]: Stopped target local-fs.target. Feb 13 07:36:16.734027 systemd[1]: Stopped target local-fs-pre.target. Feb 13 07:36:17.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.752052 systemd[1]: Stopped target swap.target. Feb 13 07:36:17.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.765922 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 07:36:17.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.766280 systemd[1]: Stopped dracut-pre-mount.service. Feb 13 07:36:16.783259 systemd[1]: Stopped target cryptsetup.target. Feb 13 07:36:17.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.861716 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 07:36:17.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.861798 systemd[1]: Stopped dracut-initqueue.service. Feb 13 07:36:17.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.879785 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 07:36:17.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:17.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:16.879860 systemd[1]: Stopped ignition-fetch-offline.service. Feb 13 07:36:16.948787 systemd[1]: Stopped target paths.target. Feb 13 07:36:16.973702 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 07:36:16.977571 systemd[1]: Stopped systemd-ask-password-console.path. Feb 13 07:36:16.989728 systemd[1]: Stopped target slices.target. Feb 13 07:36:17.004721 systemd[1]: Stopped target sockets.target. 
Feb 13 07:36:17.021812 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 07:36:17.021992 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 13 07:36:17.039046 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 07:36:17.039333 systemd[1]: Stopped ignition-files.service. Feb 13 07:36:17.055128 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 07:36:17.055507 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 13 07:36:17.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:17.075290 systemd[1]: Stopping ignition-mount.service... Feb 13 07:36:17.087648 systemd[1]: Stopping iscsid.service... Feb 13 07:36:17.103168 systemd[1]: Stopping sysroot-boot.service... Feb 13 07:36:17.121630 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 07:36:17.122063 systemd[1]: Stopped systemd-udev-trigger.service. Feb 13 07:36:17.133119 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 07:36:17.133483 systemd[1]: Stopped dracut-pre-trigger.service. Feb 13 07:36:17.174467 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 07:36:17.176501 systemd[1]: iscsid.service: Deactivated successfully. Feb 13 07:36:17.176732 systemd[1]: Stopped iscsid.service. Feb 13 07:36:17.185742 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 07:36:17.185957 systemd[1]: Stopped sysroot-boot.service. Feb 13 07:36:17.200830 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 07:36:17.201005 systemd[1]: Closed iscsid.socket. Feb 13 07:36:17.207918 systemd[1]: Stopping iscsiuio.service... Feb 13 07:36:17.230107 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 13 07:36:17.230331 systemd[1]: Stopped iscsiuio.service. Feb 13 07:36:17.244116 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 07:36:17.244326 systemd[1]: Finished initrd-cleanup.service. Feb 13 07:36:17.260817 systemd[1]: Stopped target network.target. Feb 13 07:36:17.273738 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 07:36:17.273840 systemd[1]: Closed iscsiuio.socket. Feb 13 07:36:17.288013 systemd[1]: Stopping systemd-networkd.service... Feb 13 07:36:17.294540 systemd-networkd[874]: enp1s0f0np0: DHCPv6 lease lost Feb 13 07:36:17.303565 systemd-networkd[874]: enp1s0f1np1: DHCPv6 lease lost Feb 13 07:36:17.305887 systemd[1]: Stopping systemd-resolved.service... Feb 13 07:36:17.881000 audit: BPF prog-id=9 op=UNLOAD Feb 13 07:36:17.321328 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 07:36:17.321590 systemd[1]: Stopped systemd-resolved.service. Feb 13 07:36:17.337186 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 07:36:17.337450 systemd[1]: Stopped systemd-networkd.service. Feb 13 07:36:17.351202 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 07:36:17.351437 systemd[1]: Stopped ignition-mount.service. Feb 13 07:36:17.367048 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 07:36:17.367153 systemd[1]: Closed systemd-networkd.socket. Feb 13 07:36:17.381734 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 07:36:17.381859 systemd[1]: Stopped ignition-disks.service. Feb 13 07:36:17.397763 systemd[1]: ignition-kargs.service: Deactivated successfully. 
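The teardown above follows a consistent pattern: each initrd unit logs "<unit>: Deactivated successfully." followed by "Stopped <unit>.", with a matching SERVICE_STOP audit record. A small sketch, assuming only the "Stopped ..." line format shown above, that recovers the stop order from such a capture (the demo entries are copied verbatim from this log):

    import re

    STOPPED = re.compile(r'systemd\[1\]: Stopped (?P<what>.+)\.$')

    def stop_order(lines):
        """Return the units and targets in the order systemd reported them stopped."""
        return [m.group('what') for m in map(STOPPED.search, lines) if m]

    demo = [  # entries copied from the capture above
        'Feb 13 07:36:17.185957 systemd[1]: Stopped sysroot-boot.service.',
        'Feb 13 07:36:17.230331 systemd[1]: Stopped iscsiuio.service.',
        'Feb 13 07:36:17.337450 systemd[1]: Stopped systemd-networkd.service.',
        'Feb 13 07:36:17.351437 systemd[1]: Stopped ignition-mount.service.',
    ]
    print(stop_order(demo))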
Feb 13 07:36:17.397875 systemd[1]: Stopped ignition-kargs.service. Feb 13 07:36:17.413763 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 07:36:17.413882 systemd[1]: Stopped ignition-setup.service. Feb 13 07:36:17.429863 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 07:36:17.430006 systemd[1]: Stopped initrd-setup-root.service. Feb 13 07:36:17.446611 systemd[1]: Stopping network-cleanup.service... Feb 13 07:36:17.458560 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 07:36:17.458794 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 13 07:36:17.473840 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 07:36:17.473966 systemd[1]: Stopped systemd-sysctl.service. Feb 13 07:36:17.490034 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 07:36:17.490170 systemd[1]: Stopped systemd-modules-load.service. Feb 13 07:36:17.506083 systemd[1]: Stopping systemd-udevd.service... Feb 13 07:36:17.524391 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 07:36:17.882361 systemd-journald[268]: Received SIGTERM from PID 1 (n/a). Feb 13 07:36:17.525770 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 07:36:17.526080 systemd[1]: Stopped systemd-udevd.service. Feb 13 07:36:17.537499 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 07:36:17.537616 systemd[1]: Closed systemd-udevd-control.socket. Feb 13 07:36:17.551761 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 07:36:17.551860 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 13 07:36:17.566623 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 07:36:17.566765 systemd[1]: Stopped dracut-pre-udev.service. Feb 13 07:36:17.573541 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 07:36:17.573566 systemd[1]: Stopped dracut-cmdline.service. Feb 13 07:36:17.595476 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 07:36:17.595509 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 13 07:36:17.611194 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 13 07:36:17.626431 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 07:36:17.626459 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 13 07:36:17.641500 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 07:36:17.641530 systemd[1]: Stopped kmod-static-nodes.service. Feb 13 07:36:17.657510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 07:36:17.657551 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 13 07:36:17.675300 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 07:36:17.676225 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 07:36:17.676377 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 13 07:36:17.783127 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 07:36:17.783352 systemd[1]: Stopped network-cleanup.service. Feb 13 07:36:17.800954 systemd[1]: Reached target initrd-switch-root.target. Feb 13 07:36:17.818413 systemd[1]: Starting initrd-switch-root.service... Feb 13 07:36:17.838687 systemd[1]: Switching root. Feb 13 07:36:17.882910 systemd-journald[268]: Journal stopped Feb 13 07:36:21.809111 kernel: SELinux: Class mctp_socket not defined in policy. 
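Between "Journal stopped" (07:36:17.882910) and the first post-pivot kernel line (07:36:21.809111) roughly four seconds elapse while the system switches root and restarts journald under the new policy. A sketch for measuring such gaps from the console prefixes; the year is an assumption, since the timestamp prefix omits it:

    from datetime import datetime

    def stamp(prefix, year=2024):
        """Parse a 'Feb 13 07:36:17.882910' console prefix; the year is assumed (it is not logged)."""
        return datetime.strptime(f'{year} {prefix}', '%Y %b %d %H:%M:%S.%f')

    gap = stamp('Feb 13 07:36:21.809111') - stamp('Feb 13 07:36:17.882910')
    print(gap.total_seconds())   # ~3.93s between "Journal stopped" and the next kernel line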
Feb 13 07:36:21.809124 kernel: SELinux: Class anon_inode not defined in policy. Feb 13 07:36:21.809132 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 13 07:36:21.809138 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 07:36:21.809143 kernel: SELinux: policy capability open_perms=1 Feb 13 07:36:21.809148 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 07:36:21.809154 kernel: SELinux: policy capability always_check_network=0 Feb 13 07:36:21.809159 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 07:36:21.809164 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 07:36:21.809170 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 07:36:21.809176 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 07:36:21.809181 systemd[1]: Successfully loaded SELinux policy in 310.114ms. Feb 13 07:36:21.809188 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.674ms. Feb 13 07:36:21.809195 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 13 07:36:21.809202 systemd[1]: Detected architecture x86-64. Feb 13 07:36:21.809208 systemd[1]: Detected first boot. Feb 13 07:36:21.809214 systemd[1]: Hostname set to . Feb 13 07:36:21.809220 systemd[1]: Initializing machine ID from random generator. Feb 13 07:36:21.809226 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 13 07:36:21.809231 systemd[1]: Populated /etc with preset unit settings. Feb 13 07:36:21.809237 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 07:36:21.809245 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:36:21.809251 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:36:21.809257 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 07:36:21.809263 systemd[1]: Stopped initrd-switch-root.service. Feb 13 07:36:21.809269 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 07:36:21.809276 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 13 07:36:21.809283 systemd[1]: Created slice system-addon\x2drun.slice. Feb 13 07:36:21.809289 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 13 07:36:21.809295 systemd[1]: Created slice system-getty.slice. Feb 13 07:36:21.809300 systemd[1]: Created slice system-modprobe.slice. Feb 13 07:36:21.809306 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 13 07:36:21.809312 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 13 07:36:21.809318 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 13 07:36:21.809324 systemd[1]: Created slice user.slice. Feb 13 07:36:21.809330 systemd[1]: Started systemd-ask-password-console.path. Feb 13 07:36:21.809337 systemd[1]: Started systemd-ask-password-wall.path. 
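The systemd banner in the entry above encodes the build's compile-time options as +FLAG/-FLAG tokens. A small illustrative sketch that splits that banner (copied verbatim from the log) into enabled and disabled feature sets:

    def split_features(banner):
        """Split systemd's '+FOO -BAR' feature banner into enabled and disabled sets."""
        enabled = {t[1:] for t in banner.split() if t.startswith('+')}
        disabled = {t[1:] for t in banner.split() if t.startswith('-')}
        return enabled, disabled

    banner = ('+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS '
              '+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD '
              '+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 '
              '+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT')
    enabled, disabled = split_features(banner)
    print(sorted(disabled))   # APPARMOR, TPM2, BPF_FRAMEWORK, ... are compiled out on this build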
Feb 13 07:36:21.809343 systemd[1]: Set up automount boot.automount. Feb 13 07:36:21.809349 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 13 07:36:21.809357 systemd[1]: Stopped target initrd-switch-root.target. Feb 13 07:36:21.809365 systemd[1]: Stopped target initrd-fs.target. Feb 13 07:36:21.809371 systemd[1]: Stopped target initrd-root-fs.target. Feb 13 07:36:21.809400 systemd[1]: Reached target integritysetup.target. Feb 13 07:36:21.809406 systemd[1]: Reached target remote-cryptsetup.target. Feb 13 07:36:21.809429 systemd[1]: Reached target remote-fs.target. Feb 13 07:36:21.809436 systemd[1]: Reached target slices.target. Feb 13 07:36:21.809442 systemd[1]: Reached target swap.target. Feb 13 07:36:21.809448 systemd[1]: Reached target torcx.target. Feb 13 07:36:21.809454 systemd[1]: Reached target veritysetup.target. Feb 13 07:36:21.809461 systemd[1]: Listening on systemd-coredump.socket. Feb 13 07:36:21.809467 systemd[1]: Listening on systemd-initctl.socket. Feb 13 07:36:21.809473 systemd[1]: Listening on systemd-networkd.socket. Feb 13 07:36:21.809480 systemd[1]: Listening on systemd-udevd-control.socket. Feb 13 07:36:21.809487 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 13 07:36:21.809493 systemd[1]: Listening on systemd-userdbd.socket. Feb 13 07:36:21.809499 systemd[1]: Mounting dev-hugepages.mount... Feb 13 07:36:21.809506 systemd[1]: Mounting dev-mqueue.mount... Feb 13 07:36:21.809512 systemd[1]: Mounting media.mount... Feb 13 07:36:21.809519 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 07:36:21.809526 systemd[1]: Mounting sys-kernel-debug.mount... Feb 13 07:36:21.809532 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 13 07:36:21.809538 systemd[1]: Mounting tmp.mount... Feb 13 07:36:21.809544 systemd[1]: Starting flatcar-tmpfiles.service... Feb 13 07:36:21.809551 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 13 07:36:21.809557 systemd[1]: Starting kmod-static-nodes.service... Feb 13 07:36:21.809563 systemd[1]: Starting modprobe@configfs.service... Feb 13 07:36:21.809570 systemd[1]: Starting modprobe@dm_mod.service... Feb 13 07:36:21.809577 systemd[1]: Starting modprobe@drm.service... Feb 13 07:36:21.809583 systemd[1]: Starting modprobe@efi_pstore.service... Feb 13 07:36:21.809590 systemd[1]: Starting modprobe@fuse.service... Feb 13 07:36:21.809596 kernel: fuse: init (API version 7.34) Feb 13 07:36:21.809602 systemd[1]: Starting modprobe@loop.service... Feb 13 07:36:21.809608 kernel: loop: module loaded Feb 13 07:36:21.809614 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 07:36:21.809621 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 07:36:21.809628 systemd[1]: Stopped systemd-fsck-root.service. Feb 13 07:36:21.809634 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 07:36:21.809640 kernel: kauditd_printk_skb: 60 callbacks suppressed Feb 13 07:36:21.809646 kernel: audit: type=1131 audit(1707809781.450:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:21.809652 systemd[1]: Stopped systemd-fsck-usr.service. 
Feb 13 07:36:21.809659 kernel: audit: type=1131 audit(1707809781.538:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:21.809664 systemd[1]: Stopped systemd-journald.service. Feb 13 07:36:21.809671 kernel: audit: type=1130 audit(1707809781.602:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:21.809678 kernel: audit: type=1131 audit(1707809781.602:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:21.809684 kernel: audit: type=1334 audit(1707809781.688:107): prog-id=15 op=LOAD Feb 13 07:36:21.809689 kernel: audit: type=1334 audit(1707809781.706:108): prog-id=16 op=LOAD Feb 13 07:36:21.809695 kernel: audit: type=1334 audit(1707809781.724:109): prog-id=17 op=LOAD Feb 13 07:36:21.809701 kernel: audit: type=1334 audit(1707809781.742:110): prog-id=13 op=UNLOAD Feb 13 07:36:21.809706 systemd[1]: Starting systemd-journald.service... Feb 13 07:36:21.809713 kernel: audit: type=1334 audit(1707809781.742:111): prog-id=14 op=UNLOAD Feb 13 07:36:21.809719 systemd[1]: Starting systemd-modules-load.service... Feb 13 07:36:21.809727 kernel: audit: type=1305 audit(1707809781.806:112): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 07:36:21.809734 systemd-journald[1252]: Journal started Feb 13 07:36:21.809758 systemd-journald[1252]: Runtime Journal (/run/log/journal/b15f0090ff3242fc8c6a805361e72d3e) is 8.0M, max 640.1M, 632.1M free. 
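The journald line just above reports the runtime journal as 8.0M used against a 640.1M cap with 632.1M free; the used figure is simply cap minus free. A tiny sketch, assuming only that line format, that parses and cross-checks the numbers:

    import re

    SIZES = re.compile(r'is (?P<used>[\d.]+)M, max (?P<cap>[\d.]+)M, (?P<free>[\d.]+)M free')

    line = ('systemd-journald[1252]: Runtime Journal '
            '(/run/log/journal/b15f0090ff3242fc8c6a805361e72d3e) '
            'is 8.0M, max 640.1M, 632.1M free.')
    m = SIZES.search(line)
    used, cap, free = (float(m.group(g)) for g in ('used', 'cap', 'free'))
    print(used, round(cap - free, 1))   # 8.0 8.0 -- the two figures agree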
Feb 13 07:36:18.266000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 07:36:18.565000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 07:36:18.568000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 07:36:18.568000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 13 07:36:18.568000 audit: BPF prog-id=10 op=LOAD Feb 13 07:36:18.568000 audit: BPF prog-id=10 op=UNLOAD Feb 13 07:36:18.568000 audit: BPF prog-id=11 op=LOAD Feb 13 07:36:18.568000 audit: BPF prog-id=11 op=UNLOAD Feb 13 07:36:18.631000 audit[1141]: AVC avc: denied { associate } for pid=1141 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 13 07:36:18.631000 audit[1141]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001a58dc a1=c00002ce58 a2=c00002bb00 a3=32 items=0 ppid=1124 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:36:18.631000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 07:36:18.657000 audit[1141]: AVC avc: denied { associate } for pid=1141 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 13 07:36:18.657000 audit[1141]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001a59b5 a2=1ed a3=0 items=2 ppid=1124 pid=1141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:36:18.657000 audit: CWD cwd="/" Feb 13 07:36:18.657000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:18.657000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:18.657000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 13 07:36:20.182000 audit: BPF prog-id=12 op=LOAD Feb 13 07:36:20.182000 audit: BPF prog-id=3 op=UNLOAD Feb 13 07:36:20.182000 audit: BPF prog-id=13 op=LOAD Feb 13 07:36:20.182000 audit: BPF prog-id=14 
op=LOAD Feb 13 07:36:20.182000 audit: BPF prog-id=4 op=UNLOAD Feb 13 07:36:20.182000 audit: BPF prog-id=5 op=UNLOAD Feb 13 07:36:20.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:20.231000 audit: BPF prog-id=12 op=UNLOAD Feb 13 07:36:20.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:20.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:21.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:21.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:21.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:21.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:21.688000 audit: BPF prog-id=15 op=LOAD Feb 13 07:36:21.706000 audit: BPF prog-id=16 op=LOAD Feb 13 07:36:21.724000 audit: BPF prog-id=17 op=LOAD Feb 13 07:36:21.742000 audit: BPF prog-id=13 op=UNLOAD Feb 13 07:36:21.742000 audit: BPF prog-id=14 op=UNLOAD Feb 13 07:36:21.806000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 13 07:36:18.629551 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:36:20.180915 systemd[1]: Queued start job for default target multi-user.target. Feb 13 07:36:18.630073 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 07:36:20.183504 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 13 07:36:18.630085 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 07:36:18.630103 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 13 07:36:18.630109 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 13 07:36:18.630126 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 13 07:36:18.630134 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 13 07:36:18.630249 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 13 07:36:18.630271 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 13 07:36:18.630280 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 13 07:36:18.630808 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 13 07:36:18.630829 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 13 07:36:18.630840 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 13 07:36:18.630848 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 13 07:36:18.630858 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 13 07:36:18.630866 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 13 07:36:19.822434 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:19Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:36:19.822576 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:19Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init 
/bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:36:19.822632 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:19Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:36:19.822726 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:19Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 13 07:36:19.822755 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:19Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 13 07:36:19.822789 /usr/lib/systemd/system-generators/torcx-generator[1141]: time="2024-02-13T07:36:19Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 13 07:36:21.806000 audit[1252]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff29736540 a2=4000 a3=7fff297365dc items=0 ppid=1 pid=1252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:36:21.806000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 13 07:36:21.887544 systemd[1]: Starting systemd-network-generator.service... Feb 13 07:36:21.914398 systemd[1]: Starting systemd-remount-fs.service... Feb 13 07:36:21.941413 systemd[1]: Starting systemd-udev-trigger.service... Feb 13 07:36:21.984130 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 07:36:21.984152 systemd[1]: Stopped verity-setup.service. Feb 13 07:36:21.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.030401 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 07:36:22.050534 systemd[1]: Started systemd-journald.service. Feb 13 07:36:22.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.058985 systemd[1]: Mounted dev-hugepages.mount. Feb 13 07:36:22.066624 systemd[1]: Mounted dev-mqueue.mount. Feb 13 07:36:22.073627 systemd[1]: Mounted media.mount. Feb 13 07:36:22.080616 systemd[1]: Mounted sys-kernel-debug.mount. Feb 13 07:36:22.089621 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 13 07:36:22.098600 systemd[1]: Mounted tmp.mount. Feb 13 07:36:22.105720 systemd[1]: Finished flatcar-tmpfiles.service. 
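The torcx-generator entries above are structured key=value lines (time, level, msg, plus message-specific fields such as name, path, image, reference). A rough Python sketch for splitting such a line into a dict; the regex is an assumption that covers the quoted and bare values as they appear here, and the sample line is taken from the capture above:

    import re

    # key=value fields with either a double-quoted or a bare value, as in the lines above
    FIELD = re.compile(r'(\w+)=("((?:[^"\\]|\\.)*)"|\S+)')

    def parse_torcx(line):
        """Return the key=value fields of a torcx-generator log line as a dict."""
        return {key: (inner if raw.startswith('"') else raw)
                for key, raw, inner in FIELD.findall(line)}

    sample = ('time="2024-02-13T07:36:18Z" level=debug msg="profile found" '
              'name=vendor path=/usr/share/torcx/profiles/vendor.json')
    print(parse_torcx(sample))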
Feb 13 07:36:22.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.114701 systemd[1]: Finished kmod-static-nodes.service. Feb 13 07:36:22.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.123718 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 07:36:22.123826 systemd[1]: Finished modprobe@configfs.service. Feb 13 07:36:22.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.132800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 07:36:22.132940 systemd[1]: Finished modprobe@dm_mod.service. Feb 13 07:36:22.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.142922 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 07:36:22.143122 systemd[1]: Finished modprobe@drm.service. Feb 13 07:36:22.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.152194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 07:36:22.152524 systemd[1]: Finished modprobe@efi_pstore.service. Feb 13 07:36:22.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.161301 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 07:36:22.161709 systemd[1]: Finished modprobe@fuse.service. Feb 13 07:36:22.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:36:22.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.171268 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 07:36:22.171671 systemd[1]: Finished modprobe@loop.service. Feb 13 07:36:22.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.180295 systemd[1]: Finished systemd-modules-load.service. Feb 13 07:36:22.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.189251 systemd[1]: Finished systemd-network-generator.service. Feb 13 07:36:22.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.198263 systemd[1]: Finished systemd-remount-fs.service. Feb 13 07:36:22.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.207214 systemd[1]: Finished systemd-udev-trigger.service. Feb 13 07:36:22.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.216813 systemd[1]: Reached target network-pre.target. Feb 13 07:36:22.229178 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 13 07:36:22.240165 systemd[1]: Mounting sys-kernel-config.mount... Feb 13 07:36:22.248611 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 07:36:22.249620 systemd[1]: Starting systemd-hwdb-update.service... Feb 13 07:36:22.256969 systemd[1]: Starting systemd-journal-flush.service... Feb 13 07:36:22.260471 systemd-journald[1252]: Time spent on flushing to /var/log/journal/b15f0090ff3242fc8c6a805361e72d3e is 14.637ms for 1593 entries. Feb 13 07:36:22.260471 systemd-journald[1252]: System Journal (/var/log/journal/b15f0090ff3242fc8c6a805361e72d3e) is 8.0M, max 195.6M, 187.6M free. Feb 13 07:36:22.298687 systemd-journald[1252]: Received client request to flush runtime journal. Feb 13 07:36:22.273480 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 07:36:22.273999 systemd[1]: Starting systemd-random-seed.service... Feb 13 07:36:22.288489 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 13 07:36:22.288997 systemd[1]: Starting systemd-sysctl.service... Feb 13 07:36:22.296185 systemd[1]: Starting systemd-sysusers.service... 
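journald reports 14.637ms spent flushing 1593 entries to /var/log/journal, and a system journal of 8.0M (max 195.6M, 187.6M free). Two quick back-of-the-envelope checks on those figures:

    flush_ms, entries = 14.637, 1593            # figures reported by systemd-journald above
    print(round(flush_ms / entries * 1000, 2))  # ~9.19 microseconds of flush work per entry
    sys_max, sys_free = 195.6, 187.6            # System Journal line: max 195.6M, 187.6M free
    print(round(sys_max - sys_free, 1))         # 8.0M in use, matching the reported 8.0M size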
Feb 13 07:36:22.303934 systemd[1]: Starting systemd-udev-settle.service... Feb 13 07:36:22.311467 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 13 07:36:22.319516 systemd[1]: Mounted sys-kernel-config.mount. Feb 13 07:36:22.327554 systemd[1]: Finished systemd-journal-flush.service. Feb 13 07:36:22.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.335513 systemd[1]: Finished systemd-random-seed.service. Feb 13 07:36:22.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.343573 systemd[1]: Finished systemd-sysctl.service. Feb 13 07:36:22.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.351570 systemd[1]: Finished systemd-sysusers.service. Feb 13 07:36:22.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.360625 systemd[1]: Reached target first-boot-complete.target. Feb 13 07:36:22.369113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 13 07:36:22.378565 udevadm[1268]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 07:36:22.388469 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 13 07:36:22.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.552608 systemd[1]: Finished systemd-hwdb-update.service. Feb 13 07:36:22.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.561000 audit: BPF prog-id=18 op=LOAD Feb 13 07:36:22.562000 audit: BPF prog-id=19 op=LOAD Feb 13 07:36:22.562000 audit: BPF prog-id=7 op=UNLOAD Feb 13 07:36:22.562000 audit: BPF prog-id=8 op=UNLOAD Feb 13 07:36:22.562706 systemd[1]: Starting systemd-udevd.service... Feb 13 07:36:22.574059 systemd-udevd[1271]: Using default interface naming scheme 'v252'. Feb 13 07:36:22.593091 systemd[1]: Started systemd-udevd.service. Feb 13 07:36:22.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.603377 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Feb 13 07:36:22.603000 audit: BPF prog-id=20 op=LOAD Feb 13 07:36:22.604519 systemd[1]: Starting systemd-networkd.service... 
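The audit stream interleaves "BPF prog-id=N op=LOAD/UNLOAD" events as systemd swaps its restriction programs (for example prog-id 18 and 19 loaded while 7 and 8 are unloaded above). A sketch, assuming only the audit line format shown, that replays such events to see which program ids remain loaded:

    import re

    BPF = re.compile(r'audit: BPF prog-id=(?P<id>\d+) op=(?P<op>LOAD|UNLOAD)')

    def live_bpf_programs(lines):
        """Replay audit BPF events and return the program ids still loaded at the end."""
        live = set()
        for line in lines:
            m = BPF.search(line)
            if not m:
                continue
            if m.group('op') == 'LOAD':
                live.add(int(m.group('id')))
            else:
                live.discard(int(m.group('id')))
        return live

    demo = ['audit: BPF prog-id=18 op=LOAD',     # events as they appear in the capture above
            'audit: BPF prog-id=19 op=LOAD',
            'audit: BPF prog-id=7 op=UNLOAD',
            'audit: BPF prog-id=8 op=UNLOAD']
    print(sorted(live_bpf_programs(demo)))       # [18, 19]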
Feb 13 07:36:22.631375 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Feb 13 07:36:22.631000 audit: BPF prog-id=21 op=LOAD Feb 13 07:36:22.632000 audit: BPF prog-id=22 op=LOAD Feb 13 07:36:22.635364 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 07:36:22.635400 kernel: IPMI message handler: version 39.2 Feb 13 07:36:22.635416 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1351) Feb 13 07:36:22.635432 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 07:36:22.668385 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 07:36:22.711000 audit: BPF prog-id=23 op=LOAD Feb 13 07:36:22.712288 systemd[1]: Starting systemd-userdbd.service... Feb 13 07:36:22.641000 audit[1346]: AVC avc: denied { confidentiality } for pid=1346 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 13 07:36:22.641000 audit[1346]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7f6b2b7d6010 a1=4d8bc a2=7f6b2d48ebc5 a3=5 items=42 ppid=1271 pid=1346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:36:22.641000 audit: CWD cwd="/" Feb 13 07:36:22.641000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=1 name=(null) inode=16061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=2 name=(null) inode=16061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=3 name=(null) inode=16062 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=4 name=(null) inode=16061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=5 name=(null) inode=16063 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=6 name=(null) inode=16061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=7 name=(null) inode=16064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=8 name=(null) inode=16064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=9 name=(null) inode=16065 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=10 name=(null) inode=16064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=11 name=(null) inode=16066 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=12 name=(null) inode=16064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=13 name=(null) inode=16067 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=14 name=(null) inode=16064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=15 name=(null) inode=16068 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=16 name=(null) inode=16064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=17 name=(null) inode=16069 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=18 name=(null) inode=16061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=19 name=(null) inode=16070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=20 name=(null) inode=16070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=21 name=(null) inode=16071 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=22 name=(null) inode=16070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=23 name=(null) inode=16072 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=24 name=(null) inode=16070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=25 name=(null) inode=16073 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH 
item=26 name=(null) inode=16070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=27 name=(null) inode=16074 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=28 name=(null) inode=16070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=29 name=(null) inode=16075 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=30 name=(null) inode=16061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=31 name=(null) inode=16076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=32 name=(null) inode=16076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=33 name=(null) inode=16077 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=34 name=(null) inode=16076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=35 name=(null) inode=16078 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=36 name=(null) inode=16076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=37 name=(null) inode=16079 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=38 name=(null) inode=16076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=39 name=(null) inode=16080 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=40 name=(null) inode=16076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PATH item=41 name=(null) inode=16081 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 13 07:36:22.641000 audit: PROCTITLE proctitle="(udev-worker)" Feb 13 07:36:22.750042 systemd[1]: Found 
device dev-disk-by\x2dlabel-OEM.device. Feb 13 07:36:22.753362 kernel: ipmi device interface Feb 13 07:36:22.753395 kernel: ACPI: button: Power Button [PWRF] Feb 13 07:36:22.794377 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Feb 13 07:36:22.794562 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Feb 13 07:36:22.828752 systemd[1]: Started systemd-userdbd.service. Feb 13 07:36:22.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:22.884960 kernel: ipmi_si: IPMI System Interface driver Feb 13 07:36:22.885030 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Feb 13 07:36:22.885156 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Feb 13 07:36:22.885344 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Feb 13 07:36:22.925288 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Feb 13 07:36:22.925317 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Feb 13 07:36:22.925335 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Feb 13 07:36:22.965543 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Feb 13 07:36:22.965689 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Feb 13 07:36:23.049364 kernel: iTCO_vendor_support: vendor-support=0 Feb 13 07:36:23.088314 systemd-networkd[1320]: bond0: netdev ready Feb 13 07:36:23.090364 systemd-networkd[1320]: lo: Link UP Feb 13 07:36:23.090367 systemd-networkd[1320]: lo: Gained carrier Feb 13 07:36:23.090828 systemd-networkd[1320]: Enumeration completed Feb 13 07:36:23.090888 systemd[1]: Started systemd-networkd.service. Feb 13 07:36:23.091024 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Feb 13 07:36:23.091138 kernel: ipmi_si: Adding ACPI-specified kcs state machine Feb 13 07:36:23.091152 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Feb 13 07:36:23.091133 systemd-networkd[1320]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Feb 13 07:36:23.091879 systemd-networkd[1320]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:79.network. Feb 13 07:36:23.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:23.138364 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Feb 13 07:36:23.138489 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Feb 13 07:36:23.157358 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Feb 13 07:36:23.194361 kernel: intel_rapl_common: Found RAPL domain package Feb 13 07:36:23.211359 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Feb 13 07:36:23.211469 kernel: intel_rapl_common: Found RAPL domain core Feb 13 07:36:23.267944 kernel: intel_rapl_common: Found RAPL domain dram Feb 13 07:36:23.333388 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Feb 13 07:36:23.353391 kernel: ipmi_ssif: IPMI SSIF Interface driver Feb 13 07:36:23.357629 systemd[1]: Finished systemd-udev-settle.service. 
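The bond0 bring-up above is driven entirely by the unit files systemd-networkd reports loading: 05-bond0.network for the bond itself and one 10-<mac>.network file per slave NIC. Their contents are not captured in this log (they are written during provisioning), but given the 802.3ad warnings the bonding driver prints further down, they would look roughly like the following sketch; the file names come from the log, every option value is an assumption.

    # /etc/systemd/network/05-bond0.netdev (assumed; only the .network files are named in the log)
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad            # assumed from the "No 802.3ad response" kernel warnings below

    # /etc/systemd/network/05-bond0.network
    [Match]
    Name=bond0

    [Network]
    DHCP=no                 # assumption; Packet/Equinix Metal provisioning normally injects static addressing

    # /etc/systemd/network/10-1c:34:da:5c:29:79.network (one such file per slave NIC)
    [Match]
    MACAddress=1c:34:da:5c:29:79

    [Network]
    Bond=bond0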
Feb 13 07:36:23.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:23.366083 systemd[1]: Starting lvm2-activation-early.service... Feb 13 07:36:23.382207 lvm[1379]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 07:36:23.417761 systemd[1]: Finished lvm2-activation-early.service. Feb 13 07:36:23.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:23.425484 systemd[1]: Reached target cryptsetup.target. Feb 13 07:36:23.434031 systemd[1]: Starting lvm2-activation.service... Feb 13 07:36:23.436270 lvm[1380]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 07:36:23.475201 systemd[1]: Finished lvm2-activation.service. Feb 13 07:36:23.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:23.483618 systemd[1]: Reached target local-fs-pre.target. Feb 13 07:36:23.491557 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 07:36:23.491616 systemd[1]: Reached target local-fs.target. Feb 13 07:36:23.499437 systemd[1]: Reached target machines.target. Feb 13 07:36:23.508057 systemd[1]: Starting ldconfig.service... Feb 13 07:36:23.514818 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 13 07:36:23.514840 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:36:23.515375 systemd[1]: Starting systemd-boot-update.service... Feb 13 07:36:23.522855 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 13 07:36:23.532930 systemd[1]: Starting systemd-machine-id-commit.service... Feb 13 07:36:23.533015 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 13 07:36:23.533036 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 13 07:36:23.533520 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 13 07:36:23.533720 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1382 (bootctl) Feb 13 07:36:23.534266 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 13 07:36:23.542713 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 13 07:36:23.550972 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 07:36:23.553799 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 13 07:36:23.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 13 07:36:23.566901 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 07:36:23.702373 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 07:36:23.730371 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Feb 13 07:36:23.730506 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:36:23.771396 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Feb 13 07:36:23.771428 systemd-networkd[1320]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. Feb 13 07:36:23.811778 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 07:36:23.812153 systemd[1]: Finished systemd-machine-id-commit.service. Feb 13 07:36:23.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:23.854057 systemd-fsck[1390]: fsck.fat 4.2 (2021-01-31) Feb 13 07:36:23.854057 systemd-fsck[1390]: /dev/sda1: 789 files, 115339/258078 clusters Feb 13 07:36:23.854908 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 13 07:36:23.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:23.876551 systemd[1]: Mounting boot.mount... Feb 13 07:36:23.882401 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Feb 13 07:36:23.896313 systemd[1]: Mounted boot.mount. Feb 13 07:36:23.919456 systemd[1]: Finished systemd-boot-update.service. Feb 13 07:36:23.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:23.938358 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Feb 13 07:36:23.948941 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 13 07:36:23.962359 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Feb 13 07:36:23.963180 systemd-networkd[1320]: bond0: Link UP Feb 13 07:36:23.963385 systemd-networkd[1320]: enp1s0f1np1: Link UP Feb 13 07:36:23.963519 systemd-networkd[1320]: enp1s0f1np1: Gained carrier Feb 13 07:36:23.964652 systemd-networkd[1320]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:29:78.network. Feb 13 07:36:23.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 13 07:36:23.980263 systemd[1]: Starting audit-rules.service... Feb 13 07:36:23.999013 systemd[1]: Starting clean-ca-certificates.service... Feb 13 07:36:24.002637 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 07:36:24.002668 kernel: bond0: active interface up! 
Feb 13 07:36:24.006000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 13 07:36:24.006000 audit[1410]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff84af9e60 a2=420 a3=0 items=0 ppid=1394 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 13 07:36:24.006000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 13 07:36:24.007154 augenrules[1410]: No rules Feb 13 07:36:24.020031 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 13 07:36:24.025358 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Feb 13 07:36:24.033392 systemd[1]: Starting systemd-resolved.service... Feb 13 07:36:24.041332 systemd[1]: Starting systemd-timesyncd.service... Feb 13 07:36:24.048931 systemd[1]: Starting systemd-update-utmp.service... Feb 13 07:36:24.055713 systemd[1]: Finished audit-rules.service. Feb 13 07:36:24.062569 systemd[1]: Finished clean-ca-certificates.service. Feb 13 07:36:24.068867 ldconfig[1381]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 07:36:24.070553 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 13 07:36:24.080576 systemd[1]: Finished ldconfig.service. Feb 13 07:36:24.089187 systemd[1]: Finished systemd-update-utmp.service. Feb 13 07:36:24.097110 systemd-networkd[1320]: bond0: Gained carrier Feb 13 07:36:24.097242 systemd-networkd[1320]: enp1s0f0np0: Link UP Feb 13 07:36:24.097380 systemd-networkd[1320]: enp1s0f0np0: Gained carrier Feb 13 07:36:24.098096 systemd[1]: Starting systemd-update-done.service... Feb 13 07:36:24.105400 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 07:36:24.105575 systemd[1]: Finished systemd-update-done.service. Feb 13 07:36:24.109626 systemd-networkd[1320]: enp1s0f1np1: Link DOWN Feb 13 07:36:24.109628 systemd-networkd[1320]: enp1s0f1np1: Lost carrier Feb 13 07:36:24.115076 systemd[1]: Started systemd-timesyncd.service. Feb 13 07:36:24.116242 systemd-resolved[1416]: Positive Trust Anchors: Feb 13 07:36:24.116247 systemd-resolved[1416]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 07:36:24.116265 systemd-resolved[1416]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 13 07:36:24.119542 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Feb 13 07:36:24.119721 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Feb 13 07:36:24.119925 systemd-resolved[1416]: Using system hostname 'ci-3510.3.2-a-9e65c995fd'. Feb 13 07:36:24.123653 systemd[1]: Reached target time-set.target. 
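The audit-rules entries at the start of this stretch are easier to read once the hex PROCTITLE is decoded; it is simply the process's argv with NUL separators, hex-encoded by auditd. It can be decoded with stock tools (nothing host-specific here):

    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '; echo
    # prints: /sbin/auditctl -R /etc/audit/audit.rules

which fits the augenrules "No rules" line above: auditctl reloaded an /etc/audit/audit.rules that currently defines no rules.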
Feb 13 07:36:24.151360 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:36:24.174359 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:36:24.197358 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:36:24.220358 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:36:24.242363 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:36:24.263359 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:36:24.280370 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Feb 13 07:36:24.280644 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Feb 13 07:36:24.284140 systemd-networkd[1320]: enp1s0f1np1: Link UP Feb 13 07:36:24.284287 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Feb 13 07:36:24.284303 systemd-networkd[1320]: enp1s0f1np1: Gained carrier Feb 13 07:36:24.284328 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Feb 13 07:36:24.285116 systemd[1]: Started systemd-resolved.service. Feb 13 07:36:24.300360 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Feb 13 07:36:24.322487 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Feb 13 07:36:24.322586 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Feb 13 07:36:24.324520 systemd[1]: Reached target network.target. Feb 13 07:36:24.332439 systemd[1]: Reached target nss-lookup.target. Feb 13 07:36:24.340444 systemd[1]: Reached target sysinit.target. Feb 13 07:36:24.348472 systemd[1]: Started motdgen.path. Feb 13 07:36:24.355452 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 13 07:36:24.365506 systemd[1]: Started logrotate.timer. Feb 13 07:36:24.372478 systemd[1]: Started mdadm.timer. Feb 13 07:36:24.379425 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 13 07:36:24.387425 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 07:36:24.387443 systemd[1]: Reached target paths.target. Feb 13 07:36:24.394425 systemd[1]: Reached target timers.target. Feb 13 07:36:24.401554 systemd[1]: Listening on dbus.socket. Feb 13 07:36:24.416041 systemd[1]: Starting docker.socket... Feb 13 07:36:24.422359 kernel: bond0: (slave enp1s0f1np1): link status up again after 100 ms Feb 13 07:36:24.438877 systemd[1]: Listening on sshd.socket. Feb 13 07:36:24.443357 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Feb 13 07:36:24.449511 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:36:24.449721 systemd[1]: Listening on docker.socket. Feb 13 07:36:24.456502 systemd[1]: Reached target sockets.target. Feb 13 07:36:24.464459 systemd[1]: Reached target basic.target. Feb 13 07:36:24.471490 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 13 07:36:24.471503 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. 
Feb 13 07:36:24.471945 systemd[1]: Starting containerd.service... Feb 13 07:36:24.478865 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 13 07:36:24.487929 systemd[1]: Starting coreos-metadata.service... Feb 13 07:36:24.494918 systemd[1]: Starting dbus.service... Feb 13 07:36:24.500873 systemd[1]: Starting enable-oem-cloudinit.service... Feb 13 07:36:24.506295 jq[1432]: false Feb 13 07:36:24.507956 coreos-metadata[1425]: Feb 13 07:36:24.507 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:36:24.508908 systemd[1]: Starting extend-filesystems.service... Feb 13 07:36:24.515729 dbus-daemon[1431]: [system] SELinux support is enabled Feb 13 07:36:24.516424 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 13 07:36:24.516556 extend-filesystems[1434]: Found sda Feb 13 07:36:24.523438 extend-filesystems[1434]: Found sda1 Feb 13 07:36:24.523438 extend-filesystems[1434]: Found sda2 Feb 13 07:36:24.523438 extend-filesystems[1434]: Found sda3 Feb 13 07:36:24.523438 extend-filesystems[1434]: Found usr Feb 13 07:36:24.523438 extend-filesystems[1434]: Found sda4 Feb 13 07:36:24.523438 extend-filesystems[1434]: Found sda6 Feb 13 07:36:24.523438 extend-filesystems[1434]: Found sda7 Feb 13 07:36:24.523438 extend-filesystems[1434]: Found sda9 Feb 13 07:36:24.523438 extend-filesystems[1434]: Checking size of /dev/sda9 Feb 13 07:36:24.523438 extend-filesystems[1434]: Resized partition /dev/sda9 Feb 13 07:36:24.671516 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Feb 13 07:36:24.671552 coreos-metadata[1428]: Feb 13 07:36:24.516 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Feb 13 07:36:24.517002 systemd[1]: Starting motdgen.service... Feb 13 07:36:24.671709 extend-filesystems[1449]: resize2fs 1.46.5 (30-Dec-2021) Feb 13 07:36:24.539091 systemd[1]: Starting prepare-cni-plugins.service... Feb 13 07:36:24.559948 systemd[1]: Starting prepare-critools.service... Feb 13 07:36:24.575890 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 13 07:36:24.591860 systemd[1]: Starting sshd-keygen.service... Feb 13 07:36:24.608677 systemd[1]: Starting systemd-logind.service... Feb 13 07:36:24.622388 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 13 07:36:24.691956 update_engine[1463]: I0213 07:36:24.683789 1463 main.cc:92] Flatcar Update Engine starting Feb 13 07:36:24.691956 update_engine[1463]: I0213 07:36:24.687135 1463 update_check_scheduler.cc:74] Next update check in 8m9s Feb 13 07:36:24.622892 systemd[1]: Starting tcsd.service... Feb 13 07:36:24.692120 jq[1464]: true Feb 13 07:36:24.629216 systemd-logind[1461]: Watching system buttons on /dev/input/event3 (Power Button) Feb 13 07:36:24.629225 systemd-logind[1461]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 07:36:24.629234 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Feb 13 07:36:24.629328 systemd-logind[1461]: New seat seat0. Feb 13 07:36:24.634628 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 07:36:24.634984 systemd[1]: Starting update-engine.service... Feb 13 07:36:24.649033 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
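extend-filesystems.service is what grows the root filesystem to fill the disk on first boot: after enlarging the sda9 partition it runs an online resize2fs against the mounted root, which is where both the resize2fs 1.46.5 banner and the kernel's "EXT4-fs (sda9): resizing filesystem" message above come from. The manual equivalent on this layout, assuming the partition has already been grown, is a single command:

    resize2fs /dev/sda9    # online-grows the mounted ext4 root to fill the enlarged partition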
Feb 13 07:36:24.663690 systemd[1]: Started dbus.service. Feb 13 07:36:24.685059 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 07:36:24.685145 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 13 07:36:24.685288 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 07:36:24.685373 systemd[1]: Finished motdgen.service. Feb 13 07:36:24.699372 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 07:36:24.699471 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 13 07:36:24.704367 tar[1466]: ./ Feb 13 07:36:24.704367 tar[1466]: ./loopback Feb 13 07:36:24.710035 jq[1470]: true Feb 13 07:36:24.710327 dbus-daemon[1431]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 07:36:24.711588 tar[1467]: crictl Feb 13 07:36:24.715808 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Feb 13 07:36:24.715925 systemd[1]: Condition check resulted in tcsd.service being skipped. Feb 13 07:36:24.716015 systemd[1]: Started update-engine.service. Feb 13 07:36:24.719244 env[1471]: time="2024-02-13T07:36:24.719202273Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 13 07:36:24.722828 tar[1466]: ./bandwidth Feb 13 07:36:24.728017 env[1471]: time="2024-02-13T07:36:24.727973868Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 07:36:24.729178 env[1471]: time="2024-02-13T07:36:24.729163433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:36:24.729263 systemd[1]: Started systemd-logind.service. Feb 13 07:36:24.729861 env[1471]: time="2024-02-13T07:36:24.729845929Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:36:24.730977 env[1471]: time="2024-02-13T07:36:24.729861363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:36:24.731515 env[1471]: time="2024-02-13T07:36:24.731503205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:36:24.731548 env[1471]: time="2024-02-13T07:36:24.731516213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 07:36:24.731548 env[1471]: time="2024-02-13T07:36:24.731524247Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 13 07:36:24.731548 env[1471]: time="2024-02-13T07:36:24.731530321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 07:36:24.731614 env[1471]: time="2024-02-13T07:36:24.731573418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 07:36:24.731741 env[1471]: time="2024-02-13T07:36:24.731731557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 07:36:24.731811 env[1471]: time="2024-02-13T07:36:24.731801152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 07:36:24.731829 env[1471]: time="2024-02-13T07:36:24.731811289Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 07:36:24.733702 env[1471]: time="2024-02-13T07:36:24.733690399Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 13 07:36:24.733733 env[1471]: time="2024-02-13T07:36:24.733701404Z" level=info msg="metadata content store policy set" policy=shared Feb 13 07:36:24.738037 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Feb 13 07:36:24.739255 systemd[1]: Started locksmithd.service. Feb 13 07:36:24.742510 env[1471]: time="2024-02-13T07:36:24.742494983Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 07:36:24.742547 env[1471]: time="2024-02-13T07:36:24.742515484Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 07:36:24.742547 env[1471]: time="2024-02-13T07:36:24.742523860Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 07:36:24.742547 env[1471]: time="2024-02-13T07:36:24.742541179Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 07:36:24.742597 env[1471]: time="2024-02-13T07:36:24.742549567Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 07:36:24.742597 env[1471]: time="2024-02-13T07:36:24.742556888Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 07:36:24.742597 env[1471]: time="2024-02-13T07:36:24.742563716Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 07:36:24.742597 env[1471]: time="2024-02-13T07:36:24.742570911Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 07:36:24.742597 env[1471]: time="2024-02-13T07:36:24.742578077Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 13 07:36:24.742597 env[1471]: time="2024-02-13T07:36:24.742585539Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 07:36:24.742597 env[1471]: time="2024-02-13T07:36:24.742592199Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 07:36:24.742727 env[1471]: time="2024-02-13T07:36:24.742598949Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 07:36:24.742727 env[1471]: time="2024-02-13T07:36:24.742650671Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 07:36:24.742727 env[1471]: time="2024-02-13T07:36:24.742695074Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 07:36:24.742856 env[1471]: time="2024-02-13T07:36:24.742841690Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 07:36:24.742885 env[1471]: time="2024-02-13T07:36:24.742868147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.742885 env[1471]: time="2024-02-13T07:36:24.742880211Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 07:36:24.742926 env[1471]: time="2024-02-13T07:36:24.742917399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.742945 env[1471]: time="2024-02-13T07:36:24.742930447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.742967 env[1471]: time="2024-02-13T07:36:24.742942554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.742967 env[1471]: time="2024-02-13T07:36:24.742953004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.743001 env[1471]: time="2024-02-13T07:36:24.742966895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.743001 env[1471]: time="2024-02-13T07:36:24.742978425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.743001 env[1471]: time="2024-02-13T07:36:24.742988755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.743057 env[1471]: time="2024-02-13T07:36:24.743001491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.743057 env[1471]: time="2024-02-13T07:36:24.743013939Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 07:36:24.743110 env[1471]: time="2024-02-13T07:36:24.743101383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.743129 env[1471]: time="2024-02-13T07:36:24.743114791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.743156 env[1471]: time="2024-02-13T07:36:24.743126379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 07:36:24.743156 env[1471]: time="2024-02-13T07:36:24.743137261Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 07:36:24.743191 env[1471]: time="2024-02-13T07:36:24.743151514Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 13 07:36:24.743191 env[1471]: time="2024-02-13T07:36:24.743165480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 07:36:24.743191 env[1471]: time="2024-02-13T07:36:24.743184279Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 13 07:36:24.743246 env[1471]: time="2024-02-13T07:36:24.743211351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 07:36:24.743427 env[1471]: time="2024-02-13T07:36:24.743382973Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.743436756Z" level=info msg="Connect containerd service" Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.743471890Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.743860295Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.743949943Z" level=info msg="Start subscribing containerd event" Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.743974231Z" level=info msg="Start recovering state" Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.744005160Z" level=info msg="Start event monitor" Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.744009248Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.744016537Z" level=info msg="Start snapshots syncer" Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.744025180Z" level=info msg="Start cni network conf syncer for default" Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.744031240Z" level=info msg="Start streaming server" Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.744045090Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 07:36:24.745455 env[1471]: time="2024-02-13T07:36:24.744075753Z" level=info msg="containerd successfully booted in 0.025210s" Feb 13 07:36:24.746514 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 07:36:24.746638 systemd[1]: Reached target system-config.target. Feb 13 07:36:24.753634 tar[1466]: ./ptp Feb 13 07:36:24.754517 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 07:36:24.754636 systemd[1]: Reached target user-config.target. Feb 13 07:36:24.763988 systemd[1]: Started containerd.service. Feb 13 07:36:24.770639 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 13 07:36:24.777573 tar[1466]: ./vlan Feb 13 07:36:24.797958 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 07:36:24.800109 tar[1466]: ./host-device Feb 13 07:36:24.821361 tar[1466]: ./tuning Feb 13 07:36:24.840210 tar[1466]: ./vrf Feb 13 07:36:24.859884 tar[1466]: ./sbr Feb 13 07:36:24.879119 tar[1466]: ./tap Feb 13 07:36:24.901142 tar[1466]: ./dhcp Feb 13 07:36:24.957183 tar[1466]: ./static Feb 13 07:36:24.973038 tar[1466]: ./firewall Feb 13 07:36:24.975206 systemd[1]: Finished prepare-critools.service. Feb 13 07:36:24.997457 tar[1466]: ./macvlan Feb 13 07:36:25.019443 tar[1466]: ./dummy Feb 13 07:36:25.028358 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Feb 13 07:36:25.057589 extend-filesystems[1449]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 07:36:25.057589 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 56 Feb 13 07:36:25.057589 extend-filesystems[1449]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Feb 13 07:36:25.094405 extend-filesystems[1434]: Resized filesystem in /dev/sda9 Feb 13 07:36:25.094405 extend-filesystems[1434]: Found sdb Feb 13 07:36:25.109456 tar[1466]: ./bridge Feb 13 07:36:25.109456 tar[1466]: ./ipvlan Feb 13 07:36:25.058026 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 07:36:25.058114 systemd[1]: Finished extend-filesystems.service. Feb 13 07:36:25.116663 tar[1466]: ./portmap Feb 13 07:36:25.137026 tar[1466]: ./host-local Feb 13 07:36:25.160944 systemd[1]: Finished prepare-cni-plugins.service. Feb 13 07:36:25.260618 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 07:36:25.272196 systemd[1]: Finished sshd-keygen.service. Feb 13 07:36:25.280317 systemd[1]: Starting issuegen.service... Feb 13 07:36:25.288786 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 07:36:25.288857 systemd[1]: Finished issuegen.service. Feb 13 07:36:25.296317 systemd[1]: Starting systemd-user-sessions.service... Feb 13 07:36:25.304786 systemd[1]: Finished systemd-user-sessions.service. Feb 13 07:36:25.317081 systemd[1]: Started getty@tty1.service. 
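The long "Start cri plugin with config" dump above is containerd echoing back its effective CRI configuration, not the file on disk. The config.toml itself is not part of this log, but the logged values (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup=true, the pause:3.6 sandbox image, CNI plugins in /opt/cni/bin with configs in /etc/cni/net.d) correspond to a stanza along these lines; treat it as a sketch reconstructed from the log, not the host's actual file:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error logged next to the dump is expected at this point: prepare-cni-plugins has only unpacked binaries into /opt/cni/bin, and nothing has written a network config into /etc/cni/net.d yet.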
Feb 13 07:36:25.327928 systemd[1]: Started serial-getty@ttyS1.service. Feb 13 07:36:25.337182 systemd[1]: Reached target getty.target. Feb 13 07:36:25.714412 systemd-networkd[1320]: bond0: Gained IPv6LL Feb 13 07:36:25.714636 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Feb 13 07:36:25.842599 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Feb 13 07:36:25.842722 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. Feb 13 07:36:26.756388 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Feb 13 07:36:30.346722 login[1534]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 07:36:30.354359 login[1533]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 07:36:30.355222 systemd-logind[1461]: New session 1 of user core. Feb 13 07:36:30.355836 systemd[1]: Created slice user-500.slice. Feb 13 07:36:30.356501 systemd[1]: Starting user-runtime-dir@500.service... Feb 13 07:36:30.357801 systemd-logind[1461]: New session 2 of user core. Feb 13 07:36:30.361659 systemd[1]: Finished user-runtime-dir@500.service. Feb 13 07:36:30.362315 systemd[1]: Starting user@500.service... Feb 13 07:36:30.364314 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:30.439739 systemd[1538]: Queued start job for default target default.target. Feb 13 07:36:30.439972 systemd[1538]: Reached target paths.target. Feb 13 07:36:30.439984 systemd[1538]: Reached target sockets.target. Feb 13 07:36:30.439992 systemd[1538]: Reached target timers.target. Feb 13 07:36:30.439999 systemd[1538]: Reached target basic.target. Feb 13 07:36:30.440018 systemd[1538]: Reached target default.target. Feb 13 07:36:30.440032 systemd[1538]: Startup finished in 72ms. Feb 13 07:36:30.440086 systemd[1]: Started user@500.service. Feb 13 07:36:30.440632 systemd[1]: Started session-1.scope. Feb 13 07:36:30.440984 systemd[1]: Started session-2.scope. Feb 13 07:36:30.643625 coreos-metadata[1425]: Feb 13 07:36:30.643 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 07:36:30.644424 coreos-metadata[1428]: Feb 13 07:36:30.643 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Feb 13 07:36:31.643700 coreos-metadata[1428]: Feb 13 07:36:31.643 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 07:36:31.643954 coreos-metadata[1425]: Feb 13 07:36:31.643 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Feb 13 07:36:32.049362 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Feb 13 07:36:32.049525 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Feb 13 07:36:32.716256 coreos-metadata[1428]: Feb 13 07:36:32.716 INFO Fetch successful Feb 13 07:36:32.716982 coreos-metadata[1425]: Feb 13 07:36:32.716 INFO Fetch successful Feb 13 07:36:32.738785 systemd[1]: Finished coreos-metadata.service. Feb 13 07:36:32.739557 unknown[1425]: wrote ssh authorized keys file for user: core Feb 13 07:36:32.739709 systemd[1]: Started packet-phone-home.service. 
Feb 13 07:36:32.749553 curl[1560]: % Total % Received % Xferd Average Speed Time Time Time Current Feb 13 07:36:32.749553 curl[1560]: Dload Upload Total Spent Left Speed Feb 13 07:36:32.753226 update-ssh-keys[1561]: Updated "/home/core/.ssh/authorized_keys" Feb 13 07:36:32.753454 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 13 07:36:32.753614 systemd[1]: Reached target multi-user.target. Feb 13 07:36:32.754224 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 13 07:36:32.758205 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 13 07:36:32.758279 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 13 07:36:32.758366 systemd[1]: Startup finished in 1.847s (kernel) + 39.104s (initrd) + 14.823s (userspace) = 55.776s. Feb 13 07:36:32.932515 curl[1560]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Feb 13 07:36:32.934987 systemd[1]: packet-phone-home.service: Deactivated successfully. Feb 13 07:36:33.857918 systemd[1]: Created slice system-sshd.slice. Feb 13 07:36:33.858550 systemd[1]: Started sshd@0-139.178.90.101:22-139.178.68.195:38334.service. Feb 13 07:36:33.904082 sshd[1564]: Accepted publickey for core from 139.178.68.195 port 38334 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:36:33.905040 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:33.908236 systemd-logind[1461]: New session 3 of user core. Feb 13 07:36:33.908956 systemd[1]: Started session-3.scope. Feb 13 07:36:33.961645 systemd[1]: Started sshd@1-139.178.90.101:22-139.178.68.195:38346.service. Feb 13 07:36:33.986727 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 38346 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:36:33.987435 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:33.989778 systemd-logind[1461]: New session 4 of user core. Feb 13 07:36:33.990215 systemd[1]: Started session-4.scope. Feb 13 07:36:34.043696 sshd[1569]: pam_unix(sshd:session): session closed for user core Feb 13 07:36:34.045079 systemd[1]: sshd@1-139.178.90.101:22-139.178.68.195:38346.service: Deactivated successfully. Feb 13 07:36:34.045401 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 07:36:34.045791 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit. Feb 13 07:36:34.046270 systemd[1]: Started sshd@2-139.178.90.101:22-139.178.68.195:38354.service. Feb 13 07:36:34.046703 systemd-logind[1461]: Removed session 4. Feb 13 07:36:34.072673 sshd[1575]: Accepted publickey for core from 139.178.68.195 port 38354 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:36:34.073500 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:34.076319 systemd-logind[1461]: New session 5 of user core. Feb 13 07:36:34.076992 systemd[1]: Started session-5.scope. Feb 13 07:36:34.130349 sshd[1575]: pam_unix(sshd:session): session closed for user core Feb 13 07:36:34.136928 systemd[1]: sshd@2-139.178.90.101:22-139.178.68.195:38354.service: Deactivated successfully. Feb 13 07:36:34.138516 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 07:36:34.140228 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. Feb 13 07:36:34.142690 systemd[1]: Started sshd@3-139.178.90.101:22-139.178.68.195:38368.service. 
Feb 13 07:36:34.145127 systemd-logind[1461]: Removed session 5. Feb 13 07:36:34.213091 sshd[1581]: Accepted publickey for core from 139.178.68.195 port 38368 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:36:34.216245 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:34.226525 systemd-logind[1461]: New session 6 of user core. Feb 13 07:36:34.228924 systemd[1]: Started session-6.scope. Feb 13 07:36:34.298894 sshd[1581]: pam_unix(sshd:session): session closed for user core Feb 13 07:36:34.300641 systemd[1]: sshd@3-139.178.90.101:22-139.178.68.195:38368.service: Deactivated successfully. Feb 13 07:36:34.301014 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 07:36:34.301356 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. Feb 13 07:36:34.301902 systemd[1]: Started sshd@4-139.178.90.101:22-139.178.68.195:38382.service. Feb 13 07:36:34.302366 systemd-logind[1461]: Removed session 6. Feb 13 07:36:34.328113 sshd[1587]: Accepted publickey for core from 139.178.68.195 port 38382 ssh2: RSA SHA256:1PlZIJIBJggYI3VbAvHiWZPn3uvIsILcfs6l5/y3kqg Feb 13 07:36:34.329030 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 13 07:36:34.332254 systemd-logind[1461]: New session 7 of user core. Feb 13 07:36:34.333417 systemd[1]: Started session-7.scope. Feb 13 07:36:34.421490 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 07:36:34.422093 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 13 07:36:38.937001 systemd[1]: Reloading. Feb 13 07:36:38.970246 /usr/lib/systemd/system-generators/torcx-generator[1619]: time="2024-02-13T07:36:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:36:38.970262 /usr/lib/systemd/system-generators/torcx-generator[1619]: time="2024-02-13T07:36:38Z" level=info msg="torcx already run" Feb 13 07:36:39.036537 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 07:36:39.036547 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:36:39.052270 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:36:39.103232 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 13 07:36:39.106770 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 13 07:36:39.107017 systemd[1]: Reached target network-online.target. Feb 13 07:36:39.107664 systemd[1]: Started kubelet.service. 
Feb 13 07:36:39.129730 kubelet[1680]: E0213 07:36:39.129678 1680 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 13 07:36:39.130839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 07:36:39.130905 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 07:36:39.415868 systemd[1]: Started sshd@5-139.178.90.101:22-1.117.181.161:36018.service. Feb 13 07:36:39.690635 systemd[1]: Stopped kubelet.service. Feb 13 07:36:39.732868 systemd[1]: Reloading. Feb 13 07:36:39.779758 /usr/lib/systemd/system-generators/torcx-generator[1787]: time="2024-02-13T07:36:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 13 07:36:39.779774 /usr/lib/systemd/system-generators/torcx-generator[1787]: time="2024-02-13T07:36:39Z" level=info msg="torcx already run" Feb 13 07:36:39.829856 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 13 07:36:39.829863 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 13 07:36:39.841857 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 07:36:39.896068 systemd[1]: Started kubelet.service. Feb 13 07:36:39.918300 kubelet[1847]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 07:36:39.918300 kubelet[1847]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 07:36:39.918300 kubelet[1847]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 07:36:39.918300 kubelet[1847]: I0213 07:36:39.918286 1847 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 07:36:40.177669 kubelet[1847]: I0213 07:36:40.177627 1847 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 13 07:36:40.177669 kubelet[1847]: I0213 07:36:40.177640 1847 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 07:36:40.177771 kubelet[1847]: I0213 07:36:40.177761 1847 server.go:837] "Client rotation is on, will bootstrap in background" Feb 13 07:36:40.178805 kubelet[1847]: I0213 07:36:40.178796 1847 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 07:36:40.207819 kubelet[1847]: I0213 07:36:40.207753 1847 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 07:36:40.208161 kubelet[1847]: I0213 07:36:40.208092 1847 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 07:36:40.208307 kubelet[1847]: I0213 07:36:40.208211 1847 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 13 07:36:40.208307 kubelet[1847]: I0213 07:36:40.208242 1847 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 13 07:36:40.208307 kubelet[1847]: I0213 07:36:40.208265 1847 container_manager_linux.go:302] "Creating device plugin manager" Feb 13 07:36:40.208728 kubelet[1847]: I0213 07:36:40.208418 1847 state_mem.go:36] "Initialized new in-memory state store" Feb 13 07:36:40.212612 kubelet[1847]: I0213 07:36:40.212545 1847 kubelet.go:405] "Attempting to sync node with API server" Feb 13 07:36:40.212612 kubelet[1847]: I0213 07:36:40.212586 1847 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 07:36:40.212837 kubelet[1847]: I0213 07:36:40.212630 1847 kubelet.go:309] "Adding apiserver pod source" Feb 13 07:36:40.212837 kubelet[1847]: I0213 07:36:40.212674 1847 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 07:36:40.212970 kubelet[1847]: E0213 07:36:40.212763 1847 file.go:98] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:40.213417 kubelet[1847]: E0213 07:36:40.212780 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:40.214475 kubelet[1847]: I0213 07:36:40.214411 1847 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 13 07:36:40.215273 kubelet[1847]: W0213 07:36:40.215239 1847 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 07:36:40.216691 kubelet[1847]: I0213 07:36:40.216655 1847 server.go:1168] "Started kubelet" Feb 13 07:36:40.216854 kubelet[1847]: I0213 07:36:40.216786 1847 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 13 07:36:40.216854 kubelet[1847]: I0213 07:36:40.216789 1847 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 07:36:40.217457 kubelet[1847]: E0213 07:36:40.217417 1847 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 13 07:36:40.217585 kubelet[1847]: E0213 07:36:40.217471 1847 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 07:36:40.218750 kubelet[1847]: I0213 07:36:40.218741 1847 server.go:461] "Adding debug handlers to kubelet server" Feb 13 07:36:40.227828 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 13 07:36:40.227919 kubelet[1847]: I0213 07:36:40.227910 1847 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 07:36:40.227974 kubelet[1847]: I0213 07:36:40.227954 1847 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 13 07:36:40.228006 kubelet[1847]: I0213 07:36:40.227993 1847 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 13 07:36:40.228033 kubelet[1847]: E0213 07:36:40.228013 1847 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.67.80.31\" not found" Feb 13 07:36:40.233941 kubelet[1847]: E0213 07:36:40.233901 1847 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.67.80.31\" not found" node="10.67.80.31" Feb 13 07:36:40.237738 kubelet[1847]: I0213 07:36:40.237697 1847 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 07:36:40.237738 kubelet[1847]: I0213 07:36:40.237706 1847 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 07:36:40.237738 kubelet[1847]: I0213 07:36:40.237714 1847 state_mem.go:36] "Initialized new in-memory state store" Feb 13 07:36:40.238506 kubelet[1847]: I0213 07:36:40.238470 1847 policy_none.go:49] "None policy: Start" Feb 13 07:36:40.238803 kubelet[1847]: I0213 07:36:40.238773 1847 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 13 07:36:40.238803 kubelet[1847]: I0213 07:36:40.238784 1847 state_mem.go:35] "Initializing new in-memory state store" Feb 13 07:36:40.241403 systemd[1]: Created slice kubepods.slice. Feb 13 07:36:40.243348 systemd[1]: Created slice kubepods-burstable.slice. Feb 13 07:36:40.244696 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 13 07:36:40.266760 kubelet[1847]: I0213 07:36:40.266748 1847 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 07:36:40.266891 kubelet[1847]: I0213 07:36:40.266883 1847 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 07:36:40.267164 kubelet[1847]: E0213 07:36:40.267156 1847 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.80.31\" not found" Feb 13 07:36:40.312479 kubelet[1847]: I0213 07:36:40.312439 1847 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 13 07:36:40.313130 kubelet[1847]: I0213 07:36:40.313117 1847 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 13 07:36:40.313191 kubelet[1847]: I0213 07:36:40.313138 1847 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 13 07:36:40.313191 kubelet[1847]: I0213 07:36:40.313151 1847 kubelet.go:2257] "Starting kubelet main sync loop" Feb 13 07:36:40.313191 kubelet[1847]: E0213 07:36:40.313183 1847 kubelet.go:2281] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 07:36:40.320591 sshd[1728]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=1.117.181.161 user=root Feb 13 07:36:40.329374 kubelet[1847]: I0213 07:36:40.329328 1847 kubelet_node_status.go:70] "Attempting to register node" node="10.67.80.31" Feb 13 07:36:40.335925 kubelet[1847]: I0213 07:36:40.335884 1847 kubelet_node_status.go:73] "Successfully registered node" node="10.67.80.31" Feb 13 07:36:40.348953 kubelet[1847]: I0213 07:36:40.348901 1847 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 07:36:40.349748 env[1471]: time="2024-02-13T07:36:40.349631191Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
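[Editor's note] The "No cni config template is specified" message is containerd waiting for a CNI network config to appear under /etc/cni/net.d; the cilium-4qrhn pod admitted just above is what eventually drops it there. A rough sketch of the shape of that file follows, rendered as YAML for readability even though the conflist Cilium actually writes is JSON; the file name and contents are the Cilium 1.12 defaults and are assumptions, not observed here.

```yaml
# Roughly what Cilium 1.12 installs as /etc/cni/net.d/05-cilium.conflist (real file is JSON).
cniVersion: "0.3.1"
name: cilium
plugins:
  - type: cilium-cni   # plugin binary placed under /opt/cni/bin by the agent's install steps
```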
Feb 13 07:36:40.350678 kubelet[1847]: I0213 07:36:40.350059 1847 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 07:36:41.178784 kubelet[1847]: I0213 07:36:41.178688 1847 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 07:36:41.179677 kubelet[1847]: W0213 07:36:41.179023 1847 reflector.go:456] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Feb 13 07:36:41.179677 kubelet[1847]: W0213 07:36:41.179096 1847 reflector.go:456] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Feb 13 07:36:41.179677 kubelet[1847]: W0213 07:36:41.179118 1847 reflector.go:456] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:150: Unexpected watch close - watch lasted less than a second and no items received Feb 13 07:36:41.213976 kubelet[1847]: I0213 07:36:41.213868 1847 apiserver.go:52] "Watching apiserver" Feb 13 07:36:41.213976 kubelet[1847]: E0213 07:36:41.213945 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:41.219373 kubelet[1847]: I0213 07:36:41.219257 1847 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:36:41.219553 kubelet[1847]: I0213 07:36:41.219516 1847 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:36:41.229307 kubelet[1847]: I0213 07:36:41.229226 1847 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 13 07:36:41.230924 systemd[1]: Created slice kubepods-besteffort-pod23b677ea_1a1a_4162_8031_5f7d5a83ed82.slice. 
Feb 13 07:36:41.232999 kubelet[1847]: I0213 07:36:41.232964 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-etc-cni-netd\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.232999 kubelet[1847]: I0213 07:36:41.232981 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-config-path\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.232999 kubelet[1847]: I0213 07:36:41.232994 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-host-proc-sys-net\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233071 kubelet[1847]: I0213 07:36:41.233004 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cni-path\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233071 kubelet[1847]: I0213 07:36:41.233016 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bxlg\" (UniqueName: \"kubernetes.io/projected/23b677ea-1a1a-4162-8031-5f7d5a83ed82-kube-api-access-2bxlg\") pod \"kube-proxy-rv9pk\" (UID: \"23b677ea-1a1a-4162-8031-5f7d5a83ed82\") " pod="kube-system/kube-proxy-rv9pk" Feb 13 07:36:41.233071 kubelet[1847]: I0213 07:36:41.233027 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-run\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233071 kubelet[1847]: I0213 07:36:41.233045 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-xtables-lock\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233071 kubelet[1847]: I0213 07:36:41.233066 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef40e6d3-001f-48c3-82da-9b0db1166435-hubble-tls\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233152 kubelet[1847]: I0213 07:36:41.233091 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23b677ea-1a1a-4162-8031-5f7d5a83ed82-kube-proxy\") pod \"kube-proxy-rv9pk\" (UID: \"23b677ea-1a1a-4162-8031-5f7d5a83ed82\") " pod="kube-system/kube-proxy-rv9pk" Feb 13 07:36:41.233152 kubelet[1847]: I0213 07:36:41.233107 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef40e6d3-001f-48c3-82da-9b0db1166435-clustermesh-secrets\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233152 kubelet[1847]: I0213 07:36:41.233126 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-hostproc\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233201 kubelet[1847]: I0213 07:36:41.233153 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23b677ea-1a1a-4162-8031-5f7d5a83ed82-lib-modules\") pod \"kube-proxy-rv9pk\" (UID: \"23b677ea-1a1a-4162-8031-5f7d5a83ed82\") " pod="kube-system/kube-proxy-rv9pk" Feb 13 07:36:41.233201 kubelet[1847]: I0213 07:36:41.233172 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-bpf-maps\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233201 kubelet[1847]: I0213 07:36:41.233193 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-cgroup\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233253 kubelet[1847]: I0213 07:36:41.233216 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-lib-modules\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233253 kubelet[1847]: I0213 07:36:41.233230 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-host-proc-sys-kernel\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233287 kubelet[1847]: I0213 07:36:41.233259 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxwk8\" (UniqueName: \"kubernetes.io/projected/ef40e6d3-001f-48c3-82da-9b0db1166435-kube-api-access-dxwk8\") pod \"cilium-4qrhn\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " pod="kube-system/cilium-4qrhn" Feb 13 07:36:41.233287 kubelet[1847]: I0213 07:36:41.233280 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23b677ea-1a1a-4162-8031-5f7d5a83ed82-xtables-lock\") pod \"kube-proxy-rv9pk\" (UID: \"23b677ea-1a1a-4162-8031-5f7d5a83ed82\") " pod="kube-system/kube-proxy-rv9pk" Feb 13 07:36:41.233320 kubelet[1847]: I0213 07:36:41.233290 1847 reconciler.go:41] "Reconciler: start to sync state" Feb 13 07:36:41.240748 systemd[1]: Created slice kubepods-burstable-podef40e6d3_001f_48c3_82da_9b0db1166435.slice. 
Feb 13 07:36:41.439895 sudo[1590]: pam_unix(sudo:session): session closed for user root Feb 13 07:36:41.444641 sshd[1587]: pam_unix(sshd:session): session closed for user core Feb 13 07:36:41.450083 systemd[1]: sshd@4-139.178.90.101:22-139.178.68.195:38382.service: Deactivated successfully. Feb 13 07:36:41.451818 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 07:36:41.453344 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. Feb 13 07:36:41.455304 systemd-logind[1461]: Removed session 7. Feb 13 07:36:41.542544 env[1471]: time="2024-02-13T07:36:41.542443219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rv9pk,Uid:23b677ea-1a1a-4162-8031-5f7d5a83ed82,Namespace:kube-system,Attempt:0,}" Feb 13 07:36:41.559744 env[1471]: time="2024-02-13T07:36:41.559616419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qrhn,Uid:ef40e6d3-001f-48c3-82da-9b0db1166435,Namespace:kube-system,Attempt:0,}" Feb 13 07:36:42.214777 kubelet[1847]: E0213 07:36:42.214710 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:42.292901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011586927.mount: Deactivated successfully. Feb 13 07:36:42.305910 env[1471]: time="2024-02-13T07:36:42.305889627Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:42.306793 env[1471]: time="2024-02-13T07:36:42.306756074Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:42.307592 env[1471]: time="2024-02-13T07:36:42.307552625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:42.308043 env[1471]: time="2024-02-13T07:36:42.308026800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:42.308845 env[1471]: time="2024-02-13T07:36:42.308833608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:42.310120 env[1471]: time="2024-02-13T07:36:42.310103435Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:42.310851 env[1471]: time="2024-02-13T07:36:42.310833538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:42.311249 env[1471]: time="2024-02-13T07:36:42.311235045Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:42.317685 env[1471]: time="2024-02-13T07:36:42.317649594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:36:42.317685 env[1471]: time="2024-02-13T07:36:42.317671854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:36:42.317685 env[1471]: time="2024-02-13T07:36:42.317668228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:36:42.317685 env[1471]: time="2024-02-13T07:36:42.317679032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:36:42.317849 env[1471]: time="2024-02-13T07:36:42.317688572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:36:42.317849 env[1471]: time="2024-02-13T07:36:42.317698593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:36:42.317849 env[1471]: time="2024-02-13T07:36:42.317744334Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e4b66c632a8177ebce85490ec3a184bca2cb20f5ba995f288e615b80e470cd9 pid=1920 runtime=io.containerd.runc.v2 Feb 13 07:36:42.317849 env[1471]: time="2024-02-13T07:36:42.317773723Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633 pid=1922 runtime=io.containerd.runc.v2 Feb 13 07:36:42.323943 systemd[1]: Started cri-containerd-0e4b66c632a8177ebce85490ec3a184bca2cb20f5ba995f288e615b80e470cd9.scope. Feb 13 07:36:42.324784 systemd[1]: Started cri-containerd-200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633.scope. Feb 13 07:36:42.334729 env[1471]: time="2024-02-13T07:36:42.334697872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qrhn,Uid:ef40e6d3-001f-48c3-82da-9b0db1166435,Namespace:kube-system,Attempt:0,} returns sandbox id \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\"" Feb 13 07:36:42.335249 env[1471]: time="2024-02-13T07:36:42.335231934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rv9pk,Uid:23b677ea-1a1a-4162-8031-5f7d5a83ed82,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e4b66c632a8177ebce85490ec3a184bca2cb20f5ba995f288e615b80e470cd9\"" Feb 13 07:36:42.335718 env[1471]: time="2024-02-13T07:36:42.335705558Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 07:36:42.833978 sshd[1728]: Failed password for root from 1.117.181.161 port 36018 ssh2 Feb 13 07:36:43.215170 kubelet[1847]: E0213 07:36:43.214947 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:44.215092 kubelet[1847]: E0213 07:36:44.215044 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:44.424601 sshd[1728]: Received disconnect from 1.117.181.161 port 36018:11: Bye Bye [preauth] Feb 13 07:36:44.424601 sshd[1728]: Disconnected from authenticating user root 1.117.181.161 port 36018 [preauth] Feb 13 07:36:44.425200 systemd[1]: sshd@5-139.178.90.101:22-1.117.181.161:36018.service: Deactivated successfully. 
Feb 13 07:36:45.215141 kubelet[1847]: E0213 07:36:45.215107 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:45.809904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289732206.mount: Deactivated successfully. Feb 13 07:36:46.215555 kubelet[1847]: E0213 07:36:46.215460 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:47.216492 kubelet[1847]: E0213 07:36:47.216424 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:47.478435 env[1471]: time="2024-02-13T07:36:47.478324782Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:47.478943 env[1471]: time="2024-02-13T07:36:47.478908944Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:47.479981 env[1471]: time="2024-02-13T07:36:47.479923940Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:47.480455 env[1471]: time="2024-02-13T07:36:47.480349257Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 07:36:47.480898 env[1471]: time="2024-02-13T07:36:47.480882554Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 13 07:36:47.481802 env[1471]: time="2024-02-13T07:36:47.481746842Z" level=info msg="CreateContainer within sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 07:36:47.486629 env[1471]: time="2024-02-13T07:36:47.486587155Z" level=info msg="CreateContainer within sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350\"" Feb 13 07:36:47.486964 env[1471]: time="2024-02-13T07:36:47.486921628Z" level=info msg="StartContainer for \"df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350\"" Feb 13 07:36:47.496130 systemd[1]: Started cri-containerd-df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350.scope. Feb 13 07:36:47.507799 env[1471]: time="2024-02-13T07:36:47.507773706Z" level=info msg="StartContainer for \"df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350\" returns successfully" Feb 13 07:36:47.512734 systemd[1]: cri-containerd-df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350.scope: Deactivated successfully. 
Feb 13 07:36:48.217395 kubelet[1847]: E0213 07:36:48.217299 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:48.489085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350-rootfs.mount: Deactivated successfully. Feb 13 07:36:48.661486 env[1471]: time="2024-02-13T07:36:48.661305046Z" level=info msg="shim disconnected" id=df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350 Feb 13 07:36:48.661486 env[1471]: time="2024-02-13T07:36:48.661438998Z" level=warning msg="cleaning up after shim disconnected" id=df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350 namespace=k8s.io Feb 13 07:36:48.661486 env[1471]: time="2024-02-13T07:36:48.661466326Z" level=info msg="cleaning up dead shim" Feb 13 07:36:48.669411 env[1471]: time="2024-02-13T07:36:48.669355885Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:36:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2036 runtime=io.containerd.runc.v2\n" Feb 13 07:36:49.181172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293733916.mount: Deactivated successfully. Feb 13 07:36:49.217885 kubelet[1847]: E0213 07:36:49.217843 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:49.331116 env[1471]: time="2024-02-13T07:36:49.331063778Z" level=info msg="CreateContainer within sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 07:36:49.336682 env[1471]: time="2024-02-13T07:36:49.336638812Z" level=info msg="CreateContainer within sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73\"" Feb 13 07:36:49.336952 env[1471]: time="2024-02-13T07:36:49.336887437Z" level=info msg="StartContainer for \"2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73\"" Feb 13 07:36:49.345130 systemd[1]: Started cri-containerd-2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73.scope. Feb 13 07:36:49.357462 env[1471]: time="2024-02-13T07:36:49.357413022Z" level=info msg="StartContainer for \"2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73\" returns successfully" Feb 13 07:36:49.365937 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 07:36:49.366237 systemd[1]: Stopped systemd-sysctl.service. Feb 13 07:36:49.366340 systemd[1]: Stopping systemd-sysctl.service... Feb 13 07:36:49.367833 systemd[1]: Starting systemd-sysctl.service... Feb 13 07:36:49.368065 systemd[1]: cri-containerd-2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73.scope: Deactivated successfully. Feb 13 07:36:49.372039 systemd[1]: Finished systemd-sysctl.service. 
Feb 13 07:36:49.557841 env[1471]: time="2024-02-13T07:36:49.557773198Z" level=info msg="shim disconnected" id=2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73 Feb 13 07:36:49.557841 env[1471]: time="2024-02-13T07:36:49.557838868Z" level=warning msg="cleaning up after shim disconnected" id=2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73 namespace=k8s.io Feb 13 07:36:49.557841 env[1471]: time="2024-02-13T07:36:49.557845580Z" level=info msg="cleaning up dead shim" Feb 13 07:36:49.559580 env[1471]: time="2024-02-13T07:36:49.559560188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:49.560113 env[1471]: time="2024-02-13T07:36:49.560099072Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:49.560842 env[1471]: time="2024-02-13T07:36:49.560831478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:49.561482 env[1471]: time="2024-02-13T07:36:49.561468844Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:36:49.561881 env[1471]: time="2024-02-13T07:36:49.561850462Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:36:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2099 runtime=io.containerd.runc.v2\n" Feb 13 07:36:49.562324 env[1471]: time="2024-02-13T07:36:49.562311135Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 13 07:36:49.563367 env[1471]: time="2024-02-13T07:36:49.563349061Z" level=info msg="CreateContainer within sandbox \"0e4b66c632a8177ebce85490ec3a184bca2cb20f5ba995f288e615b80e470cd9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 07:36:49.568604 env[1471]: time="2024-02-13T07:36:49.568561477Z" level=info msg="CreateContainer within sandbox \"0e4b66c632a8177ebce85490ec3a184bca2cb20f5ba995f288e615b80e470cd9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"187b7761c75dbe4249d3d9805729a585a9177a4ecc4d46a03917b210ef65617f\"" Feb 13 07:36:49.568760 env[1471]: time="2024-02-13T07:36:49.568749309Z" level=info msg="StartContainer for \"187b7761c75dbe4249d3d9805729a585a9177a4ecc4d46a03917b210ef65617f\"" Feb 13 07:36:49.577565 systemd[1]: Started cri-containerd-187b7761c75dbe4249d3d9805729a585a9177a4ecc4d46a03917b210ef65617f.scope. 
Feb 13 07:36:49.591104 env[1471]: time="2024-02-13T07:36:49.591058815Z" level=info msg="StartContainer for \"187b7761c75dbe4249d3d9805729a585a9177a4ecc4d46a03917b210ef65617f\" returns successfully" Feb 13 07:36:50.218643 kubelet[1847]: E0213 07:36:50.218568 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:50.341574 env[1471]: time="2024-02-13T07:36:50.341442971Z" level=info msg="CreateContainer within sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 07:36:50.365219 kubelet[1847]: I0213 07:36:50.365151 1847 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rv9pk" podStartSLOduration=4.138216651 podCreationTimestamp="2024-02-13 07:36:39 +0000 UTC" firstStartedPulling="2024-02-13 07:36:42.335660112 +0000 UTC m=+2.437341956" lastFinishedPulling="2024-02-13 07:36:49.562501285 +0000 UTC m=+9.664183131" observedRunningTime="2024-02-13 07:36:50.36480025 +0000 UTC m=+10.466482173" watchObservedRunningTime="2024-02-13 07:36:50.365057826 +0000 UTC m=+10.466739727" Feb 13 07:36:50.376104 env[1471]: time="2024-02-13T07:36:50.376021673Z" level=info msg="CreateContainer within sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a\"" Feb 13 07:36:50.377397 env[1471]: time="2024-02-13T07:36:50.377274263Z" level=info msg="StartContainer for \"cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a\"" Feb 13 07:36:50.413608 systemd[1]: Started cri-containerd-cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a.scope. Feb 13 07:36:50.468959 env[1471]: time="2024-02-13T07:36:50.468757928Z" level=info msg="StartContainer for \"cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a\" returns successfully" Feb 13 07:36:50.477244 systemd[1]: cri-containerd-cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a.scope: Deactivated successfully. Feb 13 07:36:50.518042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a-rootfs.mount: Deactivated successfully. 
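[Editor's note] The pod_startup_latency_tracker entry above is easier to read with the arithmetic spelled out: podStartSLOduration is roughly the time from podCreationTimestamp to observedRunningTime with the image-pull window subtracted, i.e. (07:36:50.365 − 07:36:39) − (07:36:49.563 − 07:36:42.336) ≈ 11.365 s − 7.227 s ≈ 4.138 s, matching the logged 4.138216651; the small residual comes from the tracker using its own internal timestamps rather than the wall-clock strings printed here.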
Feb 13 07:36:50.532628 env[1471]: time="2024-02-13T07:36:50.532540735Z" level=info msg="shim disconnected" id=cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a Feb 13 07:36:50.532949 env[1471]: time="2024-02-13T07:36:50.532638352Z" level=warning msg="cleaning up after shim disconnected" id=cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a namespace=k8s.io Feb 13 07:36:50.532949 env[1471]: time="2024-02-13T07:36:50.532669032Z" level=info msg="cleaning up dead shim" Feb 13 07:36:50.547660 env[1471]: time="2024-02-13T07:36:50.547551692Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:36:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2327 runtime=io.containerd.runc.v2\n" Feb 13 07:36:51.219136 kubelet[1847]: E0213 07:36:51.219018 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:51.352275 env[1471]: time="2024-02-13T07:36:51.352141223Z" level=info msg="CreateContainer within sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 07:36:51.367476 env[1471]: time="2024-02-13T07:36:51.367329544Z" level=info msg="CreateContainer within sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d\"" Feb 13 07:36:51.367935 env[1471]: time="2024-02-13T07:36:51.367917125Z" level=info msg="StartContainer for \"b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d\"" Feb 13 07:36:51.375530 systemd[1]: Started cri-containerd-b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d.scope. Feb 13 07:36:51.387347 systemd[1]: cri-containerd-b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d.scope: Deactivated successfully. Feb 13 07:36:51.403834 env[1471]: time="2024-02-13T07:36:51.403781402Z" level=info msg="StartContainer for \"b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d\" returns successfully" Feb 13 07:36:51.415049 env[1471]: time="2024-02-13T07:36:51.414986009Z" level=info msg="shim disconnected" id=b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d Feb 13 07:36:51.415049 env[1471]: time="2024-02-13T07:36:51.415020105Z" level=warning msg="cleaning up after shim disconnected" id=b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d namespace=k8s.io Feb 13 07:36:51.415049 env[1471]: time="2024-02-13T07:36:51.415028238Z" level=info msg="cleaning up dead shim" Feb 13 07:36:51.420291 env[1471]: time="2024-02-13T07:36:51.420266302Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:36:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2380 runtime=io.containerd.runc.v2\n" Feb 13 07:36:51.488883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d-rootfs.mount: Deactivated successfully. 
Feb 13 07:36:52.219247 kubelet[1847]: E0213 07:36:52.219142 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:52.362034 env[1471]: time="2024-02-13T07:36:52.361889166Z" level=info msg="CreateContainer within sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 07:36:52.380158 env[1471]: time="2024-02-13T07:36:52.380139949Z" level=info msg="CreateContainer within sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\"" Feb 13 07:36:52.380459 env[1471]: time="2024-02-13T07:36:52.380441478Z" level=info msg="StartContainer for \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\"" Feb 13 07:36:52.389051 systemd[1]: Started cri-containerd-d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3.scope. Feb 13 07:36:52.402013 env[1471]: time="2024-02-13T07:36:52.401986305Z" level=info msg="StartContainer for \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\" returns successfully" Feb 13 07:36:52.457443 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 13 07:36:52.524331 kubelet[1847]: I0213 07:36:52.524313 1847 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 13 07:36:52.603078 kernel: Initializing XFRM netlink socket Feb 13 07:36:52.603141 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Feb 13 07:36:53.219519 kubelet[1847]: E0213 07:36:53.219416 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:53.379252 kubelet[1847]: I0213 07:36:53.379186 1847 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4qrhn" podStartSLOduration=9.233831993 podCreationTimestamp="2024-02-13 07:36:39 +0000 UTC" firstStartedPulling="2024-02-13 07:36:42.335481674 +0000 UTC m=+2.437163518" lastFinishedPulling="2024-02-13 07:36:47.480751854 +0000 UTC m=+7.582433701" observedRunningTime="2024-02-13 07:36:53.379031325 +0000 UTC m=+13.480713268" watchObservedRunningTime="2024-02-13 07:36:53.379102176 +0000 UTC m=+13.480784091" Feb 13 07:36:54.203690 systemd-timesyncd[1417]: Network configuration changed, trying to establish connection. 
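[Editor's note] Between 07:36:47 and 07:36:52 the log walks through Cilium's init chain inside cilium-4qrhn: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each start, exit, and have their shims cleaned up before the long-running cilium-agent container comes up, after which the node reports Ready. In DaemonSet terms that ordering comes from initContainers, roughly as sketched here; the container names and image come from the log, everything else is illustrative.

```yaml
# Trimmed sketch of the cilium DaemonSet pod spec that produces the container sequence above.
spec:
  initContainers:
    - name: mount-cgroup              # ran at 07:36:47, exited, shim cleaned up
      image: quay.io/cilium/cilium:v1.12.5
    - name: apply-sysctl-overwrites   # 07:36:49; note systemd-sysctl restarting right after it
      image: quay.io/cilium/cilium:v1.12.5
    - name: mount-bpf-fs              # 07:36:50
      image: quay.io/cilium/cilium:v1.12.5
    - name: clean-cilium-state        # 07:36:51
      image: quay.io/cilium/cilium:v1.12.5
  containers:
    - name: cilium-agent              # 07:36:52; stays running, node then becomes Ready
      image: quay.io/cilium/cilium:v1.12.5
```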
Feb 13 07:36:54.203809 systemd-networkd[1320]: cilium_host: Link UP Feb 13 07:36:54.203887 systemd-networkd[1320]: cilium_net: Link UP Feb 13 07:36:54.203889 systemd-networkd[1320]: cilium_net: Gained carrier Feb 13 07:36:54.204017 systemd-networkd[1320]: cilium_host: Gained carrier Feb 13 07:36:54.211568 systemd-networkd[1320]: cilium_net: Gained IPv6LL Feb 13 07:36:54.211730 systemd-networkd[1320]: cilium_host: Gained IPv6LL Feb 13 07:36:54.212440 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 13 07:36:54.219992 kubelet[1847]: E0213 07:36:54.219967 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:54.253985 systemd-networkd[1320]: cilium_vxlan: Link UP Feb 13 07:36:54.253990 systemd-networkd[1320]: cilium_vxlan: Gained carrier Feb 13 07:36:54.383368 kernel: NET: Registered PF_ALG protocol family Feb 13 07:36:54.216996 systemd-resolved[1416]: Clock change detected. Flushing caches. Feb 13 07:36:54.234586 systemd-journald[1252]: Time jumped backwards, rotating. Feb 13 07:36:54.217009 systemd-timesyncd[1417]: Contacted time server [2607:ff50:0:1a::20]:123 (2.flatcar.pool.ntp.org). Feb 13 07:36:54.217039 systemd-timesyncd[1417]: Initial clock synchronization to Tue 2024-02-13 07:36:54.216926 UTC. Feb 13 07:36:54.392214 systemd-networkd[1320]: lxc_health: Link UP Feb 13 07:36:54.414311 systemd-networkd[1320]: lxc_health: Gained carrier Feb 13 07:36:54.414462 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 13 07:36:54.783217 kubelet[1847]: E0213 07:36:54.783169 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:55.783870 kubelet[1847]: E0213 07:36:55.783825 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:55.869543 systemd-networkd[1320]: cilium_vxlan: Gained IPv6LL Feb 13 07:36:56.061571 systemd-networkd[1320]: lxc_health: Gained IPv6LL Feb 13 07:36:56.784558 kubelet[1847]: E0213 07:36:56.784534 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:57.109505 kubelet[1847]: I0213 07:36:57.109438 1847 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:36:57.112725 systemd[1]: Created slice kubepods-besteffort-pod0f64ef9d_0d2d_47d5_905d_6cf8208cdb13.slice. 
Feb 13 07:36:57.203780 kubelet[1847]: I0213 07:36:57.203740 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5gd\" (UniqueName: \"kubernetes.io/projected/0f64ef9d-0d2d-47d5-905d-6cf8208cdb13-kube-api-access-zq5gd\") pod \"nginx-deployment-845c78c8b9-rwdq5\" (UID: \"0f64ef9d-0d2d-47d5-905d-6cf8208cdb13\") " pod="default/nginx-deployment-845c78c8b9-rwdq5" Feb 13 07:36:57.415219 env[1471]: time="2024-02-13T07:36:57.415129744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-rwdq5,Uid:0f64ef9d-0d2d-47d5-905d-6cf8208cdb13,Namespace:default,Attempt:0,}" Feb 13 07:36:57.430530 systemd-networkd[1320]: lxc34d53ab5991e: Link UP Feb 13 07:36:57.451447 kernel: eth0: renamed from tmpedda3 Feb 13 07:36:57.465496 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 13 07:36:57.465549 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc34d53ab5991e: link becomes ready Feb 13 07:36:57.473055 systemd-networkd[1320]: lxc34d53ab5991e: Gained carrier Feb 13 07:36:57.661980 env[1471]: time="2024-02-13T07:36:57.661894301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:36:57.661980 env[1471]: time="2024-02-13T07:36:57.661916318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:36:57.661980 env[1471]: time="2024-02-13T07:36:57.661923456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:36:57.662075 env[1471]: time="2024-02-13T07:36:57.662017733Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/edda3273ac13d05faa09548f2857fe80943b05eca1edd0a4cb95708a12d44bd6 pid=3056 runtime=io.containerd.runc.v2 Feb 13 07:36:57.668226 systemd[1]: Started cri-containerd-edda3273ac13d05faa09548f2857fe80943b05eca1edd0a4cb95708a12d44bd6.scope. 
Feb 13 07:36:57.692067 env[1471]: time="2024-02-13T07:36:57.692036746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-845c78c8b9-rwdq5,Uid:0f64ef9d-0d2d-47d5-905d-6cf8208cdb13,Namespace:default,Attempt:0,} returns sandbox id \"edda3273ac13d05faa09548f2857fe80943b05eca1edd0a4cb95708a12d44bd6\"" Feb 13 07:36:57.692806 env[1471]: time="2024-02-13T07:36:57.692789163Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 07:36:57.785237 kubelet[1847]: E0213 07:36:57.785164 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:58.785513 kubelet[1847]: E0213 07:36:58.785417 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:58.877658 systemd-networkd[1320]: lxc34d53ab5991e: Gained IPv6LL Feb 13 07:36:59.776675 kubelet[1847]: E0213 07:36:59.776601 1847 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:36:59.786546 kubelet[1847]: E0213 07:36:59.786473 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:00.786811 kubelet[1847]: E0213 07:37:00.786689 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:00.853970 kubelet[1847]: I0213 07:37:00.853863 1847 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 13 07:37:01.787866 kubelet[1847]: E0213 07:37:01.787753 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:02.788946 kubelet[1847]: E0213 07:37:02.788828 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:03.789375 kubelet[1847]: E0213 07:37:03.789256 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:04.789835 kubelet[1847]: E0213 07:37:04.789757 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:05.790307 kubelet[1847]: E0213 07:37:05.790179 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:06.790571 kubelet[1847]: E0213 07:37:06.790528 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:06.921834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187637027.mount: Deactivated successfully. 
Feb 13 07:37:07.460004 env[1471]: time="2024-02-13T07:37:07.459967088Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:07.460573 env[1471]: time="2024-02-13T07:37:07.460559971Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:07.461767 env[1471]: time="2024-02-13T07:37:07.461754855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:07.462521 env[1471]: time="2024-02-13T07:37:07.462508523Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:07.462981 env[1471]: time="2024-02-13T07:37:07.462938447Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 13 07:37:07.463773 env[1471]: time="2024-02-13T07:37:07.463758001Z" level=info msg="CreateContainer within sandbox \"edda3273ac13d05faa09548f2857fe80943b05eca1edd0a4cb95708a12d44bd6\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 07:37:07.467717 env[1471]: time="2024-02-13T07:37:07.467702322Z" level=info msg="CreateContainer within sandbox \"edda3273ac13d05faa09548f2857fe80943b05eca1edd0a4cb95708a12d44bd6\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4556c684d9e2dc1c646fbeed72cca0dd601764ea636f7ef6040d807e70f088f6\"" Feb 13 07:37:07.468010 env[1471]: time="2024-02-13T07:37:07.467986352Z" level=info msg="StartContainer for \"4556c684d9e2dc1c646fbeed72cca0dd601764ea636f7ef6040d807e70f088f6\"" Feb 13 07:37:07.470004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039495061.mount: Deactivated successfully. Feb 13 07:37:07.476901 systemd[1]: Started cri-containerd-4556c684d9e2dc1c646fbeed72cca0dd601764ea636f7ef6040d807e70f088f6.scope. Feb 13 07:37:07.487604 env[1471]: time="2024-02-13T07:37:07.487547877Z" level=info msg="StartContainer for \"4556c684d9e2dc1c646fbeed72cca0dd601764ea636f7ef6040d807e70f088f6\" returns successfully" Feb 13 07:37:07.791736 kubelet[1847]: E0213 07:37:07.791534 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:07.984836 kubelet[1847]: I0213 07:37:07.984776 1847 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-845c78c8b9-rwdq5" podStartSLOduration=1.214172656 podCreationTimestamp="2024-02-13 07:36:57 +0000 UTC" firstStartedPulling="2024-02-13 07:36:57.692631153 +0000 UTC m=+18.231235318" lastFinishedPulling="2024-02-13 07:37:07.463147263 +0000 UTC m=+28.001751417" observedRunningTime="2024-02-13 07:37:07.98396765 +0000 UTC m=+28.522571880" watchObservedRunningTime="2024-02-13 07:37:07.984688755 +0000 UTC m=+28.523292962" Feb 13 07:37:08.792724 kubelet[1847]: E0213 07:37:08.792621 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:09.548317 update_engine[1463]: I0213 07:37:09.548179 1463 update_attempter.cc:509] Updating boot flags... 
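[Editor's note] The default/nginx-deployment-845c78c8b9-rwdq5 pod that just went Running follows the usual Deployment → ReplicaSet → Pod naming chain (845c78c8b9 is the ReplicaSet pod-template hash, rwdq5 the pod suffix). The object behind it is presumably something like the sketch below; only the deployment name and the pulled image are visible in the log, the rest is illustrative.

```yaml
# Hypothetical manifest for the workload seen as nginx-deployment-845c78c8b9-rwdq5.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels: {app: nginx}
  template:
    metadata:
      labels: {app: nginx}
    spec:
      containers:
        - name: nginx
          image: ghcr.io/flatcar/nginx:latest   # the image pulled at 07:37:07 above
```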
Feb 13 07:37:09.793133 kubelet[1847]: E0213 07:37:09.793076 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:10.794271 kubelet[1847]: E0213 07:37:10.794151 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:11.795108 kubelet[1847]: E0213 07:37:11.794991 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:12.795313 kubelet[1847]: E0213 07:37:12.795148 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:13.795934 kubelet[1847]: E0213 07:37:13.795868 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:14.629044 kubelet[1847]: I0213 07:37:14.628996 1847 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:37:14.632003 systemd[1]: Created slice kubepods-besteffort-podd7f77b57_9e95_4644_934f_8a883a227006.slice. Feb 13 07:37:14.643727 kubelet[1847]: I0213 07:37:14.643685 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d7f77b57-9e95-4644-934f-8a883a227006-data\") pod \"nfs-server-provisioner-0\" (UID: \"d7f77b57-9e95-4644-934f-8a883a227006\") " pod="default/nfs-server-provisioner-0" Feb 13 07:37:14.643727 kubelet[1847]: I0213 07:37:14.643710 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6rk9\" (UniqueName: \"kubernetes.io/projected/d7f77b57-9e95-4644-934f-8a883a227006-kube-api-access-g6rk9\") pod \"nfs-server-provisioner-0\" (UID: \"d7f77b57-9e95-4644-934f-8a883a227006\") " pod="default/nfs-server-provisioner-0" Feb 13 07:37:14.797060 kubelet[1847]: E0213 07:37:14.797011 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:14.935134 env[1471]: time="2024-02-13T07:37:14.934918409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d7f77b57-9e95-4644-934f-8a883a227006,Namespace:default,Attempt:0,}" Feb 13 07:37:14.983878 systemd-networkd[1320]: lxc1ed1f30eece9: Link UP Feb 13 07:37:15.011507 kernel: eth0: renamed from tmp06022 Feb 13 07:37:15.050507 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 13 07:37:15.050604 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1ed1f30eece9: link becomes ready Feb 13 07:37:15.050620 systemd-networkd[1320]: lxc1ed1f30eece9: Gained carrier Feb 13 07:37:15.304030 env[1471]: time="2024-02-13T07:37:15.303967610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:37:15.304030 env[1471]: time="2024-02-13T07:37:15.303989786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:37:15.304030 env[1471]: time="2024-02-13T07:37:15.304000489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:37:15.304166 env[1471]: time="2024-02-13T07:37:15.304126539Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0602277d23283305d5624976102e433bab9bf320bc6795902df74ea3928c8db9 pid=3247 runtime=io.containerd.runc.v2 Feb 13 07:37:15.310641 systemd[1]: Started cri-containerd-0602277d23283305d5624976102e433bab9bf320bc6795902df74ea3928c8db9.scope. Feb 13 07:37:15.333682 env[1471]: time="2024-02-13T07:37:15.333621453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d7f77b57-9e95-4644-934f-8a883a227006,Namespace:default,Attempt:0,} returns sandbox id \"0602277d23283305d5624976102e433bab9bf320bc6795902df74ea3928c8db9\"" Feb 13 07:37:15.334441 env[1471]: time="2024-02-13T07:37:15.334394146Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 07:37:15.798316 kubelet[1847]: E0213 07:37:15.798239 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:15.917416 systemd[1]: Started sshd@6-139.178.90.101:22-1.117.181.161:44730.service. Feb 13 07:37:16.093561 systemd-networkd[1320]: lxc1ed1f30eece9: Gained IPv6LL Feb 13 07:37:16.798752 kubelet[1847]: E0213 07:37:16.798733 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:16.872935 sshd[3281]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=1.117.181.161 user=root Feb 13 07:37:17.799171 kubelet[1847]: E0213 07:37:17.799127 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:17.926115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963711914.mount: Deactivated successfully. 
Feb 13 07:37:18.799906 kubelet[1847]: E0213 07:37:18.799863 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:18.994511 sshd[3281]: Failed password for root from 1.117.181.161 port 44730 ssh2 Feb 13 07:37:19.088207 env[1471]: time="2024-02-13T07:37:19.088166925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:19.088831 env[1471]: time="2024-02-13T07:37:19.088797057Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:19.089799 env[1471]: time="2024-02-13T07:37:19.089765251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:19.090794 env[1471]: time="2024-02-13T07:37:19.090749568Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:19.091288 env[1471]: time="2024-02-13T07:37:19.091245609Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 07:37:19.092576 env[1471]: time="2024-02-13T07:37:19.092549892Z" level=info msg="CreateContainer within sandbox \"0602277d23283305d5624976102e433bab9bf320bc6795902df74ea3928c8db9\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 07:37:19.097452 env[1471]: time="2024-02-13T07:37:19.097408515Z" level=info msg="CreateContainer within sandbox \"0602277d23283305d5624976102e433bab9bf320bc6795902df74ea3928c8db9\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4fba5412bbeddc50c4108c71765fb6b11b61907405527b6dccce63a3697b11e2\"" Feb 13 07:37:19.097809 env[1471]: time="2024-02-13T07:37:19.097773706Z" level=info msg="StartContainer for \"4fba5412bbeddc50c4108c71765fb6b11b61907405527b6dccce63a3697b11e2\"" Feb 13 07:37:19.107771 systemd[1]: Started cri-containerd-4fba5412bbeddc50c4108c71765fb6b11b61907405527b6dccce63a3697b11e2.scope. 
Feb 13 07:37:19.117980 env[1471]: time="2024-02-13T07:37:19.117956494Z" level=info msg="StartContainer for \"4fba5412bbeddc50c4108c71765fb6b11b61907405527b6dccce63a3697b11e2\" returns successfully" Feb 13 07:37:19.776361 kubelet[1847]: E0213 07:37:19.776251 1847 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:19.800317 kubelet[1847]: E0213 07:37:19.800211 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:20.801557 kubelet[1847]: E0213 07:37:20.801448 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:20.996381 sshd[3281]: Received disconnect from 1.117.181.161 port 44730:11: Bye Bye [preauth] Feb 13 07:37:20.996381 sshd[3281]: Disconnected from authenticating user root 1.117.181.161 port 44730 [preauth] Feb 13 07:37:20.999087 systemd[1]: sshd@6-139.178.90.101:22-1.117.181.161:44730.service: Deactivated successfully. Feb 13 07:37:21.802316 kubelet[1847]: E0213 07:37:21.802195 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:22.802854 kubelet[1847]: E0213 07:37:22.802747 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:23.803857 kubelet[1847]: E0213 07:37:23.803751 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:24.804627 kubelet[1847]: E0213 07:37:24.804521 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:25.805472 kubelet[1847]: E0213 07:37:25.805352 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:26.806452 kubelet[1847]: E0213 07:37:26.806305 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:27.807224 kubelet[1847]: E0213 07:37:27.807116 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:28.496083 kubelet[1847]: I0213 07:37:28.495987 1847 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.738700035 podCreationTimestamp="2024-02-13 07:37:14 +0000 UTC" firstStartedPulling="2024-02-13 07:37:15.334251925 +0000 UTC m=+35.872856090" lastFinishedPulling="2024-02-13 07:37:19.091447914 +0000 UTC m=+39.630052076" observedRunningTime="2024-02-13 07:37:20.014180628 +0000 UTC m=+40.552784861" watchObservedRunningTime="2024-02-13 07:37:28.495896021 +0000 UTC m=+49.034500255" Feb 13 07:37:28.496888 kubelet[1847]: I0213 07:37:28.496817 1847 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:37:28.510808 systemd[1]: Created slice kubepods-besteffort-pod83242cde_c728_41bf_ae82_1381e773d073.slice. 
Feb 13 07:37:28.648818 kubelet[1847]: I0213 07:37:28.648702 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84r7s\" (UniqueName: \"kubernetes.io/projected/83242cde-c728-41bf-ae82-1381e773d073-kube-api-access-84r7s\") pod \"test-pod-1\" (UID: \"83242cde-c728-41bf-ae82-1381e773d073\") " pod="default/test-pod-1" Feb 13 07:37:28.649059 kubelet[1847]: I0213 07:37:28.648923 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8b92c37-d2a8-42ef-bfbe-2f3022a57399\" (UniqueName: \"kubernetes.io/nfs/83242cde-c728-41bf-ae82-1381e773d073-pvc-b8b92c37-d2a8-42ef-bfbe-2f3022a57399\") pod \"test-pod-1\" (UID: \"83242cde-c728-41bf-ae82-1381e773d073\") " pod="default/test-pod-1" Feb 13 07:37:28.773461 kernel: FS-Cache: Loaded Feb 13 07:37:28.807707 kubelet[1847]: E0213 07:37:28.807668 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:28.809718 kernel: RPC: Registered named UNIX socket transport module. Feb 13 07:37:28.809776 kernel: RPC: Registered udp transport module. Feb 13 07:37:28.809808 kernel: RPC: Registered tcp transport module. Feb 13 07:37:28.814647 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 07:37:28.854439 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 13 07:37:28.991288 kernel: NFS: Registering the id_resolver key type Feb 13 07:37:28.991330 kernel: Key type id_resolver registered Feb 13 07:37:28.991343 kernel: Key type id_legacy registered Feb 13 07:37:29.348118 nfsidmap[3379]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-9e65c995fd' Feb 13 07:37:29.381620 nfsidmap[3380]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-9e65c995fd' Feb 13 07:37:29.415758 env[1471]: time="2024-02-13T07:37:29.415699312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:83242cde-c728-41bf-ae82-1381e773d073,Namespace:default,Attempt:0,}" Feb 13 07:37:29.431214 systemd-networkd[1320]: lxc936f109b7668: Link UP Feb 13 07:37:29.450538 kernel: eth0: renamed from tmp5d259 Feb 13 07:37:29.477174 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 13 07:37:29.477226 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc936f109b7668: link becomes ready Feb 13 07:37:29.477235 systemd-networkd[1320]: lxc936f109b7668: Gained carrier Feb 13 07:37:29.668474 env[1471]: time="2024-02-13T07:37:29.668410857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:37:29.668474 env[1471]: time="2024-02-13T07:37:29.668437271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:37:29.668474 env[1471]: time="2024-02-13T07:37:29.668445603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:37:29.668574 env[1471]: time="2024-02-13T07:37:29.668505825Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d259f9deb96edbb948a40f9d1bea9af16880eecac29e5199e53c81d562022e4 pid=3440 runtime=io.containerd.runc.v2 Feb 13 07:37:29.674489 systemd[1]: Started cri-containerd-5d259f9deb96edbb948a40f9d1bea9af16880eecac29e5199e53c81d562022e4.scope. Feb 13 07:37:29.695872 env[1471]: time="2024-02-13T07:37:29.695844234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:83242cde-c728-41bf-ae82-1381e773d073,Namespace:default,Attempt:0,} returns sandbox id \"5d259f9deb96edbb948a40f9d1bea9af16880eecac29e5199e53c81d562022e4\"" Feb 13 07:37:29.696638 env[1471]: time="2024-02-13T07:37:29.696594757Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 07:37:29.808202 kubelet[1847]: E0213 07:37:29.808088 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:30.119389 env[1471]: time="2024-02-13T07:37:30.119236109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:30.122080 env[1471]: time="2024-02-13T07:37:30.121953251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:30.127146 env[1471]: time="2024-02-13T07:37:30.127032137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:30.131931 env[1471]: time="2024-02-13T07:37:30.131822887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:30.134324 env[1471]: time="2024-02-13T07:37:30.134193390Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 13 07:37:30.138520 env[1471]: time="2024-02-13T07:37:30.138416945Z" level=info msg="CreateContainer within sandbox \"5d259f9deb96edbb948a40f9d1bea9af16880eecac29e5199e53c81d562022e4\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 07:37:30.153752 env[1471]: time="2024-02-13T07:37:30.153734478Z" level=info msg="CreateContainer within sandbox \"5d259f9deb96edbb948a40f9d1bea9af16880eecac29e5199e53c81d562022e4\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"edaa185bbe8b7b0e4a728cdb0c1cc0dde47ebae5e7da842a0ecc4d974499db11\"" Feb 13 07:37:30.153975 env[1471]: time="2024-02-13T07:37:30.153943892Z" level=info msg="StartContainer for \"edaa185bbe8b7b0e4a728cdb0c1cc0dde47ebae5e7da842a0ecc4d974499db11\"" Feb 13 07:37:30.162103 systemd[1]: Started cri-containerd-edaa185bbe8b7b0e4a728cdb0c1cc0dde47ebae5e7da842a0ecc4d974499db11.scope. 
Feb 13 07:37:30.173638 env[1471]: time="2024-02-13T07:37:30.173576815Z" level=info msg="StartContainer for \"edaa185bbe8b7b0e4a728cdb0c1cc0dde47ebae5e7da842a0ecc4d974499db11\" returns successfully" Feb 13 07:37:30.809085 kubelet[1847]: E0213 07:37:30.808966 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:31.050036 kubelet[1847]: I0213 07:37:31.049931 1847 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.61163244 podCreationTimestamp="2024-02-13 07:37:14 +0000 UTC" firstStartedPulling="2024-02-13 07:37:29.696448006 +0000 UTC m=+50.235052161" lastFinishedPulling="2024-02-13 07:37:30.134656483 +0000 UTC m=+50.673260712" observedRunningTime="2024-02-13 07:37:31.04932314 +0000 UTC m=+51.587927371" watchObservedRunningTime="2024-02-13 07:37:31.049840991 +0000 UTC m=+51.588445201" Feb 13 07:37:31.453765 systemd-networkd[1320]: lxc936f109b7668: Gained IPv6LL Feb 13 07:37:31.810189 kubelet[1847]: E0213 07:37:31.810080 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:32.811230 kubelet[1847]: E0213 07:37:32.811117 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:33.812400 kubelet[1847]: E0213 07:37:33.812281 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:34.812605 kubelet[1847]: E0213 07:37:34.812518 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:35.813825 kubelet[1847]: E0213 07:37:35.813746 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:36.814985 kubelet[1847]: E0213 07:37:36.814748 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:37.122586 env[1471]: time="2024-02-13T07:37:37.122447289Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 07:37:37.125654 env[1471]: time="2024-02-13T07:37:37.125641173Z" level=info msg="StopContainer for \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\" with timeout 1 (s)" Feb 13 07:37:37.125744 env[1471]: time="2024-02-13T07:37:37.125732802Z" level=info msg="Stop container \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\" with signal terminated" Feb 13 07:37:37.128580 systemd-networkd[1320]: lxc_health: Link DOWN Feb 13 07:37:37.128584 systemd-networkd[1320]: lxc_health: Lost carrier Feb 13 07:37:37.177890 systemd[1]: cri-containerd-d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3.scope: Deactivated successfully. Feb 13 07:37:37.178177 systemd[1]: cri-containerd-d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3.scope: Consumed 4.712s CPU time. Feb 13 07:37:37.197480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3-rootfs.mount: Deactivated successfully. 
Feb 13 07:37:37.815766 kubelet[1847]: E0213 07:37:37.815670 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:38.129936 env[1471]: time="2024-02-13T07:37:38.129658673Z" level=info msg="Kill container \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\"" Feb 13 07:37:38.205811 env[1471]: time="2024-02-13T07:37:38.205674531Z" level=info msg="shim disconnected" id=d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3 Feb 13 07:37:38.205811 env[1471]: time="2024-02-13T07:37:38.205778483Z" level=warning msg="cleaning up after shim disconnected" id=d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3 namespace=k8s.io Feb 13 07:37:38.205811 env[1471]: time="2024-02-13T07:37:38.205807429Z" level=info msg="cleaning up dead shim" Feb 13 07:37:38.222112 env[1471]: time="2024-02-13T07:37:38.221989533Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3580 runtime=io.containerd.runc.v2\n" Feb 13 07:37:38.225477 env[1471]: time="2024-02-13T07:37:38.225348263Z" level=info msg="StopContainer for \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\" returns successfully" Feb 13 07:37:38.226781 env[1471]: time="2024-02-13T07:37:38.226660362Z" level=info msg="StopPodSandbox for \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\"" Feb 13 07:37:38.226983 env[1471]: time="2024-02-13T07:37:38.226807698Z" level=info msg="Container to stop \"b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:38.226983 env[1471]: time="2024-02-13T07:37:38.226853907Z" level=info msg="Container to stop \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:38.226983 env[1471]: time="2024-02-13T07:37:38.226889074Z" level=info msg="Container to stop \"df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:38.226983 env[1471]: time="2024-02-13T07:37:38.226920532Z" level=info msg="Container to stop \"2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:38.226983 env[1471]: time="2024-02-13T07:37:38.226950922Z" level=info msg="Container to stop \"cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:38.231786 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633-shm.mount: Deactivated successfully. Feb 13 07:37:38.239330 systemd[1]: cri-containerd-200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633.scope: Deactivated successfully. Feb 13 07:37:38.255584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633-rootfs.mount: Deactivated successfully. 
Feb 13 07:37:38.265690 env[1471]: time="2024-02-13T07:37:38.265621372Z" level=info msg="shim disconnected" id=200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633 Feb 13 07:37:38.265690 env[1471]: time="2024-02-13T07:37:38.265651686Z" level=warning msg="cleaning up after shim disconnected" id=200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633 namespace=k8s.io Feb 13 07:37:38.265690 env[1471]: time="2024-02-13T07:37:38.265657924Z" level=info msg="cleaning up dead shim" Feb 13 07:37:38.269327 env[1471]: time="2024-02-13T07:37:38.269311059Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3610 runtime=io.containerd.runc.v2\n" Feb 13 07:37:38.269518 env[1471]: time="2024-02-13T07:37:38.269473140Z" level=info msg="TearDown network for sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" successfully" Feb 13 07:37:38.269518 env[1471]: time="2024-02-13T07:37:38.269487092Z" level=info msg="StopPodSandbox for \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" returns successfully" Feb 13 07:37:38.427300 kubelet[1847]: I0213 07:37:38.427092 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-host-proc-sys-net\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.427300 kubelet[1847]: I0213 07:37:38.427196 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-bpf-maps\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.427300 kubelet[1847]: I0213 07:37:38.427208 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:38.427300 kubelet[1847]: I0213 07:37:38.427260 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-cgroup\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.427941 kubelet[1847]: I0213 07:37:38.427343 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-config-path\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.427941 kubelet[1847]: I0213 07:37:38.427337 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:38.427941 kubelet[1847]: I0213 07:37:38.427410 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cni-path\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.427941 kubelet[1847]: I0213 07:37:38.427513 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-host-proc-sys-kernel\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.427941 kubelet[1847]: I0213 07:37:38.427430 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:38.428668 kubelet[1847]: I0213 07:37:38.427521 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cni-path" (OuterVolumeSpecName: "cni-path") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:38.428668 kubelet[1847]: I0213 07:37:38.427607 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-etc-cni-netd\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.428668 kubelet[1847]: I0213 07:37:38.427616 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:38.428668 kubelet[1847]: I0213 07:37:38.427664 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-xtables-lock\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.428668 kubelet[1847]: I0213 07:37:38.427725 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef40e6d3-001f-48c3-82da-9b0db1166435-hubble-tls\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.429394 kubelet[1847]: I0213 07:37:38.427731 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:38.429394 kubelet[1847]: I0213 07:37:38.427776 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-hostproc\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.429394 kubelet[1847]: I0213 07:37:38.427784 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:38.429394 kubelet[1847]: I0213 07:37:38.427838 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-hostproc" (OuterVolumeSpecName: "hostproc") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:38.429394 kubelet[1847]: W0213 07:37:38.427845 1847 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ef40e6d3-001f-48c3-82da-9b0db1166435/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 07:37:38.429394 kubelet[1847]: I0213 07:37:38.427948 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxwk8\" (UniqueName: \"kubernetes.io/projected/ef40e6d3-001f-48c3-82da-9b0db1166435-kube-api-access-dxwk8\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.430102 kubelet[1847]: I0213 07:37:38.428067 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-run\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.430102 kubelet[1847]: I0213 07:37:38.428192 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef40e6d3-001f-48c3-82da-9b0db1166435-clustermesh-secrets\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.430102 kubelet[1847]: I0213 07:37:38.428188 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:38.430102 kubelet[1847]: I0213 07:37:38.428296 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-lib-modules\") pod \"ef40e6d3-001f-48c3-82da-9b0db1166435\" (UID: \"ef40e6d3-001f-48c3-82da-9b0db1166435\") " Feb 13 07:37:38.430102 kubelet[1847]: I0213 07:37:38.428369 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:38.430102 kubelet[1847]: I0213 07:37:38.428426 1847 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-etc-cni-netd\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.430694 kubelet[1847]: I0213 07:37:38.428524 1847 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-xtables-lock\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.430694 kubelet[1847]: I0213 07:37:38.428582 1847 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-hostproc\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.430694 kubelet[1847]: I0213 07:37:38.428638 1847 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-run\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.430694 kubelet[1847]: I0213 07:37:38.428694 1847 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-host-proc-sys-net\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.430694 kubelet[1847]: I0213 07:37:38.428755 1847 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-bpf-maps\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.430694 kubelet[1847]: I0213 07:37:38.428794 1847 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-cgroup\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.430694 kubelet[1847]: I0213 07:37:38.428822 1847 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-cni-path\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.430694 kubelet[1847]: I0213 07:37:38.428855 1847 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-host-proc-sys-kernel\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.432935 kubelet[1847]: I0213 07:37:38.432880 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod 
"ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 07:37:38.433273 kubelet[1847]: I0213 07:37:38.433238 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef40e6d3-001f-48c3-82da-9b0db1166435-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:37:38.433320 kubelet[1847]: I0213 07:37:38.433291 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef40e6d3-001f-48c3-82da-9b0db1166435-kube-api-access-dxwk8" (OuterVolumeSpecName: "kube-api-access-dxwk8") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "kube-api-access-dxwk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:37:38.433352 kubelet[1847]: I0213 07:37:38.433319 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef40e6d3-001f-48c3-82da-9b0db1166435-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ef40e6d3-001f-48c3-82da-9b0db1166435" (UID: "ef40e6d3-001f-48c3-82da-9b0db1166435"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 07:37:38.433976 systemd[1]: var-lib-kubelet-pods-ef40e6d3\x2d001f\x2d48c3\x2d82da\x2d9b0db1166435-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxwk8.mount: Deactivated successfully. Feb 13 07:37:38.434032 systemd[1]: var-lib-kubelet-pods-ef40e6d3\x2d001f\x2d48c3\x2d82da\x2d9b0db1166435-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 07:37:38.434066 systemd[1]: var-lib-kubelet-pods-ef40e6d3\x2d001f\x2d48c3\x2d82da\x2d9b0db1166435-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 13 07:37:38.529203 kubelet[1847]: I0213 07:37:38.529100 1847 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef40e6d3-001f-48c3-82da-9b0db1166435-cilium-config-path\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.529203 kubelet[1847]: I0213 07:37:38.529176 1847 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef40e6d3-001f-48c3-82da-9b0db1166435-hubble-tls\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.529203 kubelet[1847]: I0213 07:37:38.529213 1847 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dxwk8\" (UniqueName: \"kubernetes.io/projected/ef40e6d3-001f-48c3-82da-9b0db1166435-kube-api-access-dxwk8\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.529711 kubelet[1847]: I0213 07:37:38.529246 1847 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef40e6d3-001f-48c3-82da-9b0db1166435-clustermesh-secrets\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.529711 kubelet[1847]: I0213 07:37:38.529275 1847 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef40e6d3-001f-48c3-82da-9b0db1166435-lib-modules\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:38.816201 kubelet[1847]: E0213 07:37:38.816100 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:39.066149 kubelet[1847]: I0213 07:37:39.066081 1847 scope.go:115] "RemoveContainer" containerID="d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3" Feb 13 07:37:39.069300 env[1471]: time="2024-02-13T07:37:39.069124829Z" level=info msg="RemoveContainer for \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\"" Feb 13 07:37:39.072342 env[1471]: time="2024-02-13T07:37:39.072328381Z" level=info msg="RemoveContainer for \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\" returns successfully" Feb 13 07:37:39.072540 kubelet[1847]: I0213 07:37:39.072497 1847 scope.go:115] "RemoveContainer" containerID="b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d" Feb 13 07:37:39.073352 env[1471]: time="2024-02-13T07:37:39.073311964Z" level=info msg="RemoveContainer for \"b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d\"" Feb 13 07:37:39.073668 systemd[1]: Removed slice kubepods-burstable-podef40e6d3_001f_48c3_82da_9b0db1166435.slice. Feb 13 07:37:39.073787 systemd[1]: kubepods-burstable-podef40e6d3_001f_48c3_82da_9b0db1166435.slice: Consumed 4.798s CPU time. 
Feb 13 07:37:39.074707 env[1471]: time="2024-02-13T07:37:39.074667046Z" level=info msg="RemoveContainer for \"b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d\" returns successfully" Feb 13 07:37:39.074741 kubelet[1847]: I0213 07:37:39.074730 1847 scope.go:115] "RemoveContainer" containerID="cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a" Feb 13 07:37:39.075273 env[1471]: time="2024-02-13T07:37:39.075261792Z" level=info msg="RemoveContainer for \"cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a\"" Feb 13 07:37:39.076231 env[1471]: time="2024-02-13T07:37:39.076203417Z" level=info msg="RemoveContainer for \"cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a\" returns successfully" Feb 13 07:37:39.076290 kubelet[1847]: I0213 07:37:39.076283 1847 scope.go:115] "RemoveContainer" containerID="2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73" Feb 13 07:37:39.076791 env[1471]: time="2024-02-13T07:37:39.076780068Z" level=info msg="RemoveContainer for \"2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73\"" Feb 13 07:37:39.077876 env[1471]: time="2024-02-13T07:37:39.077865782Z" level=info msg="RemoveContainer for \"2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73\" returns successfully" Feb 13 07:37:39.077926 kubelet[1847]: I0213 07:37:39.077919 1847 scope.go:115] "RemoveContainer" containerID="df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350" Feb 13 07:37:39.078344 env[1471]: time="2024-02-13T07:37:39.078305725Z" level=info msg="RemoveContainer for \"df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350\"" Feb 13 07:37:39.095757 env[1471]: time="2024-02-13T07:37:39.095711125Z" level=info msg="RemoveContainer for \"df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350\" returns successfully" Feb 13 07:37:39.095806 kubelet[1847]: I0213 07:37:39.095788 1847 scope.go:115] "RemoveContainer" containerID="d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3" Feb 13 07:37:39.095960 env[1471]: time="2024-02-13T07:37:39.095908125Z" level=error msg="ContainerStatus for \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\": not found" Feb 13 07:37:39.096051 kubelet[1847]: E0213 07:37:39.096008 1847 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\": not found" containerID="d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3" Feb 13 07:37:39.096051 kubelet[1847]: I0213 07:37:39.096034 1847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3} err="failed to get container status \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d66eb7449a4e9b6bb47f1698e9a9f3f4b727600b8d7f5d4767c3011b0002f7d3\": not found" Feb 13 07:37:39.096051 kubelet[1847]: I0213 07:37:39.096041 1847 scope.go:115] "RemoveContainer" containerID="b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d" Feb 13 07:37:39.096150 env[1471]: time="2024-02-13T07:37:39.096122825Z" level=error msg="ContainerStatus for 
\"b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d\": not found" Feb 13 07:37:39.096213 kubelet[1847]: E0213 07:37:39.096203 1847 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d\": not found" containerID="b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d" Feb 13 07:37:39.096242 kubelet[1847]: I0213 07:37:39.096223 1847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d} err="failed to get container status \"b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9b264918cd1e58fb37d509bf16c2cd15556ea4037d7cd2da5e55ac02369269d\": not found" Feb 13 07:37:39.096242 kubelet[1847]: I0213 07:37:39.096231 1847 scope.go:115] "RemoveContainer" containerID="cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a" Feb 13 07:37:39.096349 env[1471]: time="2024-02-13T07:37:39.096321469Z" level=error msg="ContainerStatus for \"cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a\": not found" Feb 13 07:37:39.096397 kubelet[1847]: E0213 07:37:39.096391 1847 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a\": not found" containerID="cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a" Feb 13 07:37:39.096424 kubelet[1847]: I0213 07:37:39.096405 1847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a} err="failed to get container status \"cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a\": rpc error: code = NotFound desc = an error occurred when try to find container \"cfbbbbc13fea2a1249061d86bbb58fbf099ccd71adaefa004c831f65964de11a\": not found" Feb 13 07:37:39.096424 kubelet[1847]: I0213 07:37:39.096414 1847 scope.go:115] "RemoveContainer" containerID="2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73" Feb 13 07:37:39.096585 env[1471]: time="2024-02-13T07:37:39.096556814Z" level=error msg="ContainerStatus for \"2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73\": not found" Feb 13 07:37:39.096634 kubelet[1847]: E0213 07:37:39.096628 1847 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73\": not found" containerID="2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73" Feb 13 07:37:39.096667 kubelet[1847]: I0213 07:37:39.096641 1847 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={Type:containerd ID:2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73} err="failed to get container status \"2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73\": rpc error: code = NotFound desc = an error occurred when try to find container \"2fbdf5817cf07de79f97f11a3b9d62ce35a648d81adfbf915b45e4625a7fda73\": not found" Feb 13 07:37:39.096667 kubelet[1847]: I0213 07:37:39.096649 1847 scope.go:115] "RemoveContainer" containerID="df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350" Feb 13 07:37:39.096761 env[1471]: time="2024-02-13T07:37:39.096733637Z" level=error msg="ContainerStatus for \"df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350\": not found" Feb 13 07:37:39.096811 kubelet[1847]: E0213 07:37:39.096805 1847 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350\": not found" containerID="df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350" Feb 13 07:37:39.096839 kubelet[1847]: I0213 07:37:39.096818 1847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350} err="failed to get container status \"df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350\": rpc error: code = NotFound desc = an error occurred when try to find container \"df02e9dbcce5c145be25bafc0bc241e192d5a38884399825cbe3bc391901d350\": not found" Feb 13 07:37:39.477753 kubelet[1847]: I0213 07:37:39.477547 1847 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:37:39.477753 kubelet[1847]: E0213 07:37:39.477666 1847 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef40e6d3-001f-48c3-82da-9b0db1166435" containerName="apply-sysctl-overwrites" Feb 13 07:37:39.477753 kubelet[1847]: E0213 07:37:39.477697 1847 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef40e6d3-001f-48c3-82da-9b0db1166435" containerName="clean-cilium-state" Feb 13 07:37:39.477753 kubelet[1847]: E0213 07:37:39.477723 1847 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef40e6d3-001f-48c3-82da-9b0db1166435" containerName="mount-cgroup" Feb 13 07:37:39.477753 kubelet[1847]: E0213 07:37:39.477743 1847 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef40e6d3-001f-48c3-82da-9b0db1166435" containerName="mount-bpf-fs" Feb 13 07:37:39.477753 kubelet[1847]: E0213 07:37:39.477762 1847 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ef40e6d3-001f-48c3-82da-9b0db1166435" containerName="cilium-agent" Feb 13 07:37:39.478668 kubelet[1847]: I0213 07:37:39.477815 1847 memory_manager.go:346] "RemoveStaleState removing state" podUID="ef40e6d3-001f-48c3-82da-9b0db1166435" containerName="cilium-agent" Feb 13 07:37:39.478668 kubelet[1847]: I0213 07:37:39.478209 1847 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:37:39.493910 systemd[1]: Created slice kubepods-burstable-podf0e67bfd_6f86_4bf1_b307_7fc50247717d.slice. Feb 13 07:37:39.523025 systemd[1]: Created slice kubepods-besteffort-podd16037e0_98a4_42ed_af6f_e50b6dd57e80.slice. 
Feb 13 07:37:39.636577 kubelet[1847]: I0213 07:37:39.636512 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcbcd\" (UniqueName: \"kubernetes.io/projected/f0e67bfd-6f86-4bf1-b307-7fc50247717d-kube-api-access-tcbcd\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.636833 kubelet[1847]: I0213 07:37:39.636633 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d16037e0-98a4-42ed-af6f-e50b6dd57e80-cilium-config-path\") pod \"cilium-operator-574c4bb98d-hbpsx\" (UID: \"d16037e0-98a4-42ed-af6f-e50b6dd57e80\") " pod="kube-system/cilium-operator-574c4bb98d-hbpsx" Feb 13 07:37:39.636833 kubelet[1847]: I0213 07:37:39.636765 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cni-path\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.637037 kubelet[1847]: I0213 07:37:39.636866 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-lib-modules\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.637037 kubelet[1847]: I0213 07:37:39.636971 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-bpf-maps\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.637257 kubelet[1847]: I0213 07:37:39.637127 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-cgroup\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.637362 kubelet[1847]: I0213 07:37:39.637255 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-etc-cni-netd\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.637362 kubelet[1847]: I0213 07:37:39.637322 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-xtables-lock\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.637564 kubelet[1847]: I0213 07:37:39.637486 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-run\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.637674 kubelet[1847]: I0213 07:37:39.637582 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-ipsec-secrets\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.637674 kubelet[1847]: I0213 07:37:39.637644 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-host-proc-sys-net\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.637878 kubelet[1847]: I0213 07:37:39.637749 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr8t8\" (UniqueName: \"kubernetes.io/projected/d16037e0-98a4-42ed-af6f-e50b6dd57e80-kube-api-access-rr8t8\") pod \"cilium-operator-574c4bb98d-hbpsx\" (UID: \"d16037e0-98a4-42ed-af6f-e50b6dd57e80\") " pod="kube-system/cilium-operator-574c4bb98d-hbpsx" Feb 13 07:37:39.637878 kubelet[1847]: I0213 07:37:39.637822 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-host-proc-sys-kernel\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.638092 kubelet[1847]: I0213 07:37:39.638053 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0e67bfd-6f86-4bf1-b307-7fc50247717d-hubble-tls\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.638193 kubelet[1847]: I0213 07:37:39.638142 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-hostproc\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.638335 kubelet[1847]: I0213 07:37:39.638297 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0e67bfd-6f86-4bf1-b307-7fc50247717d-clustermesh-secrets\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.638587 kubelet[1847]: I0213 07:37:39.638505 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-config-path\") pod \"cilium-qpxld\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " pod="kube-system/cilium-qpxld" Feb 13 07:37:39.776059 kubelet[1847]: E0213 07:37:39.775999 1847 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:39.783406 env[1471]: time="2024-02-13T07:37:39.783384044Z" level=info msg="StopPodSandbox for \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\"" Feb 13 07:37:39.783620 env[1471]: time="2024-02-13T07:37:39.783450807Z" level=info msg="TearDown network for sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" successfully" Feb 13 07:37:39.783620 env[1471]: time="2024-02-13T07:37:39.783480316Z" level=info 
msg="StopPodSandbox for \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" returns successfully" Feb 13 07:37:39.783733 env[1471]: time="2024-02-13T07:37:39.783716038Z" level=info msg="RemovePodSandbox for \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\"" Feb 13 07:37:39.783785 env[1471]: time="2024-02-13T07:37:39.783736867Z" level=info msg="Forcibly stopping sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\"" Feb 13 07:37:39.783821 env[1471]: time="2024-02-13T07:37:39.783806043Z" level=info msg="TearDown network for sandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" successfully" Feb 13 07:37:39.785479 env[1471]: time="2024-02-13T07:37:39.785444096Z" level=info msg="RemovePodSandbox \"200136cf54d2cf26b5b5fbee275221b53bc51249940693bdcd6ca0793df5f633\" returns successfully" Feb 13 07:37:39.817057 kubelet[1847]: E0213 07:37:39.816958 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:39.820506 env[1471]: time="2024-02-13T07:37:39.820392446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qpxld,Uid:f0e67bfd-6f86-4bf1-b307-7fc50247717d,Namespace:kube-system,Attempt:0,}" Feb 13 07:37:39.825745 env[1471]: time="2024-02-13T07:37:39.825632363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-hbpsx,Uid:d16037e0-98a4-42ed-af6f-e50b6dd57e80,Namespace:kube-system,Attempt:0,}" Feb 13 07:37:39.836604 env[1471]: time="2024-02-13T07:37:39.836569532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:37:39.836604 env[1471]: time="2024-02-13T07:37:39.836591579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:37:39.836604 env[1471]: time="2024-02-13T07:37:39.836598632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:37:39.836724 env[1471]: time="2024-02-13T07:37:39.836662423Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4 pid=3637 runtime=io.containerd.runc.v2 Feb 13 07:37:39.838034 env[1471]: time="2024-02-13T07:37:39.838003430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:37:39.838034 env[1471]: time="2024-02-13T07:37:39.838021255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:37:39.838034 env[1471]: time="2024-02-13T07:37:39.838028353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:37:39.838114 env[1471]: time="2024-02-13T07:37:39.838081437Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b1eb63975a15701ecdb15cccb511c0a2effa425fdb5a3bba11882871c2a2085 pid=3653 runtime=io.containerd.runc.v2 Feb 13 07:37:39.843003 systemd[1]: Started cri-containerd-8b1eb63975a15701ecdb15cccb511c0a2effa425fdb5a3bba11882871c2a2085.scope. 
Feb 13 07:37:39.843945 systemd[1]: Started cri-containerd-d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4.scope. Feb 13 07:37:39.853946 env[1471]: time="2024-02-13T07:37:39.853923980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qpxld,Uid:f0e67bfd-6f86-4bf1-b307-7fc50247717d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4\"" Feb 13 07:37:39.854854 env[1471]: time="2024-02-13T07:37:39.854841487Z" level=info msg="CreateContainer within sandbox \"d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 07:37:39.857631 kubelet[1847]: E0213 07:37:39.857593 1847 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 07:37:39.860528 env[1471]: time="2024-02-13T07:37:39.860473290Z" level=info msg="CreateContainer within sandbox \"d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601\"" Feb 13 07:37:39.860783 env[1471]: time="2024-02-13T07:37:39.860739475Z" level=info msg="StartContainer for \"688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601\"" Feb 13 07:37:39.867775 env[1471]: time="2024-02-13T07:37:39.867738308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-hbpsx,Uid:d16037e0-98a4-42ed-af6f-e50b6dd57e80,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b1eb63975a15701ecdb15cccb511c0a2effa425fdb5a3bba11882871c2a2085\"" Feb 13 07:37:39.867908 systemd[1]: Started cri-containerd-688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601.scope. Feb 13 07:37:39.868532 env[1471]: time="2024-02-13T07:37:39.868518053Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 07:37:39.873161 systemd[1]: cri-containerd-688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601.scope: Deactivated successfully. Feb 13 07:37:39.873318 systemd[1]: Stopped cri-containerd-688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601.scope. 
Feb 13 07:37:39.878180 kubelet[1847]: I0213 07:37:39.878165 1847 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ef40e6d3-001f-48c3-82da-9b0db1166435 path="/var/lib/kubelet/pods/ef40e6d3-001f-48c3-82da-9b0db1166435/volumes" Feb 13 07:37:39.880806 env[1471]: time="2024-02-13T07:37:39.880736972Z" level=info msg="shim disconnected" id=688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601 Feb 13 07:37:39.880806 env[1471]: time="2024-02-13T07:37:39.880776548Z" level=warning msg="cleaning up after shim disconnected" id=688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601 namespace=k8s.io Feb 13 07:37:39.880806 env[1471]: time="2024-02-13T07:37:39.880785920Z" level=info msg="cleaning up dead shim" Feb 13 07:37:39.884224 env[1471]: time="2024-02-13T07:37:39.884178118Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3734 runtime=io.containerd.runc.v2\ntime=\"2024-02-13T07:37:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 13 07:37:39.884354 env[1471]: time="2024-02-13T07:37:39.884301089Z" level=error msg="copy shim log" error="read /proc/self/fd/68: file already closed" Feb 13 07:37:39.884498 env[1471]: time="2024-02-13T07:37:39.884436401Z" level=error msg="Failed to pipe stderr of container \"688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601\"" error="reading from a closed fifo" Feb 13 07:37:39.884498 env[1471]: time="2024-02-13T07:37:39.884448245Z" level=error msg="Failed to pipe stdout of container \"688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601\"" error="reading from a closed fifo" Feb 13 07:37:39.885034 env[1471]: time="2024-02-13T07:37:39.884984202Z" level=error msg="StartContainer for \"688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 13 07:37:39.885117 kubelet[1847]: E0213 07:37:39.885080 1847 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601" Feb 13 07:37:39.885150 kubelet[1847]: E0213 07:37:39.885146 1847 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 13 07:37:39.885150 kubelet[1847]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 13 07:37:39.885150 kubelet[1847]: rm /hostbin/cilium-mount Feb 13 07:37:39.885202 kubelet[1847]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tcbcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-qpxld_kube-system(f0e67bfd-6f86-4bf1-b307-7fc50247717d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 13 07:37:39.885202 kubelet[1847]: E0213 07:37:39.885170 1847 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-qpxld" podUID=f0e67bfd-6f86-4bf1-b307-7fc50247717d Feb 13 07:37:40.073226 env[1471]: time="2024-02-13T07:37:40.073122287Z" level=info msg="StopPodSandbox for \"d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4\"" Feb 13 07:37:40.073554 env[1471]: time="2024-02-13T07:37:40.073299933Z" level=info msg="Container to stop \"688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 07:37:40.087184 systemd[1]: cri-containerd-d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4.scope: Deactivated successfully. 
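
The StartContainer failure above is the key event in this stretch of the log: the mount-cgroup init container (which copies cilium-mount into the host path mounted at /hostbin, re-runs it via nsenter inside PID 1's cgroup and mount namespaces to set up the cgroup root at /run/cilium/cgroupv2, then deletes the copy) carries SELinuxOptions with Type:spc_t and Level:s0, and during container init the OCI runtime writes that label to /proc/self/attr/keycreate so that kernel keyrings created by the container get that context. On this node the kernel rejects the write, which surfaces in the log as "write /proc/self/attr/keycreate: invalid argument". Below is a minimal sketch of that failing step, in Python rather than the runtime's actual Go code, assuming the full label string system_u:system_r:spc_t:s0 (only the spc_t type and s0 level appear in the spec above; the user and role fields are illustrative):

    import os

    def set_keycreate_label(label: str) -> None:
        # Roughly what the container runtime does for a container that has an
        # SELinux label: write the label to /proc/self/attr/keycreate so that
        # keyrings created by the container process are labeled with it. On a
        # kernel that does not accept this attribute, the write fails with
        # EINVAL, the "invalid argument" seen in the log above.
        fd = os.open("/proc/self/attr/keycreate", os.O_WRONLY)
        try:
            os.write(fd, label.encode())
        finally:
            os.close(fd)

    if __name__ == "__main__":
        try:
            # User and role are assumed; the log only shows Type:spc_t, Level:s0.
            set_keycreate_label("system_u:system_r:spc_t:s0")
            print("keycreate label set")
        except OSError as err:
            print(f"write /proc/self/attr/keycreate failed: {err.strerror}")

Run on the affected node, the same error would be expected for as long as whatever kernel or SELinux state triggered it persists; the retry pod cilium-pgkkb created below does get past this step, so the condition evidently did not persist between the two attempts.
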
Feb 13 07:37:40.137746 env[1471]: time="2024-02-13T07:37:40.137651449Z" level=info msg="shim disconnected" id=d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4 Feb 13 07:37:40.138027 env[1471]: time="2024-02-13T07:37:40.137750943Z" level=warning msg="cleaning up after shim disconnected" id=d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4 namespace=k8s.io Feb 13 07:37:40.138027 env[1471]: time="2024-02-13T07:37:40.137778581Z" level=info msg="cleaning up dead shim" Feb 13 07:37:40.149838 env[1471]: time="2024-02-13T07:37:40.149777135Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3764 runtime=io.containerd.runc.v2\n" Feb 13 07:37:40.150340 env[1471]: time="2024-02-13T07:37:40.150259141Z" level=info msg="TearDown network for sandbox \"d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4\" successfully" Feb 13 07:37:40.150340 env[1471]: time="2024-02-13T07:37:40.150300342Z" level=info msg="StopPodSandbox for \"d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4\" returns successfully" Feb 13 07:37:40.343137 kubelet[1847]: I0213 07:37:40.342923 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcbcd\" (UniqueName: \"kubernetes.io/projected/f0e67bfd-6f86-4bf1-b307-7fc50247717d-kube-api-access-tcbcd\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.343137 kubelet[1847]: I0213 07:37:40.343022 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-lib-modules\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.343137 kubelet[1847]: I0213 07:37:40.343079 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-etc-cni-netd\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.343137 kubelet[1847]: I0213 07:37:40.343136 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-xtables-lock\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343196 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cni-path\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343184 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343206 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343252 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-run\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343297 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343286 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343323 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cni-path" (OuterVolumeSpecName: "cni-path") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343422 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0e67bfd-6f86-4bf1-b307-7fc50247717d-clustermesh-secrets\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343569 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-config-path\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343672 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-host-proc-sys-kernel\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343753 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343787 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-ipsec-secrets\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343908 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-host-proc-sys-net\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.343984 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0e67bfd-6f86-4bf1-b307-7fc50247717d-hubble-tls\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.344021 kubelet[1847]: I0213 07:37:40.344039 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-hostproc\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344003 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344093 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-bpf-maps\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.346502 kubelet[1847]: W0213 07:37:40.344052 1847 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f0e67bfd-6f86-4bf1-b307-7fc50247717d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344151 1847 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-cgroup\") pod \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\" (UID: \"f0e67bfd-6f86-4bf1-b307-7fc50247717d\") " Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344174 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-hostproc" (OuterVolumeSpecName: "hostproc") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344263 1847 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-lib-modules\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344239 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344329 1847 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-etc-cni-netd\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344387 1847 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-xtables-lock\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344460 1847 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-run\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344509 1847 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cni-path\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344490 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344548 1847 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-host-proc-sys-net\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.346502 kubelet[1847]: I0213 07:37:40.344584 1847 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-host-proc-sys-kernel\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.349073 kubelet[1847]: I0213 07:37:40.349040 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 07:37:40.349251 kubelet[1847]: I0213 07:37:40.349189 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e67bfd-6f86-4bf1-b307-7fc50247717d-kube-api-access-tcbcd" (OuterVolumeSpecName: "kube-api-access-tcbcd") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "kube-api-access-tcbcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:37:40.349299 kubelet[1847]: I0213 07:37:40.349289 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 07:37:40.349382 kubelet[1847]: I0213 07:37:40.349374 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e67bfd-6f86-4bf1-b307-7fc50247717d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 07:37:40.349402 kubelet[1847]: I0213 07:37:40.349378 1847 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e67bfd-6f86-4bf1-b307-7fc50247717d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f0e67bfd-6f86-4bf1-b307-7fc50247717d" (UID: "f0e67bfd-6f86-4bf1-b307-7fc50247717d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 07:37:40.444970 kubelet[1847]: I0213 07:37:40.444870 1847 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-config-path\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.444970 kubelet[1847]: I0213 07:37:40.444945 1847 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0e67bfd-6f86-4bf1-b307-7fc50247717d-clustermesh-secrets\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.444970 kubelet[1847]: I0213 07:37:40.444979 1847 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-ipsec-secrets\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.445484 kubelet[1847]: I0213 07:37:40.445008 1847 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0e67bfd-6f86-4bf1-b307-7fc50247717d-hubble-tls\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.445484 kubelet[1847]: I0213 07:37:40.445037 1847 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-hostproc\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.445484 kubelet[1847]: I0213 07:37:40.445067 1847 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-cilium-cgroup\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.445484 kubelet[1847]: I0213 07:37:40.445099 1847 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0e67bfd-6f86-4bf1-b307-7fc50247717d-bpf-maps\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.445484 kubelet[1847]: I0213 07:37:40.445130 1847 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tcbcd\" (UniqueName: \"kubernetes.io/projected/f0e67bfd-6f86-4bf1-b307-7fc50247717d-kube-api-access-tcbcd\") on node \"10.67.80.31\" DevicePath \"\"" Feb 13 07:37:40.747213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2165180be0da955299d3d828ccaffbf3490e52f0cb253de2b19ad7cf25625b4-shm.mount: Deactivated successfully. Feb 13 07:37:40.747272 systemd[1]: var-lib-kubelet-pods-f0e67bfd\x2d6f86\x2d4bf1\x2db307\x2d7fc50247717d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtcbcd.mount: Deactivated successfully. Feb 13 07:37:40.747314 systemd[1]: var-lib-kubelet-pods-f0e67bfd\x2d6f86\x2d4bf1\x2db307\x2d7fc50247717d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 07:37:40.747353 systemd[1]: var-lib-kubelet-pods-f0e67bfd\x2d6f86\x2d4bf1\x2db307\x2d7fc50247717d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 07:37:40.747391 systemd[1]: var-lib-kubelet-pods-f0e67bfd\x2d6f86\x2d4bf1\x2db307\x2d7fc50247717d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 13 07:37:40.817282 kubelet[1847]: E0213 07:37:40.817175 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:41.013183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2419920842.mount: Deactivated successfully. 
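
The mount units deactivated above encode kubelet volume paths with systemd's unit-name escaping: '/' becomes '-', and bytes outside a small allowed set (ASCII alphanumerics, '_', and a non-leading '.') are written as C-style hex escapes, which is why every literal '-' in the pod UID appears as \x2d and the '~' in kubernetes.io~projected appears as \x7e. The sketch below reproduces that mapping and checks it against the kube-api-access-tcbcd unit name from this log; the exact allowed-character set is an approximation of systemd-escape --path based on these examples, not taken from systemd's source:

    import string

    _KEEP = set(string.ascii_letters + string.digits + "_")

    def systemd_escape_path(path: str) -> str:
        # Strip surrounding '/', turn the remaining '/' separators into '-',
        # keep ASCII alphanumerics, '_' and any non-leading '.', and rewrite
        # every other byte as a lowercase hex escape, e.g. '-' -> \x2d, '~' -> \x7e.
        trimmed = path.strip("/")
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")
            elif ch in _KEEP or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    volume_path = (
        "/var/lib/kubelet/pods/f0e67bfd-6f86-4bf1-b307-7fc50247717d"
        "/volumes/kubernetes.io~projected/kube-api-access-tcbcd"
    )
    unit = systemd_escape_path(volume_path) + ".mount"
    # Matches the unit name systemd deactivates in the log above.
    assert unit == (
        "var-lib-kubelet-pods-f0e67bfd\\x2d6f86\\x2d4bf1\\x2db307"
        "\\x2d7fc50247717d-volumes-kubernetes.io\\x7eprojected-"
        "kube\\x2dapi\\x2daccess\\x2dtcbcd.mount"
    )
    print(unit)
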
Feb 13 07:37:41.082937 kubelet[1847]: I0213 07:37:41.082868 1847 scope.go:115] "RemoveContainer" containerID="688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601" Feb 13 07:37:41.085723 env[1471]: time="2024-02-13T07:37:41.085606570Z" level=info msg="RemoveContainer for \"688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601\"" Feb 13 07:37:41.087884 env[1471]: time="2024-02-13T07:37:41.087843952Z" level=info msg="RemoveContainer for \"688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601\" returns successfully" Feb 13 07:37:41.088611 systemd[1]: Removed slice kubepods-burstable-podf0e67bfd_6f86_4bf1_b307_7fc50247717d.slice. Feb 13 07:37:41.139937 kubelet[1847]: I0213 07:37:41.139922 1847 topology_manager.go:212] "Topology Admit Handler" Feb 13 07:37:41.140011 kubelet[1847]: E0213 07:37:41.139957 1847 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f0e67bfd-6f86-4bf1-b307-7fc50247717d" containerName="mount-cgroup" Feb 13 07:37:41.140011 kubelet[1847]: I0213 07:37:41.139975 1847 memory_manager.go:346] "RemoveStaleState removing state" podUID="f0e67bfd-6f86-4bf1-b307-7fc50247717d" containerName="mount-cgroup" Feb 13 07:37:41.142802 systemd[1]: Created slice kubepods-burstable-poda1822dd0_5321_4bf7_90ca_21da68ef04cc.slice. Feb 13 07:37:41.235695 kubelet[1847]: I0213 07:37:41.235679 1847 setters.go:548] "Node became not ready" node="10.67.80.31" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-13 07:37:41.235650164 +0000 UTC m=+61.774254318 LastTransitionTime:2024-02-13 07:37:41.235650164 +0000 UTC m=+61.774254318 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 13 07:37:41.248856 kubelet[1847]: I0213 07:37:41.248811 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1822dd0-5321-4bf7-90ca-21da68ef04cc-clustermesh-secrets\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.248856 kubelet[1847]: I0213 07:37:41.248830 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1822dd0-5321-4bf7-90ca-21da68ef04cc-host-proc-sys-kernel\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.248856 kubelet[1847]: I0213 07:37:41.248843 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1822dd0-5321-4bf7-90ca-21da68ef04cc-hostproc\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.248856 kubelet[1847]: I0213 07:37:41.248856 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1822dd0-5321-4bf7-90ca-21da68ef04cc-lib-modules\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.248951 kubelet[1847]: I0213 07:37:41.248867 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1822dd0-5321-4bf7-90ca-21da68ef04cc-xtables-lock\") 
pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.248951 kubelet[1847]: I0213 07:37:41.248897 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1822dd0-5321-4bf7-90ca-21da68ef04cc-cilium-config-path\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.248951 kubelet[1847]: I0213 07:37:41.248919 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1822dd0-5321-4bf7-90ca-21da68ef04cc-cilium-ipsec-secrets\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.248951 kubelet[1847]: I0213 07:37:41.248935 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1822dd0-5321-4bf7-90ca-21da68ef04cc-host-proc-sys-net\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.249019 kubelet[1847]: I0213 07:37:41.248955 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1822dd0-5321-4bf7-90ca-21da68ef04cc-cilium-run\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.249019 kubelet[1847]: I0213 07:37:41.248979 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1822dd0-5321-4bf7-90ca-21da68ef04cc-cilium-cgroup\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.249019 kubelet[1847]: I0213 07:37:41.248996 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1822dd0-5321-4bf7-90ca-21da68ef04cc-hubble-tls\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.249019 kubelet[1847]: I0213 07:37:41.249007 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1822dd0-5321-4bf7-90ca-21da68ef04cc-bpf-maps\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.249019 kubelet[1847]: I0213 07:37:41.249019 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1822dd0-5321-4bf7-90ca-21da68ef04cc-cni-path\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.249101 kubelet[1847]: I0213 07:37:41.249041 1847 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1822dd0-5321-4bf7-90ca-21da68ef04cc-etc-cni-netd\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.249101 kubelet[1847]: I0213 07:37:41.249066 1847 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z27j4\" (UniqueName: \"kubernetes.io/projected/a1822dd0-5321-4bf7-90ca-21da68ef04cc-kube-api-access-z27j4\") pod \"cilium-pgkkb\" (UID: \"a1822dd0-5321-4bf7-90ca-21da68ef04cc\") " pod="kube-system/cilium-pgkkb" Feb 13 07:37:41.459541 env[1471]: time="2024-02-13T07:37:41.459475790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgkkb,Uid:a1822dd0-5321-4bf7-90ca-21da68ef04cc,Namespace:kube-system,Attempt:0,}" Feb 13 07:37:41.465282 env[1471]: time="2024-02-13T07:37:41.465232573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 07:37:41.465282 env[1471]: time="2024-02-13T07:37:41.465271573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 07:37:41.465282 env[1471]: time="2024-02-13T07:37:41.465278273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 07:37:41.465416 env[1471]: time="2024-02-13T07:37:41.465369108Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1 pid=3791 runtime=io.containerd.runc.v2 Feb 13 07:37:41.472052 systemd[1]: Started cri-containerd-67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1.scope. Feb 13 07:37:41.483569 env[1471]: time="2024-02-13T07:37:41.483539620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgkkb,Uid:a1822dd0-5321-4bf7-90ca-21da68ef04cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1\"" Feb 13 07:37:41.485193 env[1471]: time="2024-02-13T07:37:41.485176487Z" level=info msg="CreateContainer within sandbox \"67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 07:37:41.487737 env[1471]: time="2024-02-13T07:37:41.487696160Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:41.488711 env[1471]: time="2024-02-13T07:37:41.488670878Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:41.490003 env[1471]: time="2024-02-13T07:37:41.489958985Z" level=info msg="CreateContainer within sandbox \"67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d1393f87a07bf03845d9aa3999f744359d061d0eabb2198d0c9a457339fa1ffb\"" Feb 13 07:37:41.490095 env[1471]: time="2024-02-13T07:37:41.490067855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 13 07:37:41.490201 env[1471]: time="2024-02-13T07:37:41.490167206Z" level=info msg="StartContainer for \"d1393f87a07bf03845d9aa3999f744359d061d0eabb2198d0c9a457339fa1ffb\"" Feb 13 07:37:41.490682 env[1471]: 
time="2024-02-13T07:37:41.490663665Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 07:37:41.491998 env[1471]: time="2024-02-13T07:37:41.491981647Z" level=info msg="CreateContainer within sandbox \"8b1eb63975a15701ecdb15cccb511c0a2effa425fdb5a3bba11882871c2a2085\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 07:37:41.496791 env[1471]: time="2024-02-13T07:37:41.496765855Z" level=info msg="CreateContainer within sandbox \"8b1eb63975a15701ecdb15cccb511c0a2effa425fdb5a3bba11882871c2a2085\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"02ceec6c344011b36d32948b5ea3402db80817df1bbba1e3ed7b6aedcc88c478\"" Feb 13 07:37:41.497170 env[1471]: time="2024-02-13T07:37:41.497134339Z" level=info msg="StartContainer for \"02ceec6c344011b36d32948b5ea3402db80817df1bbba1e3ed7b6aedcc88c478\"" Feb 13 07:37:41.499001 systemd[1]: Started cri-containerd-d1393f87a07bf03845d9aa3999f744359d061d0eabb2198d0c9a457339fa1ffb.scope. Feb 13 07:37:41.505586 systemd[1]: Started cri-containerd-02ceec6c344011b36d32948b5ea3402db80817df1bbba1e3ed7b6aedcc88c478.scope. Feb 13 07:37:41.512758 env[1471]: time="2024-02-13T07:37:41.512706491Z" level=info msg="StartContainer for \"d1393f87a07bf03845d9aa3999f744359d061d0eabb2198d0c9a457339fa1ffb\" returns successfully" Feb 13 07:37:41.517118 env[1471]: time="2024-02-13T07:37:41.517093955Z" level=info msg="StartContainer for \"02ceec6c344011b36d32948b5ea3402db80817df1bbba1e3ed7b6aedcc88c478\" returns successfully" Feb 13 07:37:41.517298 systemd[1]: cri-containerd-d1393f87a07bf03845d9aa3999f744359d061d0eabb2198d0c9a457339fa1ffb.scope: Deactivated successfully. 
Feb 13 07:37:41.680778 env[1471]: time="2024-02-13T07:37:41.680657925Z" level=info msg="shim disconnected" id=d1393f87a07bf03845d9aa3999f744359d061d0eabb2198d0c9a457339fa1ffb Feb 13 07:37:41.680778 env[1471]: time="2024-02-13T07:37:41.680770825Z" level=warning msg="cleaning up after shim disconnected" id=d1393f87a07bf03845d9aa3999f744359d061d0eabb2198d0c9a457339fa1ffb namespace=k8s.io Feb 13 07:37:41.681314 env[1471]: time="2024-02-13T07:37:41.680804109Z" level=info msg="cleaning up dead shim" Feb 13 07:37:41.698046 env[1471]: time="2024-02-13T07:37:41.697915006Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3926 runtime=io.containerd.runc.v2\n" Feb 13 07:37:41.817653 kubelet[1847]: E0213 07:37:41.817528 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:41.883311 kubelet[1847]: I0213 07:37:41.883251 1847 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f0e67bfd-6f86-4bf1-b307-7fc50247717d path="/var/lib/kubelet/pods/f0e67bfd-6f86-4bf1-b307-7fc50247717d/volumes" Feb 13 07:37:42.098908 env[1471]: time="2024-02-13T07:37:42.098665465Z" level=info msg="CreateContainer within sandbox \"67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 07:37:42.107942 kubelet[1847]: I0213 07:37:42.107846 1847 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-hbpsx" podStartSLOduration=1.485316313 podCreationTimestamp="2024-02-13 07:37:39 +0000 UTC" firstStartedPulling="2024-02-13 07:37:39.868364139 +0000 UTC m=+60.406968297" lastFinishedPulling="2024-02-13 07:37:41.490767281 +0000 UTC m=+62.029371443" observedRunningTime="2024-02-13 07:37:42.107465566 +0000 UTC m=+62.646069812" watchObservedRunningTime="2024-02-13 07:37:42.107719459 +0000 UTC m=+62.646323669" Feb 13 07:37:42.114462 env[1471]: time="2024-02-13T07:37:42.114200816Z" level=info msg="CreateContainer within sandbox \"67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6c92fb99efd7a6b6628e7fa01041598afb8e074f5004df8ac8081ab5d999547b\"" Feb 13 07:37:42.114810 env[1471]: time="2024-02-13T07:37:42.114779818Z" level=info msg="StartContainer for \"6c92fb99efd7a6b6628e7fa01041598afb8e074f5004df8ac8081ab5d999547b\"" Feb 13 07:37:42.123312 systemd[1]: Started cri-containerd-6c92fb99efd7a6b6628e7fa01041598afb8e074f5004df8ac8081ab5d999547b.scope. Feb 13 07:37:42.134204 env[1471]: time="2024-02-13T07:37:42.134179231Z" level=info msg="StartContainer for \"6c92fb99efd7a6b6628e7fa01041598afb8e074f5004df8ac8081ab5d999547b\" returns successfully" Feb 13 07:37:42.137321 systemd[1]: cri-containerd-6c92fb99efd7a6b6628e7fa01041598afb8e074f5004df8ac8081ab5d999547b.scope: Deactivated successfully. 
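
The podStartSLOduration reported above is consistent with kubelet taking the pod's age at the moment it was observed running and subtracting the image-pull window: 07:37:42.107719459 minus the creation timestamp 07:37:39 is about 3.108 s, and removing the pull interval (07:37:41.490767281 minus 07:37:39.868364139, about 1.622 s) leaves about 1.485 s, matching podStartSLOduration=1.485316313 to within a few nanoseconds. A quick check of that arithmetic; the formula is inferred from these numbers, not quoted from kubelet:

    # Timestamps from the cilium-operator-574c4bb98d-hbpsx entry above,
    # expressed as seconds after 07:37:00 UTC.
    pod_created      = 39.0            # podCreationTimestamp 07:37:39
    observed_running = 42.107719459    # watchObservedRunningTime
    first_pull_start = 39.868364139    # firstStartedPulling
    last_pull_finish = 41.490767281    # lastFinishedPulling

    slo = (observed_running - pod_created) - (last_pull_finish - first_pull_start)
    print(f"podStartSLOduration ~= {slo:.9f} s")  # ~1.485316317; the log says 1.485316313

For cilium-pgkkb further down, both pull timestamps are the zero time, so the same formula reduces to the pod's age alone: 07:37:46.161186158 minus 07:37:41 gives exactly the logged 5.161186158 s.
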
Feb 13 07:37:42.146923 env[1471]: time="2024-02-13T07:37:42.146893137Z" level=info msg="shim disconnected" id=6c92fb99efd7a6b6628e7fa01041598afb8e074f5004df8ac8081ab5d999547b Feb 13 07:37:42.147015 env[1471]: time="2024-02-13T07:37:42.146925346Z" level=warning msg="cleaning up after shim disconnected" id=6c92fb99efd7a6b6628e7fa01041598afb8e074f5004df8ac8081ab5d999547b namespace=k8s.io Feb 13 07:37:42.147015 env[1471]: time="2024-02-13T07:37:42.146932417Z" level=info msg="cleaning up dead shim" Feb 13 07:37:42.150832 env[1471]: time="2024-02-13T07:37:42.150810260Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3986 runtime=io.containerd.runc.v2\n" Feb 13 07:37:42.750270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c92fb99efd7a6b6628e7fa01041598afb8e074f5004df8ac8081ab5d999547b-rootfs.mount: Deactivated successfully. Feb 13 07:37:42.818135 kubelet[1847]: E0213 07:37:42.818072 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:42.988095 kubelet[1847]: W0213 07:37:42.987973 1847 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0e67bfd_6f86_4bf1_b307_7fc50247717d.slice/cri-containerd-688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601.scope WatchSource:0}: container "688cd9421426a6a4797d288d3fb654beec096fdc4d3b0135d251ae7d6fcb8601" in namespace "k8s.io": not found Feb 13 07:37:43.106335 env[1471]: time="2024-02-13T07:37:43.106199547Z" level=info msg="CreateContainer within sandbox \"67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 07:37:43.125407 env[1471]: time="2024-02-13T07:37:43.125287988Z" level=info msg="CreateContainer within sandbox \"67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fecaaaa0cbfd662530becacb7a5b3421c8bfb788e0761658c376f1b535f3f749\"" Feb 13 07:37:43.125939 env[1471]: time="2024-02-13T07:37:43.125893010Z" level=info msg="StartContainer for \"fecaaaa0cbfd662530becacb7a5b3421c8bfb788e0761658c376f1b535f3f749\"" Feb 13 07:37:43.134718 systemd[1]: Started cri-containerd-fecaaaa0cbfd662530becacb7a5b3421c8bfb788e0761658c376f1b535f3f749.scope. Feb 13 07:37:43.146737 env[1471]: time="2024-02-13T07:37:43.146684027Z" level=info msg="StartContainer for \"fecaaaa0cbfd662530becacb7a5b3421c8bfb788e0761658c376f1b535f3f749\" returns successfully" Feb 13 07:37:43.147993 systemd[1]: cri-containerd-fecaaaa0cbfd662530becacb7a5b3421c8bfb788e0761658c376f1b535f3f749.scope: Deactivated successfully. 
Feb 13 07:37:43.157989 env[1471]: time="2024-02-13T07:37:43.157963872Z" level=info msg="shim disconnected" id=fecaaaa0cbfd662530becacb7a5b3421c8bfb788e0761658c376f1b535f3f749 Feb 13 07:37:43.158080 env[1471]: time="2024-02-13T07:37:43.157994817Z" level=warning msg="cleaning up after shim disconnected" id=fecaaaa0cbfd662530becacb7a5b3421c8bfb788e0761658c376f1b535f3f749 namespace=k8s.io Feb 13 07:37:43.158080 env[1471]: time="2024-02-13T07:37:43.158001216Z" level=info msg="cleaning up dead shim" Feb 13 07:37:43.161945 env[1471]: time="2024-02-13T07:37:43.161924258Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4041 runtime=io.containerd.runc.v2\n" Feb 13 07:37:43.750715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fecaaaa0cbfd662530becacb7a5b3421c8bfb788e0761658c376f1b535f3f749-rootfs.mount: Deactivated successfully. Feb 13 07:37:43.819064 kubelet[1847]: E0213 07:37:43.818991 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:44.113430 env[1471]: time="2024-02-13T07:37:44.113288612Z" level=info msg="CreateContainer within sandbox \"67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 07:37:44.139638 env[1471]: time="2024-02-13T07:37:44.139588449Z" level=info msg="CreateContainer within sandbox \"67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"15fae6a44f2d0df94353a23ff950af2e2391ca9c0815a05e5eb4b19d971fb7ad\"" Feb 13 07:37:44.139898 env[1471]: time="2024-02-13T07:37:44.139847547Z" level=info msg="StartContainer for \"15fae6a44f2d0df94353a23ff950af2e2391ca9c0815a05e5eb4b19d971fb7ad\"" Feb 13 07:37:44.140958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount374941631.mount: Deactivated successfully. Feb 13 07:37:44.149660 systemd[1]: Started cri-containerd-15fae6a44f2d0df94353a23ff950af2e2391ca9c0815a05e5eb4b19d971fb7ad.scope. Feb 13 07:37:44.165531 env[1471]: time="2024-02-13T07:37:44.165465922Z" level=info msg="StartContainer for \"15fae6a44f2d0df94353a23ff950af2e2391ca9c0815a05e5eb4b19d971fb7ad\" returns successfully" Feb 13 07:37:44.166316 systemd[1]: cri-containerd-15fae6a44f2d0df94353a23ff950af2e2391ca9c0815a05e5eb4b19d971fb7ad.scope: Deactivated successfully. Feb 13 07:37:44.199192 env[1471]: time="2024-02-13T07:37:44.199095841Z" level=info msg="shim disconnected" id=15fae6a44f2d0df94353a23ff950af2e2391ca9c0815a05e5eb4b19d971fb7ad Feb 13 07:37:44.199192 env[1471]: time="2024-02-13T07:37:44.199163777Z" level=warning msg="cleaning up after shim disconnected" id=15fae6a44f2d0df94353a23ff950af2e2391ca9c0815a05e5eb4b19d971fb7ad namespace=k8s.io Feb 13 07:37:44.199192 env[1471]: time="2024-02-13T07:37:44.199181553Z" level=info msg="cleaning up dead shim" Feb 13 07:37:44.209628 env[1471]: time="2024-02-13T07:37:44.209582911Z" level=warning msg="cleanup warnings time=\"2024-02-13T07:37:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4095 runtime=io.containerd.runc.v2\n" Feb 13 07:37:44.750874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15fae6a44f2d0df94353a23ff950af2e2391ca9c0815a05e5eb4b19d971fb7ad-rootfs.mount: Deactivated successfully. 
Feb 13 07:37:44.820183 kubelet[1847]: E0213 07:37:44.820123 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:44.858891 kubelet[1847]: E0213 07:37:44.858815 1847 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 07:37:45.122990 env[1471]: time="2024-02-13T07:37:45.122834821Z" level=info msg="CreateContainer within sandbox \"67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 07:37:45.141075 env[1471]: time="2024-02-13T07:37:45.140952949Z" level=info msg="CreateContainer within sandbox \"67e87f08b195220ff959cd6d1c9cc515dc88a36c55196a2a2ec3a0fcca21baf1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5c1c5746d7b93c4ce15cfe604e2e89f21fe4088b2143f7554b9b7c9e5fc49c04\"" Feb 13 07:37:45.141996 env[1471]: time="2024-02-13T07:37:45.141912172Z" level=info msg="StartContainer for \"5c1c5746d7b93c4ce15cfe604e2e89f21fe4088b2143f7554b9b7c9e5fc49c04\"" Feb 13 07:37:45.182544 systemd[1]: Started cri-containerd-5c1c5746d7b93c4ce15cfe604e2e89f21fe4088b2143f7554b9b7c9e5fc49c04.scope. Feb 13 07:37:45.220746 env[1471]: time="2024-02-13T07:37:45.220682481Z" level=info msg="StartContainer for \"5c1c5746d7b93c4ce15cfe604e2e89f21fe4088b2143f7554b9b7c9e5fc49c04\" returns successfully" Feb 13 07:37:45.395445 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 07:37:45.820514 kubelet[1847]: E0213 07:37:45.820416 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:46.102583 kubelet[1847]: W0213 07:37:46.102392 1847 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1822dd0_5321_4bf7_90ca_21da68ef04cc.slice/cri-containerd-d1393f87a07bf03845d9aa3999f744359d061d0eabb2198d0c9a457339fa1ffb.scope WatchSource:0}: task d1393f87a07bf03845d9aa3999f744359d061d0eabb2198d0c9a457339fa1ffb not found: not found Feb 13 07:37:46.161380 kubelet[1847]: I0213 07:37:46.161279 1847 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-pgkkb" podStartSLOduration=5.161186158 podCreationTimestamp="2024-02-13 07:37:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-13 07:37:46.160830799 +0000 UTC m=+66.699435025" watchObservedRunningTime="2024-02-13 07:37:46.161186158 +0000 UTC m=+66.699790387" Feb 13 07:37:46.821316 kubelet[1847]: E0213 07:37:46.821268 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:47.822521 kubelet[1847]: E0213 07:37:47.822405 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:48.223334 systemd-networkd[1320]: lxc_health: Link UP Feb 13 07:37:48.244136 systemd-networkd[1320]: lxc_health: Gained carrier Feb 13 07:37:48.244439 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 13 07:37:48.823442 kubelet[1847]: E0213 07:37:48.823392 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:49.216231 
kubelet[1847]: W0213 07:37:49.216119 1847 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1822dd0_5321_4bf7_90ca_21da68ef04cc.slice/cri-containerd-6c92fb99efd7a6b6628e7fa01041598afb8e074f5004df8ac8081ab5d999547b.scope WatchSource:0}: task 6c92fb99efd7a6b6628e7fa01041598afb8e074f5004df8ac8081ab5d999547b not found: not found Feb 13 07:37:49.629544 systemd-networkd[1320]: lxc_health: Gained IPv6LL Feb 13 07:37:49.824164 kubelet[1847]: E0213 07:37:49.824117 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:50.824879 kubelet[1847]: E0213 07:37:50.824756 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:51.826029 kubelet[1847]: E0213 07:37:51.825840 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:52.322826 kubelet[1847]: W0213 07:37:52.322771 1847 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1822dd0_5321_4bf7_90ca_21da68ef04cc.slice/cri-containerd-fecaaaa0cbfd662530becacb7a5b3421c8bfb788e0761658c376f1b535f3f749.scope WatchSource:0}: task fecaaaa0cbfd662530becacb7a5b3421c8bfb788e0761658c376f1b535f3f749 not found: not found Feb 13 07:37:52.826740 kubelet[1847]: E0213 07:37:52.826615 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:53.827098 kubelet[1847]: E0213 07:37:53.826985 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:54.827946 kubelet[1847]: E0213 07:37:54.827863 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 07:37:55.431283 kubelet[1847]: W0213 07:37:55.431173 1847 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1822dd0_5321_4bf7_90ca_21da68ef04cc.slice/cri-containerd-15fae6a44f2d0df94353a23ff950af2e2391ca9c0815a05e5eb4b19d971fb7ad.scope WatchSource:0}: task 15fae6a44f2d0df94353a23ff950af2e2391ca9c0815a05e5eb4b19d971fb7ad not found: not found Feb 13 07:37:55.829064 kubelet[1847]: E0213 07:37:55.828938 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"